[GH-ISSUE #7790] Using C# and tooling, the tools are not consistently invoked by Ollama, resulting in confusing results and responses #4977

Closed
opened 2026-04-12 16:02:15 -05:00 by GiteaMirror · 7 comments
Owner

Originally created by @jan-johansson-mr on GitHub (Nov 22, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7790

What is the issue?

I've written a simple C# console application that has tools managing a set of slots. The scheme is simple: the set has 10 slots available, and slots can be allocated and released.

The tools are:

  • CountAvailableSlots
  • CountAllocatedSlots
  • Capacity (always 10, the count includes both allocated and released slots)
  • AllocateSlots
  • ReleaseSlots

Here is the C# class managing the set of slots:

internal class SlotSetItem
{
    private readonly int capacity = 10;
    private int slots = 10;

    [Description("This tool counts the number of total available slots.")]
    public async Task<string> CountAvailableSlots()
    {
        await Task.Yield();
        var color = Console.ForegroundColor;
        Console.ForegroundColor = ConsoleColor.Yellow;
        Console.WriteLine($"CountAvailableSlots: Returning the number of available slots {slots}");
        Console.ForegroundColor = color;
        return $"The number of available slots to allocate is {slots}";
    }

    [Description("This tool counts the number of total allocated slots.")]
    public async Task<string> CountAllocatedSlots()
    {
        await Task.Yield();
        var color = Console.ForegroundColor;
        Console.ForegroundColor = ConsoleColor.Yellow;
        Console.WriteLine($"CountAllocatedSlots: Returning the number of allocated slots {capacity - slots}");
        Console.ForegroundColor = color;
        return $"The number of allocated slots is {capacity - slots}";
    }

    [Description("This tool returns the capacity of slots.")]
    public async Task<string> Capacity()
    {
        await Task.Yield();
        var color = Console.ForegroundColor;
        Console.ForegroundColor = ConsoleColor.Yellow;
        Console.WriteLine($"Capacity: Returning capacity");
        Console.ForegroundColor = color;
        return $"The total capacity of slots, both allocated and released, are {capacity}";
    }

    [Description("This tool allocates slots.")]
    public async Task<string> AllocateSlots([Description("The number of slots to allocate")] string numberOfSlotsPrompt)
    {
        await Task.Yield();
        var color = Console.ForegroundColor;
        Console.ForegroundColor = ConsoleColor.Yellow;
        Console.WriteLine($"""AllocateSlots: Allocating "{numberOfSlotsPrompt}" slots""");
        var oldNumberOfSlots = slots;
        var numberOfSlots = int.Parse(numberOfSlotsPrompt);
        slots = int.Max(0, slots - numberOfSlots);
        Console.WriteLine($"The number of allocated slots are {capacity-slots}, leaving {slots} slots to be allocated in the future");
        Console.ForegroundColor = color;
        return $"The number of allocated slots are {capacity-slots}, leaving {slots} slots to be allocated in the future";
    }

    [Description("This tool releases slots.")]
    public async Task<string> ReleaseSlots([Description("The number of slots to release")] string numberOfSlotsPrompt)
    {
        await Task.Yield();
        var color = Console.ForegroundColor;
        Console.ForegroundColor = ConsoleColor.Yellow;
        Console.WriteLine($"""ReleaseSlots: Releasing "{numberOfSlotsPrompt}" slots""");
        var numberOfSlots = int.Parse(numberOfSlotsPrompt);
        slots = int.Min(10, slots + numberOfSlots);
        Console.WriteLine($"The number of allocated slots are {capacity-slots}, leaving {slots} slots to be allocated in the future");
        Console.ForegroundColor = color;
        return $"The number of allocated slots are {capacity-slots}, leaving {slots} slots to be allocated in the future";
    }
}

The code has been modified a lot because of problems with Ollama's calling consistency: for example, I changed integer parameters and integer return values to strings. Behavior improved when the tools returned strings describing the result instead of bare integers (with integer returns I saw random releases and similar misbehavior).
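Since the arguments now arrive as strings, a defensive parse avoids `int.Parse` throwing when the model supplies a malformed argument. A minimal sketch; `SlotArguments.ParseCount` is a hypothetical helper name, not part of the original code:

```csharp
using System;

internal static class SlotArguments
{
    // Hypothetical helper: models sometimes emit "5 slots" or "five" as an
    // argument. int.TryParse rejects such input without throwing, so the tool
    // can return a fallback (or a readable error message) instead of crashing.
    public static int ParseCount(string numberOfSlotsPrompt, int fallback = 0)
    {
        return int.TryParse(numberOfSlotsPrompt, out var n) && n >= 0
            ? n
            : fallback;
    }
}
```

Each tool body could then use `ParseCount(numberOfSlotsPrompt)` in place of the bare `int.Parse` call.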

Ollama does not invoke the tools consistently. The engine reports that it invokes a tool, but the call never happens (each tool prints a message when it is actually invoked).

Here is a typical output when the call chain works, following my instructions:

Note: The number of allocated slots was 7 before I instructed Ollama to allocate 5 slots (and no more than 10 can be allocated)

Allocate 5 slots and then release 3 slots
AllocateSlots: Allocating "5" slots
The number of allocated slots are 10, leaving 0 slots to be allocated in the future
ReleaseSlots: Releasing "3" slots
The number of allocated slots are 7, leaving 3 slots to be allocated in the future

And here is the output when the call chain doesn't work as expected:

Note: The number of allocated slots was 7 before I instructed Ollama to allocate 5 slots (and no more than 10 can be allocated)

Allocate 5 slots and then release 3 slots
AllocateSlots: Allocating "5" slots
The number of allocated slots are 10, leaving 0 slots to be allocated in the future
{"call_id":"4a90b1d3","name":"ReleaseSlots","arguments":{"numberOfSlotsPrompt":"3"}}

As you can see in the second output, Ollama starts out correctly by allocating slots and then intends to release slots, but the release never happens: the ReleaseSlots tool is never invoked.

This happens a lot.

I've no idea of why the invocation doesn't happen.
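When the tool-call JSON leaks into the response text like this, it can sometimes be rescued at the application level. A minimal sketch, assuming the leaked text is exactly a JSON object of the shape shown above; `LeakedToolCall` is a hypothetical helper name:

```csharp
using System.Text.Json;

internal static class LeakedToolCall
{
    // Hypothetical recovery helper: if the assistant's text is itself a JSON
    // object with "name" and "arguments" fields, treat it as a leaked tool
    // call so the app can dispatch it manually instead of showing raw JSON.
    public static bool TryParse(string responseText, out string name, out string argument)
    {
        name = ""; argument = "";
        try
        {
            using var doc = JsonDocument.Parse(responseText.Trim());
            if (doc.RootElement.ValueKind != JsonValueKind.Object) return false;
            if (!doc.RootElement.TryGetProperty("name", out var n)) return false;
            name = n.GetString() ?? "";
            if (doc.RootElement.TryGetProperty("arguments", out var args)
                && args.ValueKind == JsonValueKind.Object)
            {
                // Grab the first argument value; the tools here take one string.
                foreach (var p in args.EnumerateObject())
                {
                    argument = p.Value.GetString() ?? "";
                    break;
                }
            }
            return name.Length > 0;
        }
        catch (JsonException)
        {
            return false;  // ordinary prose is not JSON; nothing to rescue
        }
    }
}
```

Plain-text replies fail `JsonDocument.Parse` and fall through unchanged, so only responses that already look like a tool call are intercepted.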

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.4.3

GiteaMirror added the bug label 2026-04-12 16:02:15 -05:00
Author
Owner

@rick-github commented on GitHub (Nov 22, 2024):

The model didn't return a tool call that was recognized by your tool call interpreting framework. What framework? What model?

Author
Owner

@jan-johansson-mr commented on GitHub (Nov 22, 2024):

I am using .NET 9.0.100, and the model llama3.2

Author
Owner

@rick-github commented on GitHub (Nov 22, 2024):

What framework or library is doing the work of sending requests to ollama and interpreting the results?

Author
Owner

@jan-johansson-mr commented on GitHub (Nov 22, 2024):

I'm using the following dependencies

  <ItemGroup>
    <PackageReference Include="Microsoft.Extensions.Hosting" Version="9.0.0" />
    <PackageReference Include="Microsoft.Extensions.AI" Version="9.0.0-preview.9.24556.5" />
    <PackageReference Include="Microsoft.Extensions.AI.Ollama" Version="9.0.0-preview.9.24556.5" />
  </ItemGroup>

And here is the whole application, for completeness

using Microsoft.Extensions.AI;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.DependencyInjection;
using System.ComponentModel;
using Microsoft.Extensions.Logging;

internal class SlotSetItem
{
    private readonly int capacity = 10;
    private int slots = 10;

    [Description("This tool counts the number of total available slots.")]
    public async Task<string> CountAvailableSlots()
    {
        await Task.Yield();
        var color = Console.ForegroundColor;
        Console.ForegroundColor = ConsoleColor.Yellow;
        Console.WriteLine($"CountAvailableSlots: Returning the number of available slots {slots}");
        Console.ForegroundColor = color;
        return $"The number of available slots to allocate is {slots}";
    }

    [Description("This tool counts the number of total allocated slots.")]
    public async Task<string> CountAllocatedSlots()
    {
        await Task.Yield();
        var color = Console.ForegroundColor;
        Console.ForegroundColor = ConsoleColor.Yellow;
        Console.WriteLine($"CountAllocatedSlots: Returning the number of allocated slots {capacity - slots}");
        Console.ForegroundColor = color;
        return $"The number of allocated slots is {capacity - slots}";
    }

    [Description("This tool returns the capacity of slots.")]
    public async Task<string> Capacity()
    {
        await Task.Yield();
        var color = Console.ForegroundColor;
        Console.ForegroundColor = ConsoleColor.Yellow;
        Console.WriteLine($"Capacity: Returning capacity");
        Console.ForegroundColor = color;
        return $"The total capacity of slots, both allocated and released, are {capacity}";
    }

    [Description("This tool allocates slots.")]
    public async Task<string> AllocateSlots([Description("The number of slots to allocate")] string numberOfSlotsPrompt)
    {
        await Task.Yield();
        var color = Console.ForegroundColor;
        Console.ForegroundColor = ConsoleColor.Yellow;
        Console.WriteLine($"""AllocateSlots: Allocating "{numberOfSlotsPrompt}" slots""");
        var oldNumberOfSlots = slots;
        var numberOfSlots = int.Parse(numberOfSlotsPrompt);
        slots = int.Max(0, slots - numberOfSlots);
        Console.WriteLine($"The number of allocated slots are {capacity-slots}, leaving {slots} slots to be allocated in the future");
        Console.ForegroundColor = color;
        return $"The number of allocated slots are {capacity-slots}, leaving {slots} slots to be allocated in the future";
    }

    [Description("This tool releases slots.")]
    public async Task<string> ReleaseSlots([Description("The number of slots to release")] string numberOfSlotsPrompt)
    {
        await Task.Yield();
        var color = Console.ForegroundColor;
        Console.ForegroundColor = ConsoleColor.Yellow;
        Console.WriteLine($"""ReleaseSlots: Releasing "{numberOfSlotsPrompt}" slots""");
        var numberOfSlots = int.Parse(numberOfSlotsPrompt);
        slots = int.Min(10, slots + numberOfSlots);
        Console.WriteLine($"The number of allocated slots are {capacity-slots}, leaving {slots} slots to be allocated in the future");
        Console.ForegroundColor = color;
        return $"The number of allocated slots are {capacity-slots}, leaving {slots} slots to be allocated in the future";
    }
}

internal class Program
{
    private static async Task Main(string[] args)
    {
        HostBuilder hostBuilder = new();

        IChatClient innerChatClient = new OllamaChatClient("http://localhost:11434", "llama3.2");

        hostBuilder.ConfigureServices(services =>
        {
            services.AddLogging(builder => builder.AddConsole().SetMinimumLevel(LogLevel.Information));
            services.AddChatClient(builder => builder
                .UseFunctionInvocation()
                .UseLogging()
                .Use(innerChatClient));
        });

        using var app = hostBuilder.Build();

        List<ChatMessage> chatMessages = [
            new(ChatRole.System,
                """
                You manage a set of slots.
                You release, or free, slots using the tool ReleaseSlots.
                You allocate, or reserve, slots using the tool AllocateSlots.
                You count the available slots allocatable using the tool CountAvailableSlots.
                You count the allocated slots releasable using the tool CountAllocatedSlots.
                You find the total capacity of slots, allocated and released, using the tool Capacity.
                You make no assumptions about the set of slots.
                Do not modify the set of slots, without being instructed to do so.
                You do not verify the number of slots allocated or released.
                Before allocating slots, how many slots are currently allocated?
                Before releasing slots, how many slots are currently allocated?
                You answer any questions.
                Please double-check my allocation slots before confirming.
                """
                ),
        ];

        SlotSetItem _slotsSet = new();

        AIFunction freeSlotsTool = AIFunctionFactory.Create(_slotsSet.ReleaseSlots);
        AIFunction countAvailableSlotsTool = AIFunctionFactory.Create(_slotsSet.CountAvailableSlots);
        AIFunction countAllocatedSlotsTool = AIFunctionFactory.Create(_slotsSet.CountAllocatedSlots);
        AIFunction allocateSlotsTool = AIFunctionFactory.Create(_slotsSet.AllocateSlots);
        AIFunction capacitySlotsTool = AIFunctionFactory.Create(_slotsSet.Capacity);

        var chatOptions = new ChatOptions { Tools = [allocateSlotsTool, freeSlotsTool, countAvailableSlotsTool, countAllocatedSlotsTool, capacitySlotsTool] };

        var client = app.Services.GetRequiredService<IChatClient>();

        Console.WriteLine("--- Ready for interaction ---");

        var line = Console.ReadLine();

        if (string.IsNullOrWhiteSpace(line))
        {
            return;
        }

        for (; !string.IsNullOrWhiteSpace(line); line = Console.ReadLine())
        {
            ChatMessage userMessage = new(ChatRole.User, line);
            chatMessages.Add(userMessage);

            var response = await client.CompleteAsync(chatMessages, chatOptions);

            if (response is not null)
            {
                Console.Write(response);
            }
            else
            {
                Console.Write("Hello, World!");
            }

            chatMessages.Remove(userMessage);

            Console.WriteLine();
            Console.WriteLine("---");
        }
    }
}
Author
Owner

@rick-github commented on GitHub (Nov 22, 2024):

I don't know if you will be able to resolve your problem here.

Just for background, skip the following if you know how tool calls work:

The way that tool handling works is that you provide a set of tools to your client library, which converts them to a JSON description. That description, along with the system message that encourages tool use and the user prompt, is sent to the ollama API endpoint. ollama converts the system message, tool list, user prompt, and other context into a long string which is interpreted by the model. If the model chooses to use a tool, it's supposed to emit a string in a known format. The ollama server recognizes that format, converts it into a tool_call JSON structure, and returns it to the client.

The client processes the result from the model, tool calls are invoked, and the results of those are packaged into further messages to ollama. This continues until there are no more tool calls, and the client library returns the final result to the calling app.

The summary is that if the model doesn't properly format the response for a tool call, it will be returned to the calling app as a response rather than be processed as a tool call by the client library. This is why your app gets `{"call_id":"4a90b1d3","name":"ReleaseSlots","arguments":{"numberOfSlotsPrompt":"3"}}` instead of `The number of allocated slots are 7, leaving 3 slots to be allocated in the future`.
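The tool_call structure described above can be sketched as C# records. This is a rough model based on the shape in Ollama's API documentation (`message.tool_calls[].function` with `name` and `arguments` fields); treat it as an approximation, not an official client type:

```csharp
using System.Collections.Generic;
using System.Text.Json;
using System.Text.Json.Serialization;

// Approximate model of one entry in message.tool_calls from /api/chat.
public sealed record ToolCallFunction(
    [property: JsonPropertyName("name")] string Name,
    [property: JsonPropertyName("arguments")] Dictionary<string, JsonElement> Arguments);

public sealed record ToolCall(
    [property: JsonPropertyName("function")] ToolCallFunction Function);
```

A client library deserializes entries of this shape and dispatches each named function with its arguments; when the model instead emits the call as plain text, there is nothing for the library to deserialize.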

The upshot is that the model is not returning a recognizable tool call. You can try modifying the system prompt to emphasize that it must use the right format when returning a tool call. You can look at the model template (`ollama show --template llama3.2`) to see what that is. If you have access to the source for the client library, you could modify it to be more generous in detecting tool calls: if the response is a well-defined JSON message with a name that references a tool, then make the tool call anyway. You could add some sanity checking to the response processing in your app and retry failed messages. Or you could try a different model that has a better hit rate at emitting tool calls for your application.

The problem is that there is a lot of interpretation involved with LLMs, and you need failure detection and recovery either in the client library or in the app. And even then it might not be robust enough.
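The sanity-check-and-retry idea can be sketched as below. `ChatRetry.CompleteWithRetryAsync` and the validator predicate are hypothetical names introduced here, not part of Microsoft.Extensions.AI:

```csharp
using System;
using System.Threading.Tasks;

internal static class ChatRetry
{
    // Hypothetical app-level retry: call the chat client, validate the reply
    // with a caller-supplied predicate (e.g. "does not look like leaked
    // tool-call JSON"), and retry a bounded number of times before giving up.
    public static async Task<string?> CompleteWithRetryAsync(
        Func<Task<string>> complete,
        Func<string, bool> looksValid,
        int maxAttempts = 3)
    {
        for (var attempt = 1; attempt <= maxAttempts; attempt++)
        {
            var reply = await complete();
            if (looksValid(reply)) return reply;  // accept the first valid reply
        }
        return null;  // persistent failure: let the caller decide what to surface
    }
}
```

In the app's read-eval loop, the `complete` delegate would wrap the call to the chat client, and a simple validator could reject replies whose trimmed text starts with `{`.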

Author
Owner

@jan-johansson-mr commented on GitHub (Nov 22, 2024):

Thanks for your verbose answer, I appreciate it a lot.

Steve Sanderson has an excellent [presentation on YouTube](https://www.youtube.com/watch?v=qcp6ufe_XYo) where he presents some basics, plus tooling with LLMs. However, he swapped to Azure and `gpt-4o-mini` to showcase tooling. Maybe he did that not only to promote Azure (understandable) but also for the tooling itself.

I'll follow up on your remarks. Again, thanks for your valuable input!

Author
Owner

@jan-johansson-mr commented on GitHub (Nov 23, 2024):

Hi @rick-github,

As a final comment on this issue thread, I tried the [NuGet OllamaSharp client](https://www.nuget.org/packages/OllamaSharp) and it seems to be far more stable than the (preview) client delivered by Microsoft. I've run several tests, and besides the model (llama3.2-3B) behaving somewhat strangely from time to time, the tool-calling misfire has occurred only once, and then in the context of bad input on my part.

I got the idea to try another client from you, when you elaborated on the client's error handling and the rest of the call chain.

Best!

Reference: github-starred/ollama#4977