[GH-ISSUE #11608] Tool calling is less consistent than LMStudio with the same models #7666

Open
opened 2026-04-12 19:45:39 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @kov on GitHub (Jul 31, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11608

What is the issue?

I almost never get tool-trained models to consistently use tools properly. I thought it was a limitation of the models, but running the exact same models in LMStudio gets much better results. Running OpenAI's codex-rs with the newly released Ollama:

user

use your tools to explain this project

codex

To help you understand the project, I'll need to analyze its structure and key components. Here are the steps I'll take:
Identify the Main Directory Structure:
<snip>

```json
{
  "name": "shell",
  "arguments": {
    "command": ["rg", "-files"]
  }
}
```

This command will help me identify the main components and files in the project. Once I have this information, I can provide a more detailed explanation.

It just shows the JSON in the chat rather than making a tool call, as you can see. In LMStudio, with the same model or even smaller models trained for tool use, codex-rs consistently uses the tools, with no JSON shown back to me.
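The failure mode described above (tool-call JSON leaking into the assistant's text instead of arriving as a structured call) can be detected programmatically. A minimal sketch, assuming the documented shape of Ollama's `/api/chat` response, where a recognized tool call is returned under `message.tool_calls` rather than in `message.content` (the payloads below are illustrative, not real server output):

```python
import json

def classify_reply(message: dict) -> str:
    """Classify an Ollama /api/chat `message` object.

    Returns "tool_call" when the model emitted a structured call,
    "leaked_json" when tool-call JSON ended up in the text content,
    and "text" otherwise.
    """
    # Structured path: Ollama places parsed calls under message["tool_calls"].
    if message.get("tool_calls"):
        return "tool_call"

    content = (message.get("content") or "").strip()
    # Heuristic: content that parses as a JSON object with "name" and
    # "arguments" keys is a tool call the runtime failed to recognize.
    try:
        obj = json.loads(content)
    except (ValueError, TypeError):
        return "text"
    if isinstance(obj, dict) and "name" in obj and "arguments" in obj:
        return "leaked_json"
    return "text"

# The symptom from this report: the call arrives as plain content.
leaked = {"role": "assistant",
          "content": '{"name": "shell", "arguments": {"command": ["rg", "--files"]}}'}
# What a working run should look like instead.
proper = {"role": "assistant", "content": "",
          "tool_calls": [{"function": {"name": "shell",
                                       "arguments": {"command": ["rg", "--files"]}}}]}

print(classify_reply(leaked))  # leaked_json
print(classify_reply(proper))  # tool_call
```

A check like this in a client makes it easy to tell whether a given server/model combination is parsing tool calls or passing them through as prose.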

Relevant log output


OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.10.1

GiteaMirror added the bug label 2026-04-12 19:45:39 -05:00
Author
Owner

@technovangelist commented on GitHub (Aug 1, 2025):

Can you share the code you are using to demonstrate this with Ollama?

<!-- gh-comment-id:3145484671 -->
Author
Owner

@ajunca commented on GitHub (Aug 4, 2025):

I can confirm this. For example, using OpenCode (or now Crush) with the same models, LMStudio succeeds in calling the tools correctly and works, while Ollama always fails. The model seems to see the tools, but calling them is broken. I tried a bunch of tool-ready models from the Ollama repository, but without success.

Here you can see the problem with an example:
https://github.com/sst/opencode/issues/729#issuecomment-3079552502

Maybe there is a "hidden" Ollama configuration to make tools work? I could not find anything.
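On the configuration question: none is documented, but whether a particular model tag even advertises tool support can be checked. A hedged sketch, assuming the `capabilities` list that recent Ollama servers include in the `POST /api/show` response (older builds may omit the field, so the sketch falls back to scanning the chat template for the `.Tools` variable that tool-aware templates reference; the payloads below are illustrative, not real server output):

```python
def supports_tools(show_response: dict) -> bool:
    """Return True when an /api/show response advertises tool calling.

    Checks the `capabilities` field first, then falls back to looking
    for the .Tools variable in the model's chat template.
    """
    caps = show_response.get("capabilities") or []
    if "tools" in caps:
        return True
    template = show_response.get("template") or ""
    return ".Tools" in template

# Illustrative /api/show-style payloads.
with_caps = {"capabilities": ["completion", "tools"], "template": "..."}
template_only = {"template": "{{ if .Tools }}...{{ end }}"}
chat_only = {"capabilities": ["completion"], "template": "{{ .Prompt }}"}

print(supports_tools(with_caps))      # True
print(supports_tools(template_only))  # True
print(supports_tools(chat_only))      # False
```

If a model passes this check but clients still receive raw JSON in the content, the problem is more likely in template rendering or call parsing than in missing configuration.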

<!-- gh-comment-id:3150844752 -->
Author
Owner

@kov commented on GitHub (Aug 5, 2025):

I tried a few agents; the one I've been testing with most is https://github.com/openai/codex/tree/main/codex-rs

<!-- gh-comment-id:3152924588 -->
Author
Owner

@kov commented on GitHub (Mar 2, 2026):

An update here. I tested qwen3.5 35B A3B this weekend. It works very well in LMStudio, but completely fails on our company's Ollama. We will try other inference frameworks and I'll report back. A simple "hello" sent by the pi coding agent somehow becomes a failed tool call; it is getting very confused somehow. This is with Ollama 0.17.4. The "The user" below is part of the model response; I just typed "hello".

```
> hello

The user

{"name": "echo", "arguments":{"output": "Hello! I'm an expert coding assistant ready to help with your coding tasks. I can read files, execute commands, make edits, and write new files. How can I assist you today?"}}
```
<!-- gh-comment-id:3984725676 -->

Reference: github-starred/ollama#7666