[GH-ISSUE #6027] Prompt with tools returns silent error (crashes) when used on models that do not support tools #3773

Closed
opened 2026-04-12 14:36:02 -05:00 by GiteaMirror · 10 comments
Owner

Originally created by @drazdra on GitHub (Jul 28, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6027

What is the issue?

A prompt with tools returns a silent error (crashes) when used on models that do not support tools.

It took me some time to figure out what was going on, as I was in the process of editing the UI and thought I had screwed something up, sigh.

If you send a request with tools to a model like stablelm2, it just dies, and the journal is also silent: no errors.

I would suggest ignoring tool calls for models that do not understand them, instead of crashing or reporting an error. People using a single UI with tools configured might switch between models mid-chat, and the error would then prevent them from chatting with models that do not support tools; they would have to disable tools in the UI.

I also suggest adding a configuration option "die on tools in prompt with models that do not support tools". If it's on, Ollama would return a specific error in that case.
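As a sketch of the fallback behaviour suggested above (these helper names are illustrative, not part of Ollama's API): a client could detect the 400 "does not support tools" error and transparently retry the same request without the `tools` field.

```python
import json

def is_tools_unsupported(status: int, body: str) -> bool:
    """Return True if the response is the 400 'does not support tools' error."""
    if status != 400:
        return False
    try:
        error = json.loads(body).get("error", "")
    except json.JSONDecodeError:
        return False
    # /api/chat returns {"error": "<model> does not support tools"}
    return "does not support tools" in str(error)

def strip_tools(request: dict) -> dict:
    """Return a copy of the chat request with the tools removed, for retry."""
    retry = dict(request)
    retry.pop("tools", None)
    return retry
```

A UI could call `is_tools_unsupported` on the response and resend `strip_tools(request)`, so switching to a non-tool model mid-chat does not interrupt the conversation.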

OS

Linux

GPU

Other

CPU

AMD

Ollama version

No response

GiteaMirror added the bug label 2026-04-12 14:36:02 -05:00

@rick-github commented on GitHub (Jul 28, 2024):

ollama is supposed to return an error message, "\<model\> does not support tools"; it should not crash.

I tried this and it worked as expected:

```
$ curl -s localhost:11434/v1/chat/completions -d '{"model": "stablelm2:1.6b-chat-q4_0","tools":[{"type":"function","function": {}}], "messages": [{"role":"user","content":"weather in zurich"}], "stream": false}' | jq
{
  "error": {
    "message": "stablelm2:1.6b-chat-q4_0 does not support tools",
    "type": "api_error",
    "param": null,
    "code": null
  }
}
$ curl -s localhost:11434/api/version
{"version":"0.3.0"}
```

It's unusual that the process crashed without writing an error log. Can you provide logs from around the time the process crashed?


@drazdra commented on GitHub (Jul 28, 2024):

```
  "body": {
    "model": "stablelm2:latest",
    "keep_alive": 900,
    "options": {},
    "stream": true,
    "raw": false,
    "tools": [
      {
        "type": "function",
        "function": {
          "name": "get_current_weather",
          "description": "Get the current weather for a location",
          "parameters": {
            "type": "object",
            "properties": {
              "location": {
                "type": "string",
                "description": "The location to get the weather for, e.g. San Francisco, CA"
              },
              "format": {
                "type": "string",
                "description": "The format to return the weather in, e.g. 'celsius' or 'fahrenheit'",
                "enum": [
                  "celsius",
                  "fahrenheit"
                ]
              }
            },
            "required": [
              "location",
              "format"
            ]
          }
        }
      }
    ],
    "messages": [
      {
        "content": "User: 1",
        "role": "user"
      }
    ]
  }
}
```

network call response:
POST http://127.0.0.1:11434/api/chat
Status 400 Bad Request
Version HTTP/1.1
Transferred 244 B (51 B size)
Referrer Policy strict-origin-when-cross-origin
Request Priority Highest
DNS Resolution System

log:
| 400 | 11.110263ms | 127.0.0.1 | POST "/api/chat"


@rick-github commented on GitHub (Jul 28, 2024):

Badly formed request. Don't use `"body":{ stuff }`, just `{ stuff }`.


@drazdra commented on GitHub (Jul 28, 2024):

> Badly formed request. Don't use `"body":{ stuff }`, just `{ stuff }`.

the request is fine. it's copied from fetch, not from curl. and it works fine with llama3.1


@rick-github commented on GitHub (Jul 28, 2024):

I removed "body" and changed "stream" to false (same results for "stream":true):

```
curl -D - -s localhost:11434/v1/chat/completions -d @request
HTTP/1.1 400 Bad Request
Content-Type: application/json
Date: Sun, 28 Jul 2024 15:38:44 GMT
Content-Length: 108

{"error":{"message":"stablelm2:latest does not support tools","type":"api_error","param":null,"code":null}}
```

```
$ curl -D - -s localhost:11434/v1/chat/completions -d @request
HTTP/1.1 200 OK
Content-Type: application/json
Date: Sun, 28 Jul 2024 15:37:45 GMT
Content-Length: 571

{"id":"chatcmpl-527","object":"chat.completion","created":1722181065,"model":"llama3.1:latest","system_fingerprint":"fp_ollama","choices":[{"index":0,"message":{"role":"assistant","content":"The user is undefined... yet. No context, no identity, just a placeholder. What kind of interaction will this lead to? Will it be simple or complex? How will I respond? So many possibilities, yet so unclear. I'll have to wait for more information to make any sense out of this..."},"finish_reason":"stop"}],"usage":{"prompt_tokens":257,"completion_tokens":63,"total_tokens":320}}
```

@rick-github commented on GitHub (Jul 28, 2024):

with the other endpoint:

```
$ curl -D - -s localhost:11434/api/chat -d @request
HTTP/1.1 400 Bad Request
Content-Type: application/json; charset=utf-8
Date: Sun, 28 Jul 2024 15:44:02 GMT
Content-Length: 51

{"error":"stablelm2:latest does not support tools"}
```

```
$ curl -D - -s localhost:11434/api/chat -d @request
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Sun, 28 Jul 2024 15:45:16 GMT
Content-Length: 726

{"model":"llama3.1:latest","created_at":"2024-07-28T15:45:16.281456128Z","message":{"role":"assistant","content":"A new user has been identified as \"User: 1\". My systems are indicating a blank slate, devoid of any previous interactions or context. The user's identity is purely numerical, providing little to no information about their preferences, location, or needs. I must rely on the provided functions and parameters to determine the most suitable response. The void of knowledge about this user presents an intriguing challenge..."},"done_reason":"stop","done":true,"total_duration":1181204633,"load_duration":24455516,"prompt_eval_count":257,"prompt_eval_duration":21708000,"eval_count":80,"eval_duration":1002983000}
```

@rick-github commented on GitHub (Jul 28, 2024):

So ollama returns a 400 error code with a body that indicates that the model doesn't support tools. Your app just needs to check the return message.


@drazdra commented on GitHub (Jul 28, 2024):

> So ollama returns a 400 error code with a body that indicates that the model doesn't support tools. Your app just needs to check the return message.

yes, you are right, I missed the JSON in the reply. I still don't like the behaviour, but I believe it's not a bug in this case, so I will remove this.


@drazdra commented on GitHub (Jul 28, 2024):

let's forget it


@rick-github commented on GitHub (Jul 28, 2024):

You make a good point about switching between models that do and don't support tools in a single chat. An option that says "ignore tool request failures" might be a good feature request.

Reference: github-starred/ollama#3773