[GH-ISSUE #7547] Response returns 'null' for 'finish_reason' #66858

Open
opened 2026-05-04 08:27:21 -05:00 by GiteaMirror · 12 comments

Originally created by @debruyckere on GitHub (Nov 7, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7547

Originally assigned to: @ParthSareen on GitHub.

What is the issue?

I'm using the OpenAI .NET library to connect to Ollama with the default llama3.2 model. I get an "Unknown ChatFinishReason value." error from the library. You can see in the code below from ChatFinishReasonExtensions (in the OpenAI library) that the value returned by Ollama is null.

[image: https://github.com/user-attachments/assets/43a50819-4cc8-407c-a64f-85f4646cc21d]

The finish reason should apparently never be null. Note that this only happens for requests that time out; in normal use the value 'stop' is returned, which is parsed correctly.
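
As a side note on the client-side handling discussed in this thread, below is a minimal, hypothetical sketch in Go (not the OpenAI .NET library's actual code; the struct and function names are made up) of a parser that treats a null or missing finish_reason as an incomplete response instead of rejecting it as an unknown enum value:

package main

import (
	"encoding/json"
	"fmt"
)

type choice struct {
	Index        int     `json:"index"`
	FinishReason *string `json:"finish_reason"` // pointer keeps JSON null distinguishable from "stop"
}

type completion struct {
	Choices []choice `json:"choices"`
}

// finishReason maps a null or missing finish_reason to "incomplete" instead of
// failing with an "unknown value" error.
func finishReason(c choice) string {
	if c.FinishReason == nil {
		return "incomplete"
	}
	return *c.FinishReason
}

func main() {
	raw := `{"choices":[{"index":0,"finish_reason":null}]}`
	var resp completion
	if err := json.Unmarshal([]byte(raw), &resp); err != nil {
		panic(err)
	}
	fmt.Println(finishReason(resp.Choices[0])) // prints "incomplete"
}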

OS

Windows

GPU

Intel

CPU

Intel

Ollama version

ollama version is 0.3.14

GiteaMirror added the bug, api labels 2026-05-04 08:27:22 -05:00

@rick-github commented on GitHub (Nov 7, 2024):

Why is ToChatFinishReason being called with an incomplete completion?


@debruyckere commented on GitHub (Nov 13, 2024):

No idea. That code is part of the OpenAI library.


@ParthSareen commented on GitHub (Nov 13, 2024):

Hey @debruyckere, thanks for bringing this up. A couple of comments:

  1. Is this happening only with longer-running tasks, is it flaky, or does it happen in every case?
  2. I'd recommend bumping Ollama to one of our newer releases - it may or may not help, but it will definitely make it easier to narrow down the surface area :)

@debruyckere commented on GitHub (Nov 14, 2024):

  1. It happens for tasks that time out (see ticket description)
  2. Same behavior with latest 0.4.1

@rick-github commented on GitHub (Nov 14, 2024):

If a request times out, ollama cannot supply a finish reason. The response is incomplete. The library should not call ToChatFinishReason with an incomplete response. It's a bug in the library.


@ParthSareen commented on GitHub (Nov 14, 2024):

Hi @debruyckere,

@rick-github is correct in this case - it's the library's responsibility to handle issues that occur on the server side, including timeouts. It should handle the case where a connection is terminated.


@debruyckere commented on GitHub (Nov 17, 2024):

> If a request times out, ollama cannot supply a finish reason. The response is incomplete.

But it does, and that is the issue. Instead of just not providing a finish reason, it explicitly provides one, but with an invalid value 'null'.
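
For what it's worth, an explicit "finish_reason": null is what you get when a pointer-typed field in a Go response struct is never assigned. A minimal sketch (not Ollama's actual response code; the struct here just mirrors the field names in the payload further below):

package main

import (
	"encoding/json"
	"fmt"
)

type choice struct {
	Index        int     `json:"index"`
	FinishReason *string `json:"finish_reason"` // no omitempty, so a nil pointer marshals to an explicit null
}

func main() {
	// Aborted generation: no finish reason was ever assigned, so the field stays nil.
	b, _ := json.Marshal(choice{Index: 0})
	fmt.Println(string(b)) // {"index":0,"finish_reason":null}
}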


@rick-github commented on GitHub (Nov 17, 2024):

How does ollama send "finish_reason":"null" when the TCP connection has been terminated because of timeout?


@debruyckere commented on GitHub (Nov 18, 2024):

I'm not talking about a TCP timeout, just a general timeout: a somewhat longer input causes it to take longer than usual, and then it fails with the above error. Please find the full responses below for both a good and a bad request. Notice how the finish_reason property differs, as well as the token counts.

Bad response:

{
    "id": "chatcmpl-537",
    "object": "chat.completion",
    "created": 1731942481,
    "model": "llama3.2:latest",
    "system_fingerprint": "fp_ollama",
    "choices": [{
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "{\"nl\": [\"{{Error Text Instructions}}\",\"{{Text Instructions}}\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\""
            },
            "finish_reason": null
        }
    ],
    "usage": {
        "prompt_tokens": 0,
        "completion_tokens": 0,
        "total_tokens": 0
    }
}

Good response:

{
    "id": "chatcmpl-316",
    "object": "chat.completion",
    "created": 1731942775,
    "model": "llama3.2:latest",
    "system_fingerprint": "fp_ollama",
    "choices": [{
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "{\"nl\": [\"Maak het snel\"]}\n\n"
            },
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 170,
        "completion_tokens": 11,
        "total_tokens": 181
    }
}

Stack trace of how it fails in the OpenAI library:

System.ArgumentOutOfRangeException: Unknown ChatFinishReason value.
Parameter name: value
   at OpenAI.Chat.ChatFinishReasonExtensions.ToChatFinishReason(String value)
   at OpenAI.Chat.InternalCreateChatCompletionResponseChoice.DeserializeInternalCreateChatCompletionResponseChoice(JsonElement element, ModelReaderWriterOptions options)
   at OpenAI.Chat.ChatCompletion.DeserializeChatCompletion(JsonElement element, ModelReaderWriterOptions options)
   at OpenAI.Chat.ChatCompletion.FromResponse(PipelineResponse response)
   at OpenAI.Chat.ChatClient.<CompleteChatAsync>d__8.MoveNext()

@rick-github commented on GitHub (Nov 18, 2024):

OK, this is not a timeout; it's the generation going off the rails and getting terminated because it exceeded the maximum token repeat limit: https://github.com/ollama/ollama/blob/a14f76491d694b2f5a0dec6473514b7f93beeea0/llm/server.go#L809

This is generally the result of a mismatch (see https://github.com/ollama/ollama/blob/main/docs/api.md#json-mode) between the prompt and the response_format:

$ curl -s localhost:11434/v1/chat/completions -d '{"model":"llama3.2:3b","messages":[{"role":"user","content":"why is the sky blue?"}],"stream":false,"response_format":{"type":"json_object"},"seed":0,"temperature":0}' | jq
{
  "id": "chatcmpl-951",
  "object": "chat.completion",
  "created": 1731945862,
  "model": "llama3.2:3b",
  "system_fingerprint": "fp_ollama",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "{}\n \n  \n\n\n\n\n\n \n\n \n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n"
      },
      "finish_reason": null
    }
  ],
  "usage": {
    "prompt_tokens": 0,
    "completion_tokens": 0,
    "total_tokens": 0
  }
}

The code returns ctx.Err(), which is nil here because the context has not actually been cancelled, so the incomplete generation is returned as if it were a normal completion. It should probably return an actual error instead, since the partial result is likely to be discarded anyway.

diff --git a/llm/server.go b/llm/server.go
index 96815826..4c0bb049 100644
--- a/llm/server.go
+++ b/llm/server.go
@@ -809,7 +809,7 @@ func (s *llmServer) Completion(ctx context.Context, req CompletionRequest, fn fu
                        // 30 picked as an arbitrary max token repeat limit, modify as needed
                        if tokenRepeat > 30 {
                                slog.Debug("prediction aborted, token repeat limit reached")
-                               return ctx.Err()
+                               return fmt.Errorf("prediction aborted, token repeat limit reached")
                        }

                        if c.Content != "" {
$ curl -D /dev/stderr -s localhost:11434/v1/chat/completions -d '{"model":"llama3.2:3b","messages":[{"role":"user","content":"why is the sky blue?"}],"stream":false,"response_format":{"type":"json_object"},"seed":0,"temperature":0}' | jq
HTTP/1.1 500 Internal Server Error
Content-Type: application/json
Date: Mon, 18 Nov 2024 16:16:58 GMT
Content-Length: 115

{
  "error": {
    "message": "prediction aborted, token repeat limit reached",
    "type": "api_error",
    "param": null,
    "code": null
  }
}
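
To make the ctx.Err() behavior concrete: context.Context.Err() returns nil unless the context has been cancelled or has passed its deadline, so returning it from the abort branch reports success to the caller. A standalone sketch (not llm/server.go itself, just an illustration of the same pattern):

package main

import (
	"context"
	"fmt"
)

// generate mimics the abort branch: the repeat limit is exceeded while the
// request context is still live, so ctx.Err() is nil and the caller sees success.
func generate(ctx context.Context) error {
	tokenRepeat := 31 // pretend the repeat limit was just exceeded
	if tokenRepeat > 30 {
		return ctx.Err() // nil unless the context was cancelled or timed out
	}
	return nil
}

func main() {
	err := generate(context.Background())
	fmt.Println("error from aborted generation:", err) // prints: error from aborted generation: <nil>
}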

@rick-github commented on GitHub (Nov 18, 2024):

The OpenAI API recognizes 500s as an internal error (https://platform.openai.com/docs/guides/error-codes#api-errors), and the Python library handles it, so I think this is an appropriate way to deal with the failure.

import openai
from openai import OpenAI
client = OpenAI(base_url="http://localhost:11434/v1",api_key="ollama")

response=""
try:
  #Make your OpenAI API request here
  response = client.chat.completions.create(
    messages=[{"role":"user","content":"why is the sky blue?"}],
    model="llama3.2:3b",
    response_format={"type":"json_object"},
    seed=0,
    temperature=0,
    stream=False
  )
except openai.InternalServerError as e:
  print(f"OpenAI API returned an Internal Server Error: {e}")
  pass
except Exception as e:
  print(f"OpenAI API returned an API Error: {e}")
  pass
print(response)
$ python 7547.py
OpenAI API returned an Internal Server Error: Error code: 500 - {'error': {'message': 'prediction aborted, token repeat limit reached', 'type': 'api_error', 'param': None, 'code': None}}

@ParthSareen commented on GitHub (Nov 18, 2024):

Hey @rick-github thanks for keeping up-to-date with this issue! Will spin out some better error messaging for this.


Reference: github-starred/ollama#66858