[GH-ISSUE #7795] Empty output from chat-endpoint / non-empty output for non-chat endpoint #51493

Closed
opened 2026-04-28 20:24:26 -05:00 by GiteaMirror · 17 comments
Owner

Originally created by @Tomas2D on GitHub (Nov 22, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7795

What is the issue?

When I send the attached request body to the chat endpoint, I receive an empty response (the content is an empty string) with done_reason: stop. When I send the exact same request, just wrapped in the appropriate model's template, to the generate (non-chat) endpoint, I receive the correct (non-empty) response.

Chat request

Request

$ curl -X POST -H "Content-Type: application/json" -d @chat_body.json http://127.0.0.1:11434/api/chat

File: chat_body.json (https://github.com/user-attachments/files/17868125/chat_body.json)
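
(The attachment is not inlined in this mirror. Per the diagnosis later in the thread, the key property of the body is that its messages array ends with a non-empty assistant message. A heavily shortened, purely illustrative stand-in, with placeholder contents rather than the actual file, might look like this:)

# Illustrative stand-in for chat_body.json -- the real attachment is far longer
# (about 1257 prompt tokens); only the trailing non-empty assistant turn matters here.
cat > chat_body.json <<'EOF'
{
  "model": "llama3.1",
  "stream": false,
  "messages": [
    {"role": "user", "content": "Tell me a math joke."},
    {"role": "assistant", "content": "Final Answer:"}
  ]
}
EOF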

Response

{
    "model": "llama3.1",
    "created_at": "2024-11-22T09:32:06.13661Z",
    "message": {
        "role": "assistant",
        "content": ""
    },
    "done_reason": "stop",
    "done": true,
    "total_duration": 4868301375,
    "load_duration": 37070667,
    "prompt_eval_count": 1257,
    "prompt_eval_duration": 3522000000,
    "eval_count": 1
}

Non-Chat request

$ curl -X POST -H "Content-Type: application/json" -d @non_chat_body.json http://127.0.0.1:11434/api/generate

File: non_chat_body.json (https://github.com/user-attachments/files/17868126/non_chat_body.json)

Response

{
    "model": "llama3.1",
    "created_at": "2024-11-22T09:37:18.317133Z",
    "response": "Final Answer: Why was the math book sad? Because it had too many problems.",
    "done": true,
    "done_reason": "stop",
    "total_duration": 8587334375,
    "load_duration": 51171791,
    "prompt_eval_count": 1264,
    "prompt_eval_duration": 344000000,
    "eval_count": 18,
    "eval_duration": 1479000000
}

Closing notes

  • There is roughly a 5% chance of getting a non-empty result.

  • Before testing, I ensured I had the latest llama3.1:8b.

  • A colleague had no issue with Ollama version 0.3.12.

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.4.3

GiteaMirror added the bug label 2026-04-28 20:24:26 -05:00

@jezekra1 commented on GitHub (Nov 22, 2024):

I was able to replicate it on version 0.4.3 but not on the older 0.3.14:

❯ curl -X POST -H "Content-Type: application/json" -d @chat_body.json http://127.0.0.1:11434/api/chat
{"model":"llama3.1","created_at":"2024-11-22T10:03:56.852401Z","message":{"role":"assistant","content":""},"done_reason":"stop","done":true,"total_duration":25064769625,"load_duration":8629660791,"prompt_eval_count":1257,"prompt_eval_duration":16058000000,"eval_count":1,"eval_duration":2000000}%

@PetrBulanek commented on GitHub (Nov 22, 2024):

Can confirm, experiencing the same issue with 0.4.3.


@rick-github commented on GitHub (Nov 22, 2024):

The prompt that's passed to the runner is missing the trailing newline that the model uses as a starting point in generating the response. I think it works in most cases because typically the last message in a message structure is a user message, and the template adds the trailing empty assistant message. In this case, the trailing message is a non-blank assistant message, so that messes up the prompt collation for the model. If you add {"role":"assistant","content":""} to chat_body.json, you get the expected result. I tried to find the change between 0.3.14 and 0.4.0 that caused this change in runner prompt generation, but because it also includes the transition to go runners, I haven't located it yet.
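
As a concrete sketch of that workaround (message contents are placeholders, not the real chat_body.json):

# Append an empty assistant turn, as described above, so the rendered prompt
# ends with a fresh assistant header for the model to continue from.
curl -s -X POST -H "Content-Type: application/json" http://127.0.0.1:11434/api/chat -d '{
  "model": "llama3.1",
  "stream": false,
  "messages": [
    {"role": "user",      "content": "Tell me a math joke."},
    {"role": "assistant", "content": "Final Answer:"},
    {"role": "assistant", "content": ""}
  ]
}'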


@Tomas2D commented on GitHub (Nov 22, 2024):

That's a good observation; however, we can't add empty messages in our environment.
It would be great if you or someone else fixed this.


@rick-github commented on GitHub (Nov 22, 2024):

Agreed, working on it.


@Tomas2D commented on GitHub (Nov 24, 2024):

Just letting you know that this issue continues happening in v0.4.4 and v0.4.5


@shuze295 commented on GitHub (Nov 24, 2024):

How can this error be solved? Is the only way to avoid using the chat mode?


@Tomas2D commented on GitHub (Nov 28, 2024):

Any update? @rick-github


@gabe-l-hart commented on GitHub (Dec 4, 2024):

This sounds suspiciously similar to #7656 (solved via #7749). That turned out to be an uninitialized value in the CGO interface that was introduced with 0.4 and the transition to the go-based runner. @jessegross any chance you have a smell test on other places we might want to look for a similar issue that would affect only the chat endpoint and not the generate endpoint?

The part that is confusing to me is that the main overhaul from 0.3 -> 0.4 happened in the runner (https://github.com/ollama/ollama/tree/main/llama/runner), but the chat template is applied in the main server (https://github.com/ollama/ollama/blob/main/server/routes.go#L1460) before sending the completion request to the runner.


@jessegross commented on GitHub (Dec 6, 2024):

I agree with @rick-github's assessment of the source of the problem.

@gabe-l-hart This seems unlikely to be related to the one that you fixed - the behavior of the 0.4.x series is actually pretty much exactly what I would expect to happen. In order for the LLM to respond, it would need the trailing <|start_header_id|>assistant<|end_header_id|>\n\n but templates generally don't insert it when the last message is from the assistant.

I'm somewhat surprised that 0.3.x behaves differently (though I see it does). I suppose it's possible that there is logic hidden somewhere to handle this case. However, to be honest, the main problem is that the prompt is doing something that the template is not expecting, so one of the two of them should change. Using generate with raw mode is bypassing the template engine, which is why that path works.
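
As an illustration of that point via the raw generate path (which bypasses the server-side template; the token layout follows the llama3-instruct format and the message text is a placeholder):

# The prompt ends with the assistant header plus the blank line the model
# continues from; with "raw": true the server applies no template of its own.
curl -s -X POST -H "Content-Type: application/json" http://127.0.0.1:11434/api/generate -d '{
  "model": "llama3.1",
  "raw": true,
  "stream": false,
  "prompt": "<|start_header_id|>user<|end_header_id|>\n\nTell me a math joke.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
}'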


@gabe-l-hart commented on GitHub (Dec 6, 2024):

That makes sense. I haven't totally wrapped my head around the prompt template changes from 0.3 -> 0.4, but I do see that in the library (https://ollama.com/library/llama3.1/blobs/948af2743fc7), the llama3.1 template has the \n\n, but in the repo's template (https://github.com/ollama/ollama/blob/main/template/llama3-instruct.gotmpl), those trailing newlines are missing. Is it possible that there's something confused there? I'll keep looking a bit to see what else could have changed.
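
One quick way to check which template the locally installed model actually carries (as opposed to the repo's template linked above):

# Prints the chat template stored in the local llama3.1 model blob.
ollama show llama3.1 --template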


@gabe-l-hart commented on GitHub (Dec 6, 2024):

Ahh, sorry, I had not looked at the requests themselves and didn't realize that the message sequence has the last message from the assistant. You're spot on that this is "working as expected" in 0.4. I agree with your surprise that this was working in 0.3!


@gabe-l-hart commented on GitHub (Dec 6, 2024):

The one possible change I see between 0.4 and 0.3 is that the loop for formatting images (https://github.com/ollama/ollama/blob/main/server/prompt.go#L129) adds a strings.TrimSpace which gets applied to all messages, regardless of whether they have images present. I wonder if that could somehow be causing the template expansion to fire differently?


@jessegross commented on GitHub (Dec 6, 2024):

@gabe-l-hart You're right! I don't think we should be removing whitespace from prompts and, indeed, that appears to be the problem. I sent out a PR to change this.


@Tomas2D commented on GitHub (Dec 18, 2024):

Great work! I can confirm that the issue has been fixed in v0.5.3.


@pulinagrawal commented on GitHub (Jan 12, 2025):

Empty output from both endpoints is happening in v0.5.4:

{
  "model": "llama3.2",
  "options": {
  },
  "messages": [{"role": "system", "content": "You are a DnD Dungeon Master. Say something in your first message to the user without waiting for their prompt."}
  ],
  "tools": [
      {
        "type": "function",
        "function": {
          "name": "roll_for_action",
          "description": "Checks a dice roll for a skill with a given difficulty class",
          "parameters": {
              "type": "object",
              "properties": {"n_dice": {"type": "integer"},
                             "sides": {"type": "integer"},
                             "skill": {"type": "string"},
                             "dc": {"type": "integer"},
                             "player": {"type": "string"}
                            }
          }
        }
      }]
}

Chat

curl -X POST -H "Content-Type: application/json" -d @lab05/lab05_dice_template.json http://127.0.0.1:11434/api/chat                                                     ─╯
{"model":"llama3.2","created_at":"2025-01-12T07:04:13.092381Z","message":{"role":"assistant","content":""},"done_reason":"stop","done":true,"total_duration":1381458541,"load_duration":20188791,"prompt_eval_count":71,"prompt_eval_duration":1355000000,"eval_count":1,"eval_duration":3000000}

Non-Chat

curl -X POST -H "Content-Type: application/json" -d @lab05/lab05_dice_template.json http://127.0.0.1:11434/api/generate                                                 ─╯
{"model":"llama3.2","created_at":"2025-01-12T07:04:17.786266Z","response":"","done":true,"done_reason":"load"}%  

@pulinagrawal commented on GitHub (Jan 12, 2025):

I was able to fix this by editing the Modelfile. #8392
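
A sketch of the general Modelfile-editing workflow referred to here (model and file names are placeholders; the specific template change is in the linked issue and is not reproduced):

# Export the current Modelfile, edit its TEMPLATE section, and build a new model from it.
ollama show llama3.2 --modelfile > Modelfile
# ...edit the TEMPLATE section in Modelfile...
ollama create llama3.2-fixed -f Modelfile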

Reference: github-starred/ollama#51493