[GH-ISSUE #11384] Deepseek R1-70B forgets to reply #54027

Open
opened 2026-04-29 05:06:57 -05:00 by GiteaMirror · 7 comments

Originally created by @Notbici on GitHub (Jul 11, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11384

What is the issue?

Replication:

  1. ollama pull deepseek-r1:70b
  2. export OLLAMA_FLASH_ATTENTION=1
  3. export OLLAMA_DEBUG=2
  4. export OLLAMA_KV_CACHE_TYPE=q8_0
  5. Have a long enough discussion with the model until it forgets to reply (see the sketch below). I have no scientific way to reproduce it exactly, but it happened often enough that I stopped using the deepseek-r1 distills.
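(For anyone trying to reproduce this outside OpenWebUI, here is a minimal sketch that drives the Ollama REST API directly and flags empty replies. It assumes a local server on the default port 11434 and that `jq` is installed; the prompt is an arbitrary placeholder, and the server-side variables above take effect only if they are set in the environment of `ollama serve`, not just the client shell.)

```shell
#!/bin/sh
# Sketch: send a short turn repeatedly and flag any empty replies.
# Note: each request here is a single-turn chat; a faithful reproduction
# would accumulate the full message history, as OpenWebUI does.
for i in $(seq 1 20); do
  reply=$(curl -s http://localhost:11434/api/chat -d '{
    "model": "deepseek-r1:70b",
    "stream": false,
    "messages": [{"role": "user", "content": "thx"}]
  }' | jq -r '.message.content')
  if [ -z "$reply" ]; then
    echo "turn $i: empty reply"
  fi
done
```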

I've attached a long test dialog and an OpenWebUI export, because the thinking output does not appear in the debug logs even with OLLAMA_DEBUG=2.

If there's a better way to gather this information, let me know.

[OpenWebUI Export chat-export-1752260868558.json](https://github.com/user-attachments/files/21189118/chat-export-1752260868558.json)

[Ollama Logs](https://github.com/user-attachments/files/21188988/replic.txt)

Basically, look around the areas where I said "bye" or "thx"; you'll see where I reacted with "huh". That's when I managed to get it to not reply several times in a row.

<img width="535" height="539" alt="Image" src="https://github.com/user-attachments/assets/4265842c-b7df-46e0-a2be-315bb2bc1f34" />

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

0.9.6

GiteaMirror added the bug label 2026-04-29 05:06:57 -05:00

@Notbici commented on GitHub (Jul 11, 2025):

On a bigger machine, the 671B model has no problems like this, so it could simply be a bad model even though it's 70B. (Qwen 32B is normally fine, but the smaller versions sometimes aren't either.) Still, it's harder to get Qwen to do this; DeepSeek triggers non-replies easily.

It's pretty dumb, and it'll spend a lot of time thinking about things we discussed much earlier, like how many R's, even after the chat has moved on. So maybe it's a templating problem, maybe just the distilled model? Or OpenWebUI too.


@pdevine commented on GitHub (Jul 11, 2025):

What is your context size? It looks like you're hitting it. You can set it globally with `OLLAMA_CONTEXT_LENGTH`, although in the next version there will be a slider in the settings window to let you control it more easily.
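(For reference, a sketch of the two ways to raise the context; `8192` is just an example value:)

```shell
# Globally, for every model the server loads (set before starting the server):
export OLLAMA_CONTEXT_LENGTH=8192
ollama serve

# Or per request, via the num_ctx option on the chat API:
curl -s http://localhost:11434/api/chat -d '{
  "model": "deepseek-r1:70b",
  "stream": false,
  "options": {"num_ctx": 8192},
  "messages": [{"role": "user", "content": "hello"}]
}'
```

Recent versions of `ollama ps` report the effective context in a CONTEXT column, which makes it easy to confirm which value is in force.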


@rick-github commented on GitHub (Jul 12, 2025):

It doesn't seem to be a context overflow.

```
time=2025-07-11T19:06:13.239Z level=DEBUG source=server.go:736 msg="completion request" images=0 prompt=970 format=""
time=2025-07-11T19:06:13.239Z level=TRACE source=server.go:737 msg="completion request" prompt="<|User|>suh..<|Assistant|>Hello! How can I assist you today? 😊<|end▁of▁sentence|><|User|>good question<|Assistant|>\nThank you! I'm curious—what's on your mind? 😊<|end▁of▁sentence|><|User|>food<|Assistant|>\nFood! What are you in the mood for? Let me know if you need recipe ideas, cooking tips, or just want to chat about your favorite dishes! 😊<|end▁of▁sentence|><|User|>strawberries, how many R’s are there<|Assistant|>\nThere are **2 R’s** in the word \"strawberries.\" 😊 Would you like to know more about strawberries?<|end▁of▁sentence|><|User|>thx\n\nyou forgot to reply.. and also that was wrong<|Assistant|>\nApologies if there was any confusion earlier! Let me clarify: the word **\"strawberries\"** has **2 R’s** in it. If you have any other questions or need clarification, feel free to ask! 😊<|end▁of▁sentence|><|User|>thx\n\nhuh interesting, bye<|Assistant|>"
time=2025-07-11T19:06:13.240Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=208 prompt=193 used=186 remaining=7
[GIN] 2025/07/11 - 19:06:14 | 200 |  1.369241163s |  100.91.170.148 | POST     "/api/chat"
```

The context is 970 bytes or 193 tokens. The model must at least be generating a token in order for OWUI to show its `Thinking` spinner, but the completion ends after 1.3 seconds without generating any `content` that could be rendered by OWUI.

The next prompt appends "hello" and the cache shows that it's holding 228 tokens, i.e. the previous completion generated 35 tokens (228 - 193 = 35). There's no indication of a model load, which would happen if the runner had crashed and interrupted the completion. It could perhaps be OWUI.
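(For reference, a quick sketch for following these numbers in a debug log; the log location is an assumption and varies by platform, e.g. `~/.ollama/logs/server.log` on macOS, or `journalctl -u ollama` on Linux:)

```shell
# Pull out the request sizes and cache occupancy to see whether the
# prompt ever approaches the context limit (compare prompt= to cache=/used=).
grep -E 'msg="(completion request|loading cache slot)"' ~/.ollama/logs/server.log
```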

Does this happen if you use the ollama CLI?


@rick-github commented on GitHub (Jul 12, 2025):

Looking at the chat export: after the model responded to the strawberry question at the top of the image, the user sent `thx`:

            "role": "user",
            "content": "thx",
            "timestamp": 1752260762,
            "models": [
              "deepseek-r1:70b"
            ]

The model responded:

            "role": "assistant",
            "content": "<details type=\"reasoning\" done=\"false\">\n<summary>Thinking…</summary>\n> \n> You're welcome! 😊 Let me know if there's anything else I can help with.\n</details>",
            "model": "deepseek-r1:70b",

but it wasn't rendered in the chat; it just showed the `Thinking` spinner. `done="false"` implies that OWUI didn't receive a token to indicate that thinking is finished. Does OWUI use the `think` API in ollama, or does it rely on detecting `<think>`/`</think>` outputs from the model?
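(For context, ollama's `think` parameter on `/api/chat` returns the reasoning in a separate `message.thinking` field rather than as inline `<think>` tags. A sketch of how to see the split directly, assuming a local server and `jq`; if a client instead waits for a `</think>` marker in `content`, a response that is all thinking would look exactly like the hang described above:)

```shell
# Ask for thinking as a separate field and print both parts.
curl -s http://localhost:11434/api/chat -d '{
  "model": "deepseek-r1:70b",
  "stream": false,
  "think": true,
  "messages": [{"role": "user", "content": "thx"}]
}' | jq '{thinking: .message.thinking, content: .message.content}'
```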


@igorschlum commented on GitHub (Jul 19, 2025):

I'm using the Ollama 0.10.0 preview on a Mac Studio with 192GB of RAM, testing the same prompt in both the chat window (which is great) and the terminal. When using DeepSeek R1-70B, it starts processing but then produces no output in the Ollama chat window. In the terminal, it takes a long time to respond but eventually provides an answer. It seems the Ollama chat window has a timeout shorter than the response time of DeepSeek R1-70B.

Context length set to 32K
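(One way to test the timeout hypothesis from the terminal is to timestamp the stream as it arrives; a long silence before the first chunk would support a client-side timeout. A sketch, with the prompt as a placeholder:)

```shell
# -N disables curl's output buffering so chunks are printed as they arrive;
# /api/chat streams by default, one JSON chunk per line.
curl -sN http://localhost:11434/api/chat -d '{
  "model": "deepseek-r1:70b",
  "messages": [{"role": "user", "content": "your long prompt here"}]
}' | while read -r line; do echo "$(date +%T) $line"; done
```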


@pdevine commented on GitHub (Jul 25, 2025):

@igorschlum it should have produced the same results regardless of whether you were using the CLI or the UI. Could you try `ollama ps` in both cases?


@igorschlum commented on GitHub (Jul 29, 2025):

@pdevine thank you for your answer. I just downloaded the latest 0.10.0 preview and am now using qwen3:235b.
Same issue: if the question is short, I get an answer; if the question is long, I get no answer through the Ollama UI.

```
igor@Mac-Studio-192 ~ % ollama ps
NAME          ID              SIZE      PROCESSOR         CONTEXT    UNTIL
qwen3:235b    cf5635fabe3c    167 GB    8%/92% CPU/GPU    32768      2 minutes from now
```

If I click edit and reload the prompt, it works most of the time, but sometimes it doesn't.

