[GH-ISSUE #12407] While using large num_ctx values like 32768 with Ollama's qwen3:8b and deepseek:8b, output becomes very slow and often times out in version 0.12.0; however, it runs fast in version 0.11.11 #54753

Open
opened 2026-04-29 07:11:59 -05:00 by GiteaMirror · 10 comments
Owner

Originally created by @zxiaomzxm on GitHub (Sep 25, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12407

What is the issue?

While using large num_ctx values like 32768 with Ollama's qwen3:8b and deepseek:8b models, output becomes very slow and often times out in version 0.12.0. However, the same setup runs fast in version 0.11.11.
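For reference, a minimal sketch of how this scenario can be reproduced against Ollama's REST API, where num_ctx is passed per-request through the `options` field of `/api/generate` (assuming the default `localhost:11434` endpoint; the prompt text is a placeholder, not from the original report):

```python
import json

def build_generate_request(model: str, prompt: str, num_ctx: int) -> dict:
    """Build the JSON body for POST http://localhost:11434/api/generate."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        # 32768 is the context length the reporter says triggers the slowdown
        "options": {"num_ctx": num_ctx},
    }

payload = build_generate_request("qwen3:8b", "Tell me a short story", 32768)
print(json.dumps(payload, indent=2))
```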

Relevant log output


OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.12.0

GiteaMirror added the bug label 2026-04-29 07:12:00 -05:00
Author
Owner

@pdevine commented on GitHub (Sep 25, 2025):

What's the output of `ollama ps` and what type of GPU are you using?

<!-- gh-comment-id:3332465552 -->
Author
Owner

@zxiaomzxm commented on GitHub (Sep 25, 2025):

> What's the output of `ollama ps` and what type of GPU are you using?

@pdevine

<img width="909" height="128" alt="Image" src="https://github.com/user-attachments/assets/d0e32477-1c88-4193-befa-bb1952ba9479" />

<img width="748" height="211" alt="Image" src="https://github.com/user-attachments/assets/d29f14f2-e27a-4f2a-a09e-c02cfd1d23ce" />

Same phenomenon using qwen3:8b with num_ctx = 32768.

When using Ollama 0.11.11, the `ollama ps` output is:

<img width="818" height="63" alt="Image" src="https://github.com/user-attachments/assets/cc0b09f7-a6fe-4b85-85f5-5523de0eeac9" />
<!-- gh-comment-id:3333564779 -->
Author
Owner

@deep1305 commented on GitHub (Sep 25, 2025):

I am also facing the same issue since the recent update of Ollama when running qwen3 models.

<!-- gh-comment-id:3335016150 -->
Author
Owner

@jessegross commented on GitHub (Sep 25, 2025):

Can you give specific numbers for the two versions? You can run `ollama run deepseek-r1:8b --verbose` and then prompt with something like `Tell me a short story`.

0.12.0 switched these models to the Ollama engine, which also brings in the new memory management. On my hardware, trying to match your scenario as closely as possible, this is what I get:

- Llama engine → Ollama engine, same number of layers on GPU (0 layers): 6.8 → 21.31 tokens per second
- Ollama engine, old memory layout (0 layers) → Ollama engine, new memory layout (4 layers): 21.31 → 15.59 tokens per second

So, at least for me, the new Ollama engine is much faster than the llama engine. The new memory management is actually doing a better job in that it puts more layers on the GPU, but for a small number of layers it might be better to keep everything on the CPU rather than splitting. Even still, the overall switch more than doubles the speed on my hardware.

You can test this by running the following on 0.12.0 and seeing how the result compares:

```
ollama run deepseek-r1:8b --verbose
>>> /set parameter num_gpu 0
Set parameter 'num_gpu' to '0'
>>> Tell me a short story
```
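To compare the two versions numerically: the final response from Ollama's API includes `eval_count` and `eval_duration` (in nanoseconds), which is what the `--verbose` eval-rate summary is derived from. A minimal sketch of that calculation; the sample numbers below are made up, not measurements from this thread:

```python
def tokens_per_second(eval_count: int, eval_duration_ns: int) -> float:
    """Eval rate: tokens generated divided by seconds spent generating."""
    return eval_count / (eval_duration_ns / 1e9)

# Hypothetical run: 300 tokens generated over 20 seconds of eval time.
print(round(tokens_per_second(300, 20_000_000_000), 2))  # 15.0
```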
<!-- gh-comment-id:3335522839 -->
Author
Owner

@wajihullahbaig commented on GitHub (Sep 26, 2025):

There is definitely a problem with 0.12.0. Not only is it slow, it also hangs on multiple/batch requests. I was using gemma3:4B, and it would barely give me a response.
I switched back to 0.11, and things are working now.

<!-- gh-comment-id:3336854660 -->
Author
Owner

@jessegross commented on GitHub (Sep 26, 2025):

@wajihullahbaig It's not clear whether your issue is the same. Can you please file a new bug with specific versions you tried (e.g. what version worked - 0.11.0, 0.11.11 or something else?), reproduction steps, logs and hardware information?

<!-- gh-comment-id:3340722021 -->
Author
Owner

@Maltz42 commented on GitHub (Sep 27, 2025):

I'm also having errors with qwen3:235b-a22b-instruct-2507-q8_0. I don't have any problems in versions <=0.12.1, but do in 0.12.2 and 0.12.3, with a larger, full context window.

"Error: model runner has unexpectedly stopped, this may be due to resource limitations or an internal error, check ollama server logs for details"

Starting with 0.12.2, nvidia-smi shows GPU usage much closer to full after the model is loaded, and CPU usage is *much* higher than in v0.12.1 after entering the prompt and before the reply. It makes me wonder if the CPU is doing more of the prompt processing? Also, at the time of error, the log mentions CUDA running out of memory. Seems very likely to be a memory management issue with the new Ollama engine + qwen3.

<!-- gh-comment-id:3341167475 -->
Author
Owner

@wajihullahbaig commented on GitHub (Sep 27, 2025):

> @wajihullahbaig It's not clear whether your issue is the same. Can you please file a new bug with specific versions you tried (e.g. what version worked - 0.11.0, 0.11.11 or something else?), reproduction steps, logs and hardware information?

Thanks for the reply. I did not save any logs; I cleaned up everything from the whole system and installed the other version. Sorry, I should have kept those logs.

<!-- gh-comment-id:3341740594 -->
Author
Owner

@wajihullahbaig commented on GitHub (Sep 27, 2025):

As for the commands mentioned by @jessegross: it seems like a large hallucination while everything runs on the GPU (100%). I have one GPU (Quadro RTX 5000, 16GB).
This is a different system though: Windows 11, with Ollama 0.12.3.

<img width="1797" height="818" alt="Image" src="https://github.com/user-attachments/assets/6078d3dd-5a8a-4434-a988-6c2a02b559fa" />
<!-- gh-comment-id:3341773718 -->
Author
Owner

@wajihullahbaig commented on GitHub (Sep 27, 2025):

OK with CPU-only mode:

<img width="1830" height="660" alt="Image" src="https://github.com/user-attachments/assets/6b80599e-fb01-4d44-bfcb-74605bc9c457" />
<!-- gh-comment-id:3341786787 -->

Reference: github-starred/ollama#54753