[GH-ISSUE #14579] Qwen3.5 Much slower speeds compared to LlamaCPP #35212

Open
opened 2026-04-22 19:35:45 -05:00 by GiteaMirror · 12 comments

Originally created by @iChristGit on GitHub (Mar 3, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14579

What is the issue?

I am running the same Q4 quant of Qwen3.5-35B-A3B in both runtimes. On Ollama I get around 15 to 20 tokens per second, which is very slow; I have tried many different startup arguments as well as the default Ollama startup.

On the same PC, running llama.cpp directly, I get 100 tokens per second even with default settings and no arguments, at a context of 160000.

Image: https://github.com/user-attachments/assets/23af5b9e-d483-4ecb-874f-efc25d882a35

Win11
3090 Ti
64 GB RAM

I am not sure what else to add here; I'll upload whatever is needed.
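
A minimal way to make the two measurements comparable (a hedged sketch; the model tag, GGUF path, and prompt below are placeholders, not taken from the original report):

```shell
# Ollama: --verbose prints prompt-eval and eval rates (tokens/s) after the response
ollama run qwen3.5:35b-a3b-q4_K_M "tell me a story" --verbose

# llama.cpp: llama-bench reports prompt-processing and generation speed for the same GGUF
./llama-bench -m ./Qwen3.5-35B-A3B-Q4_K_M.gguf -p 512 -n 128 -ngl 99
```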

Relevant log output


OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.7.15

GiteaMirror added the bug label 2026-04-22 19:35:45 -05:00

@Menooker commented on GitHub (Mar 5, 2026):

Possibly related to https://github.com/ollama/ollama/issues/11772
See also https://www.reddit.com/r/LocalLLaMA/comments/1rei65v/qwen3535ba3b_quantization_quality_speed/?tl=en

Someone in the linked thread reported:

> I actually love you so much. I'm running this on a 5070ti 12700k 32GB 5400MT system and I had no clue how much difference using the MOE layer option improves performance. Went from 10tps (using gpu offload settings) to 57tps (using your 24 cpu layer config) and then to around 70tps (using 14 cpu layers instead).

Looks like CPU MoE offloading is important for this model.
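
For anyone who wants to try the MoE offload the quoted post describes, here is a minimal llama.cpp sketch (hedged: the GGUF path is a placeholder and the tensor-name pattern may differ between conversions; the "24 cpu layer config" in the quote is presumably built on this kind of override):

```shell
# Keep attention and shared layers on the GPU; push the MoE expert tensors to CPU RAM.
# The regex matches expert FFN tensors (ffn_*_exps); adjust it if your GGUF names differ.
./llama-server -m ./Qwen3.5-35B-A3B-Q4_K_M.gguf \
  -c 16384 -ngl 99 \
  -ot ".ffn_.*_exps.=CPU"
```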


@OniricApps commented on GitHub (Mar 5, 2026):

I also noticed very slow performance with qwen3.5:27b.

I can confirm only the GPU is used (an RTX 3090 with 24 GB of VRAM):

$ ollama ps
NAME           ID            SIZE   PROCESSOR  CONTEXT  UNTIL
qwen3.5:27b    7653528ba5cb  23 GB  100% GPU   16000    4 minutes from now

$ ollama --version
ollama version is 0.17.5

I also noticed that the CPU is at 100% according to "top".

qwen3.5 takes 3 to 6 minutes per inference for prompts that I run in under 20 seconds with other similarly sized models like gemma3:27b. Concretely, a test of 4 inferences in a row takes 46 seconds with gemma3 and 3118 seconds with qwen3.5, which is about 67x slower. Not normal at all.


@MarkMuravev commented on GitHub (Mar 6, 2026):

+1


@TsengSR commented on GitHub (Mar 15, 2026):

@iChristGit @OniricApps

> I also noticed very slow performance with qwen3.5:27b.
>
> I can confirm only the GPU is used (an RTX 3090 with 24 GB of VRAM):
>
> $ ollama ps
> NAME ID SIZE PROCESSOR CONTEXT UNTIL
> qwen3.5:27b 7653528ba5cb 23 GB 100% GPU 16000 4 minutes from now
>
> $ ollama --version
> ollama version is 0.17.5
>
> I also noticed that the CPU is at 100% according to "top".
>
> qwen3.5 takes 3 to 6 minutes per inference for prompts that I run in under 20 seconds with other similarly sized models like gemma3:27b. Concretely, a test of 4 inferences in a row takes 46 seconds with gemma3 and 3118 seconds with qwen3.5, which is about 67x slower. Not normal at all.

I had the same issue. It's the context size. By default qwen3.5-27b runs with a 32k context, which is too big for 24 GB of VRAM and overflows. I only got 17 tokens/s.

Setting the context to 16k or 8k increased the inference speed from 17 to 25 tokens/s (16k context) and 40 tokens/s (8k context). That is still half the speed of Qwen3 30b, which does 90 tokens/s.

"ollama ps" shows 24 GB (32k context), 23 GB (16k context) and 22 GB of VRAM (8k context). So if anything else is taking VRAM (like Windows itself), 32k won't fit anymore.

Also, Qwen 3.5 27b's reasoning is messed up. A simple prompt like "tell me a story" produces 5 pages of reasoning for two short paragraphs of story. Just turn off reasoning with "/set nothink".
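
If the context-size diagnosis above applies to your setup, here is a hedged sketch of the usual ways to lower Ollama's context (the 8192 value and model tag are just examples; the option and variable names are standard Ollama parameters):

```shell
# Option 1: bake a smaller context into a derived model via a Modelfile
cat > Modelfile <<'EOF'
FROM qwen3.5:27b
PARAMETER num_ctx 8192
EOF
ollama create qwen3.5-27b-8k -f Modelfile

# Option 2: set a server-wide default context length (recent Ollama versions)
OLLAMA_CONTEXT_LENGTH=8192 ollama serve

# Option 3: per request, via the API options
curl http://localhost:11434/api/generate -d '{
  "model": "qwen3.5:27b",
  "prompt": "hi",
  "options": { "num_ctx": 8192 }
}'
```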


@iChristGit commented on GitHub (Mar 15, 2026):

> @iChristGit @OniricApps
>
> > I also noticed very slow performance with qwen3.5:27b.
> > I can confirm only the GPU is used (an RTX 3090 with 24 GB of VRAM):
> > $ ollama ps NAME ID SIZE PROCESSOR CONTEXT UNTIL qwen3.5:27b 7653528ba5cb 23 GB 100% GPU 16000 4 minutes from now
> > $ ollama --version ollama version is 0.17.5
> > I also noticed that the CPU is at 100% according to "top".
> > qwen3.5 takes 3 to 6 minutes per inference for prompts that I run in under 20 seconds with other similarly sized models like gemma3:27b. Concretely, a test of 4 inferences in a row takes 46 seconds with gemma3 and 3118 seconds with qwen3.5, which is about 67x slower. Not normal at all.
>
> I had the same issue. It's the context size. By default qwen3.5-27b runs with a 32k context, which is too big for 24 GB of VRAM and overflows. I only got 17 tokens/s.
>
> Setting the context to 16k or 8k increased the inference speed from 17 to 25 tokens/s (16k context) and 40 tokens/s (8k context). That is still half the speed of Qwen3 30b, which does 90 tokens/s.
>
> "ollama ps" shows 24 GB (32k context), 23 GB (16k context) and 22 GB of VRAM (8k context). So if anything else is taking VRAM (like Windows itself), 32k won't fit anymore.
>
> Also, Qwen 3.5 27b's reasoning is messed up. A simple prompt like "tell me a story" produces 5 pages of reasoning for two short paragraphs of story. Just turn off reasoning with "/set nothink".

I am testing llama.cpp at 60-120k tokens of context and it still manages 35-40 tok/s, while on Ollama even at 16k it is very, very slow, to the point of being unusable. I am not sure this is an issue with context.


@TsengSR commented on GitHub (Mar 15, 2026):

> I am testing llama.cpp at 60-120k tokens of context and it still manages 35-40 tok/s, while on Ollama even at 16k it is very, very slow, to the point of being unusable. I am not sure this is an issue with context.

In general Qwen3.5 27b is a good bit slower; e.g. Qwen3 30b at Q4 ran at 90 tokens/s for me, while 3.5 27b only manages 40 with the reduced context size.
Check your "ollama ps" output while the model is running to see whether it is fully on the GPU or whether it offloads to the CPU.

If it offloads, you may need to reduce the context further. Also keep in mind to turn off reasoning.
In my tests the reasoning is about 5 to 10 times the amount of text that actually gets generated as the answer; it kind of runs amok, overthinking before it starts the actual response. On my first test (still at 17 tokens/s) it produced 10 to 15 times more reasoning than the actual 2-3 paragraph output, which causes abnormal response times even at 40 tokens/s.
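
Beyond the interactive "/set nothink" mentioned above, thinking can also be turned off per request; a hedged sketch (the "think" request field exists in recent Ollama API versions, model tag taken from this thread):

```shell
# Disable the thinking/reasoning phase for a single chat request
curl http://localhost:11434/api/chat -d '{
  "model": "qwen3.5:27b",
  "think": false,
  "messages": [
    { "role": "user", "content": "tell me a story" }
  ]
}'
```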


@iChristGit commented on GitHub (Mar 15, 2026):

> > I am testing llama.cpp at 60-120k tokens of context and it still manages 35-40 tok/s, while on Ollama even at 16k it is very, very slow, to the point of being unusable. I am not sure this is an issue with context.
>
> In general Qwen3.5 27b is a good bit slower; e.g. Qwen3 30b at Q4 ran at 90 tokens/s for me, while 3.5 27b only manages 40 with the reduced context size. Check your "ollama ps" output while the model is running to see whether it is fully on the GPU or whether it offloads to the CPU.
>
> If it offloads, you may need to reduce the context further. Also keep in mind to turn off reasoning. In my tests the reasoning is about 5 to 10 times the amount of text that actually gets generated as the answer; it kind of runs amok, overthinking before it starts the actual response. On my first test (still at 17 tokens/s) it produced 10 to 15 times more reasoning than the actual 2-3 paragraph output, which causes abnormal response times even at 40 tokens/s.

It is not about which model: I am testing the same model with both llama.cpp and Ollama. On the 35b it is less apparent because it still does 80+ tok/s, but when it is 12 tok/s vs 35 tok/s it is noticeable.


@TsengSR commented on GitHub (Mar 15, 2026):

> It is not about which model: I am testing the same model with both llama.cpp and Ollama. On the 35b it is less apparent because it still does 80+ tok/s, but when it is 12 tok/s vs 35 tok/s it is noticeable.

Yeah, same issue here: 17 vs 40 tokens/s once the context is lowered.

That is to be expected; qwen3.5 35b is something completely different from the 27b. qwen3.5 35b is a mixture-of-experts model where only about 3B parameters are active for a given token, so your GPU/CPU only has to read roughly 3B parameters per token, versus 27B parameters on the dense model.

The 35b is faster but less accurate, because only around 3B parameters are active at a time. But it is good for AI PCs with unified memory, since ~3B active parameters give a decent tokens/s and the router can pick the best experts for a given prompt.

The catch is that you can't load the 35b model into a 3090 or 4090; it is too big even at Q4, so it will inevitably spill into system memory, which is 15-30 times slower than GPU memory. The 27b weights are 16 GB but need 24 GB of VRAM to load at a 32k context.
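
A rough back-of-envelope for why spilling the active weights into system RAM dominates generation speed (a hedged sketch; the bandwidth and bytes-per-parameter figures are assumptions, not measurements from this thread):

```shell
# ~3B active params * ~0.56 bytes/param (Q4_K_M) ~= 1.7 GB read per generated token.
# Dividing memory bandwidth by that gives an upper bound on tokens/s.
echo "scale=1; 60  / (3 * 0.56)" | bc   # ~35 tok/s if the experts sit in ~60 GB/s system RAM
echo "scale=1; 900 / (3 * 0.56)" | bc   # ~535 tok/s ceiling with ~900 GB/s of GPU VRAM
```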


@iChristGit commented on GitHub (Mar 15, 2026):

> > It is not about which model: I am testing the same model with both llama.cpp and Ollama. On the 35b it is less apparent because it still does 80+ tok/s, but when it is 12 tok/s vs 35 tok/s it is noticeable.
>
> Yeah, same issue here: 17 vs 40 tokens/s once the context is lowered.
>
> That is to be expected; qwen3.5 35b is something completely different from the 27b. qwen3.5 35b is a mixture-of-experts model where only about 3B parameters are active for a given token, so your GPU/CPU only has to read roughly 3B parameters per token, versus 27B parameters on the dense model.
>
> The 35b is faster but less accurate, because only around 3B parameters are active at a time. But it is good for AI PCs with unified memory, since ~3B active parameters give a decent tokens/s and the router can pick the best experts for a given prompt.

My point is that if you take the exact same model (Qwen3.5-27B-Q4) and the SAME amount of context (e.g. both set to 16k), you still get double the performance running llama.cpp directly as opposed to Ollama.

You can also spot the difference with the 35B, but since the tok/s is so high people tend not to notice it; you will still benefit from running llama.cpp.


@kokroo commented on GitHub (Mar 26, 2026):

+1


@iChristGit commented on GitHub (Mar 26, 2026):

This seems to have gotten worse now: llama.cpp has been optimized further and the Qwen3.5-35B quant now runs at 140 tok/s, while Ollama has regressed, takes ages to load the model, and produces VERY slow tok/s.


@dgit90 commented on GitHub (Mar 27, 2026):

Hardware: RTX 5080 16GB VRAM, AMD Ryzen 9 9950X 16-core, 96GB DDR5-6000 RAM, Windows 11 + WSL2 (Ubuntu 24.04), Ollama installed natively inside WSL2.

Model: qwen3.5:35b-a3b-q4_K_M (23GB, Q4_K_M quantization)

Observed behavior: Simple single-turn prompt ("hi") takes approximately 43 seconds end-to-end. GPU utilization appears low during inference. Model does not fit fully in VRAM (16GB < 23GB), so Ollama is offloading layers.

Expected behavior: With llama.cpp's --n-cpu-moe flag, MoE expert layers can be offloaded to CPU RAM while attention layers remain on GPU, yielding ~70 tok/s on this hardware. Ollama does not currently expose this flag, resulting in naive layer offloading and dramatically worse performance on MoE models that partially exceed VRAM.

Request: Expose --n-cpu-moe (or equivalent MoE-aware offloading) as a configurable parameter in Ollama for MoE architectures. This would significantly improve usability of Qwen3.5 and similar MoE models on consumer hardware with 16GB VRAM.
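
The llama.cpp workaround being referred to presumably looks roughly like the sketch below (hedged: the GGUF path and the number of expert layers kept on the CPU are placeholders; the layer count would be tuned until the remainder fits in 16 GB of VRAM):

```shell
# Attention and non-expert layers stay on the RTX 5080; the MoE expert layers of the
# first 24 blocks are evaluated from system RAM (tune 24 up or down to fit your VRAM).
./llama-server -m ./qwen3.5-35b-a3b-q4_K_M.gguf \
  -c 16384 -ngl 99 \
  --n-cpu-moe 24
```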

Reference: github-starred/ollama#35212