[GH-ISSUE #14740] qwen3:32b important performance regression (divided by 3!) after Ollama 0.15.5 to 0.15.6 (persists in 0.17.7) #9530

Closed
opened 2026-04-12 22:27:08 -05:00 by GiteaMirror · 9 comments
Owner

Originally created by @viba1 on GitHub (Mar 9, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14740

What is the issue?

Since updating Ollama from version 0.15.5 to 0.15.6, the performance of the qwen3:32b model has drastically dropped (from 35 tokens/second to 12 tokens/second on a single RTX 3090, for example).
This degradation has not been fixed in subsequent versions, including the current 0.17.7 (March 2026). This makes the model impractical for interactive tasks.

System:
Linux Debian 13
RTX 3090
NVIDIA linux driver 590.48.01
Ollama 0.17.7
Model: qwen3:32b (default quantization, e.g., Q4_K_M)

Steps to Reproduce
Install Ollama 0.15.5.
Download and run ollama run qwen3:32b → measure ~35 tokens/s.
Update to 0.15.6 or later (e.g., 0.17.7).
Relaunch the same model → speed drops to ~12 tokens/s.

Logs / Evidence
Manual token/s measurements using `ollama run --verbose`.
No hardware or config changes during the period.
Other models are not affected to the same extent (e.g., gemma3:27b, gpt-oss, or qwen3:14b).
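The token/s figures can be pulled out of saved `--verbose` transcripts mechanically. A minimal sketch, assuming the timing stats are captured to a file (they appear to be printed on stderr); the transcript below is an illustrative sample in the format `--verbose` prints, not a real run:

```shell
# Extract the "eval rate" line from a saved `ollama run --verbose` transcript.
# In a real run, capture stderr first, e.g.:
#   ollama run --verbose qwen3:32b "say hello" 2>run.log
cat > run.log <<'EOF'
eval count: 129 token(s)
eval duration: 3.749874543s
eval rate: 34.40 tokens/s
EOF
awk -F': *' '/^eval rate/ { print $2 }' run.log
```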

Expected Behavior
Return to Ollama 0.15.5 performance (~35 tokens/s) or explanation of changes (new scheduler, memory estimates, etc.) with options to disable.

Relevant log output


OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.17.7

GiteaMirror added the bug label 2026-04-12 22:27:08 -05:00
Author
Owner

@rick-github commented on GitHub (Mar 9, 2026):

What's the output of `ollama ps`?

Author
Owner

@viba1 commented on GitHub (Mar 9, 2026):

> What's the output of `ollama ps`?

20%/80% CPU/GPU on 0.17.7 (unchanged since 0.15.5)

Author
Owner

@rick-github commented on GitHub (Mar 9, 2026):

[Server logs](https://docs.ollama.com/troubleshooting) will aid in debugging.

Author
Owner

@viba1 commented on GitHub (Mar 9, 2026):

Performance evolution of several models since 0.11.0. qwen3:32b is at the very bottom of the graph.

![Performance evolution of several models since 0.11.0](https://github.com/user-attachments/assets/5913661d-9274-42b7-9ec3-f962bfb36a45)
Author
Owner

@viba1 commented on GitHub (Mar 9, 2026):

Do you perform non-regression testing on performance?

Author
Owner

@rick-github commented on GitHub (Mar 9, 2026):

| version | tps |
| -- | -- |
| 0.15.0 | 49.96 |
| 0.15.1 | 47.77 |
| 0.15.2 | 48.99 |
| 0.15.3 | 48.22 |
| 0.15.4 | 47.81 |
| 0.15.5 | 48.21 |
| 0.15.6 | 49.34 |
| 0.16.0 | 49.08 |
| 0.16.1 | 48.02 |
| 0.16.2 | 48.04 |

Looking at the graph, the ticks appear to be centered on the version labels, so I think this change in performance occurred at 0.15.5. In that case #14116 is the likely culprit. Do you control for context length?

Author
Owner

@viba1 commented on GitHub (Mar 9, 2026):

I can easily reproduce this bug as follows (nothing more):

- simply load ollama 0.15.3, 0.15.4, and 0.17.7 from https://github.com/ollama/ollama/releases
- ollama serve
- ollama run --verbose qwen3:32b "say hello"

0.15.4
total duration: 6.447894946s
load duration: 2.602031336s
prompt eval count: 13 token(s)
prompt eval duration: 59.608421ms
prompt eval rate: 218.09 tokens/s
eval count: 129 token(s)
eval duration: 3.749874543s
eval rate: 34.40 tokens/s

0.15.5
total duration: 14.517517531s
load duration: 2.561121516s
prompt eval count: 13 token(s)
prompt eval duration: 186.867465ms
prompt eval rate: 69.57 tokens/s
eval count: 133 token(s)
eval duration: 11.715445428s
eval rate: 11.35 tokens/s

0.17.7
total duration: 13.534157546s
load duration: 2.74505761s
prompt eval count: 13 token(s)
prompt eval duration: 187.072771ms
prompt eval rate: 69.49 tokens/s
eval count: 121 token(s)
eval duration: 10.461112273s
eval rate: 11.57 tokens/s
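As a sanity check on the figures above, the reported eval rates and the overall slowdown can be recomputed from the raw counts and durations (numbers copied from the three runs):

```shell
# Recompute eval rate = eval count / eval duration for the three runs above,
# plus the 0.15.4 -> 0.15.5 slowdown factor.
awk 'BEGIN {
  r4 = 129 / 3.749874543    # 0.15.4
  r5 = 133 / 11.715445428   # 0.15.5
  r7 = 121 / 10.461112273   # 0.17.7
  printf "0.15.4: %.2f tok/s\n", r4
  printf "0.15.5: %.2f tok/s\n", r5
  printf "0.17.7: %.2f tok/s\n", r7
  printf "slowdown: %.1fx\n", r4 / r5
}'
```

The recomputed rates match the reported 34.40, 11.35, and 11.57 tok/s, i.e. roughly a 3x slowdown in token generation.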

I can reproduce it every time.
What changed between 0.15.4 and 0.15.5 that would explain this performance degradation?

NB: `journalctl -u ollama --no-pager --follow --pager-end` doesn't show any new lines when running directly via `./ollama serve` from the directory

Author
Owner

@rick-github commented on GitHub (Mar 9, 2026):

> In which case https://github.com/ollama/ollama/issues/14116 is the likely culprit. Do you control for context length?

Author
Owner

@viba1 commented on GitHub (Mar 9, 2026):

Ok, probably the same problem as https://github.com/ollama/ollama/issues/14116
It was 100% GPU before 0.15.5.
Your regression tests are missing this case: people don't have unlimited amounts of VRAM on their GPUs: 12, 16, 24, or 32 GB for a single GPU.
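That hypothesis is easy to size. A back-of-envelope sketch of the KV-cache footprint, assuming the published Qwen3-32B config (64 transformer layers, 8 KV heads, head dim 128) and fp16 cache entries; if a newer version reserves a larger default context, this alone can push layers off a 24 GB card and explain the 20%/80% CPU/GPU split:

```shell
# KV-cache size per context length, assumed Qwen3-32B config: 64 layers,
# 8 KV heads, head dim 128, fp16 entries. K and V each store
# layers * kv_heads * head_dim values per token.
awk 'BEGIN {
  layers = 64; kv_heads = 8; head_dim = 128; bytes = 2     # fp16
  per_token = 2 * layers * kv_heads * head_dim * bytes     # K + V, bytes/token
  for (ctx = 8192; ctx <= 40960; ctx += 8192)
    printf "ctx %6d -> %4.1f GiB KV cache\n", ctx, per_token * ctx / (1024*1024*1024)
}'
```

Under these assumptions the cache alone grows from ~2 GiB at an 8k context to ~10 GiB at 40k, on top of the ~18 GB of Q4_K_M weights.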


Reference: github-starred/ollama#9530