[GH-ISSUE #7717] Performance regression for 0.4.* caused by number of input tokens #51438

Closed
opened 2026-04-28 20:07:50 -05:00 by GiteaMirror · 8 comments

Originally created by @rick-github on GitHub (Nov 18, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7717

What is the issue?

As the number of input tokens goes up, the rate of token generation goes down. Once the rate has dropped, it becomes a ceiling for subsequent generations, even for short prompts (compare pass 2 with pass 1 below).

The regression affects both CPU and GPU generation, but is more pronounced on GPU. Tested with llama3.2:3b-instruct-q4_K_M, llama3.1:8b-instruct-q4_0, qwen2.5:0.5b-instruct-q4_K_M, aya-expanse:8b-q4_K_M.

```console
$ model=llama3.2:3b-instruct-q4_K_M ; curl -s localhost:11434/api/generate -d '{"model":"'$model'","keep_alive":0}' >/dev/null ; for p in 1 2 ; do echo "# pass $p" ; for n in 32 64 128 256 512 1024 2048 4096 8192 16384 ; do curl -s localhost:11434/api/generate -d '{"model":"'$model'","prompt":'"$((echo write a story with these words ; cat /usr/share/dict/words) | dd bs=1 count=$n status=none | jq -sR .)"',"options":{"seed":42,"temperature":0,"num_gpu":-1,"num_ctx":16384,"num_predict":256},"stream":false}' | jq -rc '.|"\(.prompt_eval_count) \(.eval_count/(.eval_duration/1000000000))"' ; done ; done
# pass 1
33 128.12812812812814
52 127.68079800498754
91 124.21154779233382
168 118.84865366759517
307 108.47457627118645
543 92.51897361763643
1036 73.73271889400922
2012 70.64017660044149
3793 67.31527741256903
7540 59.60419091967404
# pass 2
33 79.08557306147668
52 78.93925377736664
91 78.2874617737003
168 78.09640024405125
307 74.4186046511628
543 74.63556851311952
1036 72.62411347517731
2012 68.5041477120685
3793 66.84073107049608
7540 59.327925840092696
```
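
Spelled out, the one-liner measures generation speed per prompt size: `eval_duration` is reported in nanoseconds, so the jq expression computes tokens per second as `eval_count / (eval_duration / 1e9)`. A more readable sketch of the same measurement (assumes `jq` and `/usr/share/dict/words` are available and the server is at localhost:11434, as in the command above):

```bash
#!/usr/bin/env bash
# Readable expansion of the benchmark one-liner above (a sketch, not a new test).
model=llama3.2:3b-instruct-q4_K_M

# Unload the model first so pass 1 starts from a fresh load.
curl -s localhost:11434/api/generate -d '{"model":"'"$model"'","keep_alive":0}' >/dev/null

for pass in 1 2; do
  echo "# pass $pass"
  for n in 32 64 128 256 512 1024 2048 4096 8192 16384; do
    # Build a JSON-escaped prompt of roughly $n bytes from the word list.
    prompt=$( (echo write a story with these words; cat /usr/share/dict/words) \
              | dd bs=1 count="$n" status=none | jq -sR . )
    # Print "<prompt tokens> <generation tokens/s>"; eval_duration is in nanoseconds.
    curl -s localhost:11434/api/generate -d '{
        "model": "'"$model"'",
        "prompt": '"$prompt"',
        "options": {"seed":42,"temperature":0,"num_gpu":-1,"num_ctx":16384,"num_predict":256},
        "stream": false
      }' |
      jq -rc '"\(.prompt_eval_count) \(.eval_count/(.eval_duration/1000000000))"'
  done
done
```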

0.4.2, llama3.2:3b-instruct-q4_K_M, GPU:
![image](https://github.com/user-attachments/assets/d9e00d5e-327d-4ac9-aa45-f9ae4ebd016e)

0.3.14, llama3.2:3b-instruct-q4_K_M, GPU:
![image](https://github.com/user-attachments/assets/ed869309-bd4b-413a-86e7-82ea0450fef6)

OS

Linux, Docker

GPU

Nvidia

CPU

Intel

Ollama version

0.3.14, 0.4.0, 0.4.1, 0.4.2

GiteaMirror added the top, performance, bug labels 2026-04-28 20:07:51 -05:00

@jmorganca commented on GitHub (Nov 18, 2024):

@rick-github thanks for filing this and for graphing the results. Which kind of NVIDIA GPU do you have?


@rick-github commented on GitHub (Nov 18, 2024):

Tested on RTX4070 (12G), RTX3080 (16G). All models GPU resident when tested.


@varyagnord commented on GitHub (Nov 19, 2024):

On Windows we have the same problem.


@murzein commented on GitHub (Nov 19, 2024):

+AMD 7900XTX
+RTX3090


@rick-github commented on GitHub (Nov 19, 2024):

Same behaviour observed on A100 (40G) and A100x4 (using `OLLAMA_SCHED_SPREAD`).
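
For context, `OLLAMA_SCHED_SPREAD` is set on the Ollama server process to spread a single model across all visible GPUs. In a Docker setup like the one noted under OS, enabling it would look roughly like this (a sketch of typical usage, not the exact command used for the A100x4 run):

```bash
# Hypothetical example: pass the scheduler flag to the server container.
docker run -d --gpus=all \
  -e OLLAMA_SCHED_SPREAD=1 \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama ollama/ollama
```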


@jessegross commented on GitHub (Nov 19, 2024):

Thanks a lot @rick-github for the great data! Is it possible for you to try this on `main`?

807ace5 solved the performance issues that I was seeing. I didn't get results quite as clean as yours when I ran the script here, but it looks good on my end as well.


@rick-github commented on GitHub (Nov 20, 2024):

Awesome, performance now matches 0.3.14.

![image](https://github.com/user-attachments/assets/b1b6daea-daf0-44e6-8e3e-555c1bb0eb57)


@jessegross commented on GitHub (Nov 20, 2024):

Thanks so much for testing!

Reference: github-starred/ollama#51438