Excessive time to first token with new engine and partial offload (prompt eval about 9x slower) #8002

Closed
opened 2025-11-12 14:26:26 -06:00 by GiteaMirror · 15 comments

Originally created by @UncleRedz on GitHub (Aug 22, 2025).

What is the issue?

I have been running several tests comparing the old engine and the new one, and I have concerns about the time to first token when the model plus context does not fit within available VRAM.

Below are test results from a proprietary RAG system which uses a context of 24576 tokens for data processing, running on Ubuntu 24.04.3 LTS (kernel 6.14), AMD Ryzen 7 7700, 32 GiB DDR5, and a GeForce RTX 4060 8GB with Ollama 0.11.6.

Qwen3 30B-A3B model (MoE):

| Config | Size (GB) | CPU/GPU | VRAM (MiB) | Time To First Token | Tokens / Sec | Time (MM:SS) |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Default 0.11.6 | 25 | 67%/33% | 5918 / 8188 | 0:48 | 11.24 | 2:48 |
| Flash Attention | 25 | 67%/33% | 4494 / 8188 | 0:32 | 8.31 | 3:16 |
| New Engine, New Estimates | 24 | 70%/30% | 7304 / 8188 | 2:37 | 13.1 | 4:20 |
| New Engine, New Estimates, Flash Attention | 21 | 65%/35% | 7282 / 8188 | 3:28 | 10.54 | 5:20 |
| New Engine, New Estimates, Flash Attention, KV Cache Quant Q8 | 20 | 63%/37% | 7314 / 8188 | 4:43 | 11 | 6:47 |

Qwen3 14B:

| Config | Size (GB) | CPU/GPU | VRAM (MiB) | Time To First Token | Tokens / Sec | Time (MM:SS) |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Default 0.11.6 | 17 | 52%/48% | 6400 / 8188 | 0:35 | 5.45 | 2:45 |
| New Engine, New Estimates | 17 | 56%/44% | 7366 / 8188 | 4:24 | 6.49 | 6:55 |
| New Engine, New Estimates, Flash Attention | 13 | 45%/55% | 7252 / 8188 | 5:14 | 5.79 | 7:39 |
| New Engine, New Estimates, Flash Attention, KV Cache Quant Q8 | 11 | 35%/65% | 7402 / 8188 | 3:40 | 7.69 | 5:26 |

Qwen3 8B:

| Config | Size (GB) | CPU/GPU | VRAM (MiB) | Time To First Token | Tokens / Sec | Time (MM:SS) |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Default 0.11.6 | 11 | 31%/69% | 6798 / 8188 | 0:15 | 11.9 | 1:40 |
| New Engine, New Estimates | 12 | 37%/63% | 7540 / 8188 | 1:16 | 15.17 | 2:25 |
| New Engine, New Estimates, Flash Attention | 9 | 15%/85% | 7412 / 8188 | 0:33 | 21 | 1:16 |
| New Engine, New Estimates, Flash Attention, KV Cache Quant Q8 | 7.2 | 0%/100% | 6800 / 8188 | 0:08 | 33.64 | 0:33 |

Qwen3 4B:

| Config | Size (GB) | CPU/GPU | VRAM (MiB) | Time To First Token | Tokens / Sec | Time (MM:SS) |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Default 0.11.6 | 9.3 | 12%/88% | 7004 / 8188 | 0:10 | 25.42 | 0:59 |
| New Engine, New Estimates | 8.3 | 8%/92% | 7696 / 8188 | 0:08 | 36.87 | 0:49 |
| New Engine, New Estimates, Flash Attention | 6.6 | 0%/100% | 6186 / 8188 | 0:04 | 49.3 | 0:36 |
| New Engine, New Estimates, Flash Attention, KV Cache Quant Q8 | 4.9 | 0%/100% | 4640 / 8188 | 0:04 | 46.92 | 0:31 |

VRAM is as read from "nvidia-smi"; Size and CPU/GPU are as read from "ollama ps". Time To First Token is measured from the message(s) being sent to the first token received. Tokens / Sec is the total duration to completion, excluding time to first token, divided by the number of streamed chunks received (I'm not sure whether chunks equal tokens, but it's what I used). Time is the total time from message(s) sent to completion (answers can be of different lengths, so it's not that useful).
"Default 0.11.6" is without any modifications compared to a clean install.

(Models are the first release Qwen3, pulled from the Ollama model repo.)
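
For reference, here is a minimal sketch (not the original test harness) of one way such timings can be collected against Ollama's streaming /api/chat endpoint; the model name and prompt are placeholders, and streamed chunks are counted as a stand-in for tokens exactly as described above:

```python
# Sketch of the timing method described above: measure time to first token
# and chunks/sec over Ollama's streaming chat endpoint.
import json
import time

import requests

def measure(model: str, prompt: str, num_ctx: int = 24576) -> None:
    start = time.monotonic()
    ttft = None
    chunks = 0
    with requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "stream": True,
            "options": {"num_ctx": num_ctx},
        },
        stream=True,
        timeout=None,
    ) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if not line:
                continue
            chunk = json.loads(line)
            if ttft is None and chunk.get("message", {}).get("content"):
                ttft = time.monotonic() - start  # time to first token
            if not chunk.get("done"):
                chunks += 1  # streamed chunks as a proxy for tokens
    total = time.monotonic() - start
    if ttft is None:
        raise RuntimeError("no tokens streamed")
    rate = chunks / (total - ttft)
    print(f"TTFT {ttft:.1f}s | {rate:.2f} chunks/s | total {total:.1f}s")

measure("qwen3:14b", "...")  # hypothetical invocation with a placeholder prompt
```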

Findings

In general, the new engine provides higher tokens per second than the old engine. However, when the model is too big for the GPU VRAM, the old engine is **considerably** faster on time to first token. The new engine adds a penalty of several minutes on this setup.

When the model or a majority of the model (say 85% or more) fits within VRAM, the new engine provides an acceptable or faster time to first token compared to the old engine.

The size, the CPU/GPU split, and the VRAM usage don't make much sense to me. For the oversized models (30B/14B), the new engine actually uses more VRAM while being slower on time to first token, and I don't understand why the reported size varies that much.

Looking through the log files, if anything it seems that more of the model layers go to GPU/VRAM in the new engine compared to the old, explaining the higher VRAM usage. However, what happens with the context / KV cache, etc., is not clear from the logs.

While reducing the context size will speed up the time to first token and improve memory usage, my concern here is the difference between the old engine and the new: the old is capable of keeping the time to first token under one minute, most of the time around 30 seconds, while the new engine exceeds 4 minutes in several tests and 5 minutes in one.

This is simply far too long for most use cases, and I hope it's something that can be looked into and fixed.

I have not kept the logs for all tests, but if logs are needed, please specify which of the above tests are of interest and I'll rerun them and include the logs.

Relevant log output


OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.11.6

GiteaMirror added the bug label 2025-11-12 14:26:26 -06:00

@jessegross commented on GitHub (Aug 22, 2025):

Thank you for the extensive benchmarks!

Would it be possible to post the logs from both the default 0.11.6 and "new engine, new estimates" scenarios for one of the models? It looks like qwen3:14b might be a good one since it shows the largest difference. Ideally, run them with OLLAMA_DEBUG=1 set to get more information.

The sizes and splits reported through ollama ps on the old engine aren't that accurate, whereas with the new estimates they are much better. This can result in some confusion when comparing the two.


@UncleRedz commented on GitHub (Aug 22, 2025):

Thanks. I've included four files here: the Ollama service override configs for the two tested configurations ("default 0.11.6" and "new engine, new estimates"), which include the environment variables set.

ollama_qwen3_14b_override_default.conf.txt

ollama_qwen3_14b_override_newengine.conf.txt

Then the following two are the relevant time ranges out of the journal ("journalctl -u ollama --no-pager --pager-end").

ollama_qwen3_14b_default.txt

ollama_qwen3_14b_newengine.txt

Let me know if you need any other logs or information.


@jessegross commented on GitHub (Aug 23, 2025):

Thanks for the logs. From what I can see, it looks like it might be a property of the new engine rather than the new estimates. Loading time is slightly faster with the new estimates, which is the main place where I was expecting we might have some impact. However, it looks like the new engine with partial offload is much slower at prompt processing.

This is what I see on my machine, forcing it to offload 18 layers, the same number as new estimates used for you:

```
OLLAMA_NEW_ENGINE=0
total duration:       39.044640691s
load duration:        8.536269876s
prompt eval count:    4096 token(s)
prompt eval duration: 3.793332346s
prompt eval rate:     1079.79 tokens/s
eval count:           216 token(s)
eval duration:        26.713626247s
eval rate:            8.09 tokens/s
```

```
OLLAMA_NEW_ENGINE=1
total duration:       57.069263437s
load duration:        9.134583663s
prompt eval count:    4096 token(s)
prompt eval duration: 36.913332789s
prompt eval rate:     110.96 tokens/s
eval count:           145 token(s)
eval duration:        11.019658499s
eval rate:            13.16 tokens/s
```

Since you are inputting 9643 tokens, the difference in processing speed results in an extra delay of about 78 seconds. This is roughly what I see in your logs (a 99-second difference).
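
(A quick back-of-the-envelope check of that estimate, using the measured prompt eval rates above:)

```python
# Time to process the 9643-token prompt at each engine's measured rate.
tokens = 9643
old_engine = tokens / 1079.79  # ~8.9 s
new_engine = tokens / 110.96   # ~86.9 s
print(f"extra delay: {new_engine - old_engine:.0f} s")  # ~78 s
```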

Do you also see similar slow speeds with OLLAMA_NEW_ENGINE=1 and OLLAMA_NEW_ESTIMATES=0?
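
(For anyone reproducing this comparison: one way to pin the number of offloaded layers, so both engines run the same split, is Ollama's num_gpu option. A minimal sketch; the model name and prompt are placeholders:)

```python
# Sketch: pin the GPU layer count so old- and new-engine runs use the same
# CPU/GPU split. num_gpu is Ollama's option for the number of layers to
# offload to the GPU.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen3:14b",
        "prompt": "...",             # placeholder prompt
        "stream": False,
        "options": {"num_gpu": 18},  # force 18 layers onto the GPU
    },
    timeout=None,
)
print(resp.json()["response"])
```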


@UncleRedz commented on GitHub (Aug 23, 2025):

Here is an updated table, for Qwen3 14B, with the added scenario of new engine, but without new estimates.

| Config | Size (GB) | CPU/GPU | VRAM (MiB) | Time To First Token | Tokens / Sec | Time (MM:SS) |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Default 0.11.6 | 17 | 52%/48% | 6400 / 8188 | 0:35 | 5.45 | 2:45 |
| New Engine | 17 | 52%/48% | 6028 / 8188 | 5:19 | 5.69 | 7:36 |
| New Engine, New Estimates | 17 | 56%/44% | 7366 / 8188 | 4:24 | 6.49 | 6:55 |
| New Engine, New Estimates, Flash Attention | 13 | 45%/55% | 7252 / 8188 | 5:14 | 5.79 | 7:39 |
| New Engine, New Estimates, Flash Attention, KV Cache Quant Q8 | 11 | 35%/65% | 7402 / 8188 | 3:40 | 7.69 | 5:26 |

What I can see in the logs is that 13 layers are offloaded instead of 18, and the time to first token is quite long. If anything, the new estimates make things slightly faster, but the new engine is terribly slow at processing the prompt. You are probably right that it's related to the new engine and not the new estimates.

ollama_qwen3_14b_override_newengine_only.conf.txt

ollama_qwen3_14b_newengine_only.txt


@jessegross commented on GitHub (Sep 16, 2025):

This should be fixed by https://github.com/ollama/ollama/pull/12293 which will be in the next release (presumably 0.11.12).


@moontato commented on GitHub (Sep 25, 2025):

Hello,
I upgraded to 0.12.2, but even after the update, the time to first token remains as slow as it was in 0.11.10 (the version I was on before).
I tested this using the gpt-oss:20b model; I have an 8 GB VRAM GPU, so I'm splitting the model layers between the CPU (9 layers) and GPU (15 layers).

Could you confirm if there’s anything else I should adjust to observe improvements? Thank you.


@jessegross commented on GitHub (Sep 25, 2025):

@moontato There's nothing that you need to change, but were you originally setting OLLAMA_NEW_ENGINE on previous versions? That's what this bug is about.

Are you seeing better performance in some scenario?


@moontato commented on GitHub (Sep 25, 2025):

Oh I see, I apologize for the misunderstanding. I don't think I had OLLAMA_NEW_ENGINE set on this computer.

I'm not noticing any particular speed improvements after updating, but I could be wrong.
Would it be recommended to set the OLLAMA_NEW_ENGINE variable now that it has been fixed?


@jessegross commented on GitHub (Sep 25, 2025):

Both gpt-oss (as you mentioned) and qwen3 (from the original report) are on the new engine by default in 0.12.2 so there is nothing to change.

The bug here was a regression, which has since been fixed. It's not necessarily expected to have a speed improvement between those versions if you weren't testing the new code path.

In your case, the model is running partially on the CPU so this is the likely main cause of slowness.


@UncleRedz commented on GitHub (Sep 26, 2025):

I've changed GPU since last time, from a 4060 8GB to a 5060 Ti 16GB, so the Qwen3 14B test results are not that useful anymore as the model now fits entirely into VRAM; however, the 30B-A3B is still relevant for testing. To be honest, I don't see much of a difference here between 0.11.6, 0.12.0 and 0.12.2. My understanding is that the fix should be in the 0.12.0 release.

As you can see below, there is still a huge difference in time to first token between the 0.11.6 old engine and the 0.12.0/0.12.2 new engine. I've rerun 0.12.0 and 0.12.2 several times with cold and warm starts (pre-loaded LLM); while the timing varies a little, the variation is minor compared to the change between the old engine and the new.

| Model | GPU | Config | Size (GB) | CPU/GPU | VRAM (MiB) | Time To First Token | Tokens / Sec |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Qwen3 30B-A3B Q4_K_M | RTX 4060 8GB | Default **0.11.6** | 25 | 67%/33% | 5918 / 8188 | 0:48 | 11.24 |
| Qwen3 30B-A3B Q4_K_M | RTX 5060 Ti 16GB | Default **0.11.6** | 25 | 35%/65% | 13617 / 16311 | **0:20** | 17.27 |
| Qwen3 30B-A3B Q4_K_M | RTX 4060 8GB | New Engine, New Estimates, Flash Attention **0.11.6** | 21 | 65%/35% | 7282 / 8188 | 3:28 | 10.54 |
| Qwen3 30B-A3B Q4_K_M | RTX 5060 Ti 16GB | New Engine, New Estimates, Flash Attention **0.11.6** | 21 | 25%/75% | 15401 / 16311 | **1:27** | 21.48 |
| Qwen3 30B-A3B Q4_K_M | RTX 5060 Ti 16GB | New Engine, New Estimates, Flash Attention **0.12.0** | 21 | 25%/75% | 15405 / 16311 | **1:28** | 21.79 |
| Qwen3 30B-A3B Q4_K_M | RTX 5060 Ti 16GB | New Engine, New Estimates, Flash Attention **0.12.2** | 21 | 25%/75% | 15405 / 16311 | **1:33** | 21.49 |

@moontato commented on GitHub (Sep 26, 2025):

@jessegross , thank you for the information. I think a clearer way to phrase the question would be:

“Should I clear the OLLAMA_NEW_ENGINE variable?”

…considering that the performance degradation with the new engine still appears to persist.


@jessegross commented on GitHub (Sep 26, 2025):

@UncleRedz It does look like there is still an issue here in the partial offloading case. That being said, you might want to check 0.12.2 with default settings to get a more apples-to-apples comparison. New engine and new estimates are on by default for qwen3moe, but flash attention is an extra variable being changed in these tests. It looks like it may also have an impact with partial offloads.

@moontato gpt-oss is new engine only, so there is nothing that can be changed.


@UncleRedz commented on GitHub (Sep 28, 2025):

@jessegross I've added a test without flash attention, and I've also removed the RTX 4060 test results from this table to make it clearer. While flash attention does add about 8-14 seconds, that's small compared to the time difference between the old engine and the new.

Please let me know if there are any specific tests or logs that would be of further use.

| Model | GPU | Config | Size (GB) | CPU/GPU | VRAM (MiB) | Time To First Token | Tokens / Sec |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Qwen3 30B-A3B Q4_K_M | RTX 5060 Ti 16GB | Default **0.11.6** | 25 | 35%/65% | 13617 / 16311 | **0:20** | 17.27 |
| Qwen3 30B-A3B Q4_K_M | RTX 5060 Ti 16GB | New Engine, New Estimates **0.12.2** | 24 | 35%/65% | 15547 / 16311 | **1:19** | 21.72 |
| Qwen3 30B-A3B Q4_K_M | RTX 5060 Ti 16GB | New Engine, New Estimates, Flash Attention **0.11.6** | 21 | 25%/75% | 15401 / 16311 | **1:27** | 21.48 |
| Qwen3 30B-A3B Q4_K_M | RTX 5060 Ti 16GB | New Engine, New Estimates, Flash Attention **0.12.0** | 21 | 25%/75% | 15405 / 16311 | **1:28** | 21.79 |
| Qwen3 30B-A3B Q4_K_M | RTX 5060 Ti 16GB | New Engine, New Estimates, Flash Attention **0.12.2** | 21 | 25%/75% | 15405 / 16311 | **1:33** | 21.49 |

For comparison, when everything fits in VRAM, the new engine and new estimates are doing great.

| Model | GPU | Config | Size (GB) | CPU/GPU | VRAM (MiB) | Time To First Token | Tokens / Sec |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Qwen3 14B Q4_K_M | RTX 5060 Ti 16GB | Default **0.11.6** | 17 | 4%/96% | 14225 / 16311 | 0:19 | 24.05 |
| Qwen3 14B Q4_K_M | RTX 5060 Ti 16GB | New Engine, New Estimates, Flash Attention **0.11.6** | 13 | 100% | 12599 / 16311 | 0:07 | 31.97 |

@Maltz42 commented on GitHub (Oct 6, 2025):

The testing I did was on a larger model with a 24k context window that spilled over into system RAM, and that had a much larger impact on prompt evaluation than the results above. Here is my --verbose output (from my duplicate issue above). I'm happy to run more detailed testing using various combinations of New Engine / New Estimates / Flash Attention if requested.

(qwen3:235b-a22b-instruct-2507-q8_0, which runs on the old engine on <0.12.2)

```
0.12.4rc5:
total duration:       23m5.891375436s
load duration:        54.853551ms
prompt eval count:    24490 token(s)
prompt eval duration: 22m13.003072681s
prompt eval rate:     18.37 tokens/s
eval count:           248 token(s)
eval duration:        46.448428687s
eval rate:            5.34 tokens/s

0.12.1:
total duration:       4m18.250874851s
load duration:        51.181565ms
prompt eval count:    24483 token(s)
prompt eval duration: 2m44.736125262s
prompt eval rate:     148.62 tokens/s
eval count:           266 token(s)
eval duration:        1m17.38724237s
eval rate:            3.44 tokens/s
```

@Maltz42 commented on GitHub (Oct 7, 2025):

I should add, since I mentioned it in the duplicate issue I opened but haven't seen it stated here: what I was seeing appeared to indicate that when a model is too big to fit in VRAM, the old engine seems to leave room for the context window in VRAM, so prompt eval occurs there. The new engine appears to use system RAM and the CPU for prompt eval instead. That makes response generation perform better, since more of the model is in VRAM, but makes prompt evaluation significantly slower. The fuller the context window, the more pronounced the effect becomes.
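
To put rough numbers on why the placement of the context matters so much, here is a sketch of the standard grouped-query-attention KV-cache size formula; the layer, head, and dimension values are assumptions taken from the Qwen3 30B-A3B model card, not from this thread:

```python
# Rough KV-cache size for a GQA model at full context. K and V each store
# ctx_len vectors of n_kv_heads * head_dim elements per layer.
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

# Assumed shape for Qwen3 30B-A3B: 48 layers, 4 KV heads, head dim 128.
gib = kv_cache_bytes(48, 4, 128, 24576) / 2**30
print(f"~{gib:.2f} GiB at f16")  # ~2.25 GiB; roughly half with Q8 KV cache
```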
