[GH-ISSUE #11355] New engine cannot load all 41 layers for mistral-small3.2 on NVIDIA A10 24Gb #7490

Closed
opened 2026-04-12 19:34:09 -05:00 by GiteaMirror · 8 comments

Originally created by @fchahun on GitHub (Jul 10, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11355

What is the issue?

With the exact same parameters (num_ctx=16384, flash attention enabled, other parameters at default values), the same input image (1540 x 702), and the same prompt (see the request sketch after the list):

  • Ollama v0.6.8 accepts num_gpu=41, with all model layers loaded on the GPU, and runs at optimal speed. nvtop shows 19.04 GiB allocated in VRAM and GPU usage around 85-90% during inference.

[Image: nvtop screenshot under v0.6.8]

  • Ollama v0.9.6 with num_gpu=41 generates a CUDA memory allocation failure for the compute graph. The compute graph appears much larger (9 GiB) than what v0.6.8 reports (164 MiB).

Error: 500 {"error":"llama runner process has terminated: cudaMalloc failed: out of memory\nggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9646586240\ntime=2025-07-10T06:38:10.605Z level=DEBUG source=ggml.go:648 msg=\"compute graph\" nodes=1175 splits=1"}

  • Ollama v0.9.6 (or any version > 0.6.8) only accepts num_gpu=40. nvtop shows that only 18.43 GiB are allocated in VRAM and that GPU usage drops to 70-75% during inference, an observed 20% decrease in performance.

[Image: nvtop screenshot under v0.9.6]

The same phenomenon is observed with the previous mistral release, mistral-small3.
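For reference, the requests behind these measurements can be reproduced with something along the following lines (a sketch; the image file and prompt are placeholders, and flash attention is enabled server-side via OLLAMA_FLASH_ATTENTION=1 in the service file shown further down):

# Sketch of the request used against both versions; only num_gpu and the
# Ollama version change between runs. The image is sent base64-encoded.
IMG=$(base64 -w0 input.png)
curl -s http://localhost:11434/api/generate -d @- <<EOF
{
  "model": "mistral-small3.2",
  "prompt": "Extract the text from this image.",
  "images": ["$IMG"],
  "stream": false,
  "options": { "num_ctx": 16384, "num_gpu": 41 }
}
EOF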

Full logs with OLLAMA_DEBUG=1 are too large to be included in this issue, but the relevant lines that characterize the difference in behavior (with num_gpu=41) seem to be the following:

v0.6.8:

Jul 09 11:01:09 gr-d-gpu-test ollama[26033]: time=2025-07-09T11:01:09.995Z level=INFO source=server.go:139 msg=offload library=cuda layers.requested=41 layers.model=41 layers.offload=26 layers.split="" memory.available="[21.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="27.7 GiB" memory.required.partial="21.7 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[21.7 GiB]" memory.weights.total="13.0 GiB" memory.weights.repeating="12.5 GiB" memory.weights.nonrepeating="525.0 MiB" memory.graph.full="1.7 GiB" memory.graph.partial="1.7 GiB" projector.weights="738.4 MiB" projector.graph="8.8 GiB"
[...]
Jul 09 11:01:12 gr-d-gpu-test ollama[26033]: time=2025-07-09T11:01:12.835Z level=DEBUG source=server.go:634 msg="model load progress 1.00"
Jul 09 11:01:12 gr-d-gpu-test ollama[26033]: time=2025-07-09T11:01:12.844Z level=DEBUG source=ggml.go:550 msg="compute graph" nodes=1248 splits=2
Jul 09 11:01:12 gr-d-gpu-test ollama[26033]: time=2025-07-09T11:01:12.844Z level=INFO source=ggml.go:553 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="164.0 MiB"
Jul 09 11:01:12 gr-d-gpu-test ollama[26033]: time=2025-07-09T11:01:12.844Z level=INFO source=ggml.go:553 msg="compute graph" backend=CPU buffer_type=CPU size="10.0 MiB"
Jul 09 11:01:13 gr-d-gpu-test ollama[26033]: time=2025-07-09T11:01:13.087Z level=INFO source=server.go:628 msg="llama runner started in 3.02 seconds"
v0.9.6:

Jul 10 06:38:10 gr-d-gpu-test ollama[136100]: time=2025-07-10T06:38:10.008Z level=INFO source=server.go:175 msg=offload library=cuda layers.requested=41 layers.model=41 layers.offload=40 layers.split="" memory.available="[21.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="27.5 GiB" memory.required.partial="17.5 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[17.5 GiB]" memory.weights.total="13.0 GiB" memory.weights.repeating="12.5 GiB" memory.weights.nonrepeating="525.0 MiB" memory.graph.full="1.7 GiB" memory.graph.partial="1.7 GiB" projector.weights="738.4 MiB" projector.graph="8.8 GiB"
[...]
Jul 10 06:38:10 gr-d-gpu-test ollama[136100]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9199.70 MiB on device 0: cudaMalloc failed: out of memory
Jul 10 06:38:10 gr-d-gpu-test ollama[136100]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9646586240
Jul 10 06:38:10 gr-d-gpu-test ollama[136100]: time=2025-07-10T06:38:10.605Z level=DEBUG source=ggml.go:648 msg="compute graph" nodes=1175 splits=1
Jul 10 06:38:10 gr-d-gpu-test ollama[136100]: time=2025-07-10T06:38:10.606Z level=INFO source=ggml.go:666 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="9.0 GiB"
Jul 10 06:38:10 gr-d-gpu-test ollama[136100]: time=2025-07-10T06:38:10.606Z level=INFO source=ggml.go:666 msg="compute graph" backend=CPU buffer_type=CPU size="0 B"
Jul 10 06:38:10 gr-d-gpu-test ollama[136100]: panic: insufficient memory - required allocations: {InputWeights:377487360A CPU:{Name:CPU UUID: Weights:[0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U] Cache:[0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U] Graph:0A} GPUs:[{Name:CUDA0 UUID:GPU-cd782bd8-1ef3-68cb-3ff3-1fc87bc23dcd Weights:[357253120A 357253120A 357253120A 357253120A 357253120A 313999360A 313999360A 357253120A 312647680A 312647680A 357253120A 312647680A 312647680A 357253120A 312647680A 312647680A 357253120A 312647680A 312647680A 357253120A 312647680A 312647680A 357253120A 312647680A 312647680A 357253120A 312647680A 312647680A 357253120A 312647680A 312647680A 357253120A 312647680A 312647680A 357253120A 355901440A 355901440A 357253120A 355901440A 355901440A 1396150272A] Cache:[0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U] Graph:9646586240F}]}
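As a sanity check on the numbers above, the failed buffer and the logged compute-graph size are the same allocation reported in different units, and the itemized estimates from the offload line roughly account for it (quick arithmetic, assuming binary units):

# 9646586240 bytes is the buffer ggml_gallocr_reserve_n failed to allocate:
echo "scale=2; 9646586240/1024/1024" | bc        # 9199.70 -> the cudaMalloc line
echo "scale=2; 9646586240/1024/1024/1024" | bc   # 8.98    -> the "9.0 GiB" graph line
# The 8.8 GiB projector.graph estimate dominates. Summing the itemized parts:
echo "scale=1; 13.0 + 2.5 + 1.7 + 738.4/1024 + 8.8" | bc
# ~26.7 GiB, in the ballpark of memory.required.full="27.5 GiB"
# (the remainder is presumably other runtime overhead).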

What explains the difference in compute graph VRAM allocation between the two versions, for the same prompt and the same input image?

Relevant log output


OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.9.6

GiteaMirror added the bug label 2026-04-12 19:34:09 -05:00

@jessegross commented on GitHub (Jul 10, 2025):

The reported compute graph on 0.6.8 is only for the text portion, not the image. On 0.9.6 it includes the image as well. There are several inaccuracies in the memory estimation; sometimes the smaller sizes in 0.6.8 mean that you get lucky and can offload more to the GPU (resulting in higher performance), but in other cases it causes a crash.

If you're feeling adventurous, you can try https://github.com/ollama/ollama/pull/11090, which should avoid these problems.


@swtb3-ryder commented on GitHub (Jul 10, 2025):

Also seeing this problem with Qwen 30B A3B in int4 and gemma3 27B QAT. I used to be able to run these models without any issues on a single A10 with num_ctx=16k.

Suddenly I'm getting crashes similar to the OP's.
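One quick way to confirm how much of a model actually landed on the GPU after a load is ollama ps (a generic check, not specific to these models):

# After a request has loaded the model, inspect the CPU/GPU split:
ollama ps
# The PROCESSOR column reads "100% GPU" for a full offload, or a CPU/GPU
# percentage split when some layers spilled over to system memory.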


@fchahun commented on GitHub (Jul 11, 2025):

If you're feeling adventurous, you can try https://github.com/ollama/ollama/pull/11090, which should avoid these problems.

In fact, I had already noticed that post when you published it, and I am already using OLLAMA_NEW_ESTIMATES=1 in the Ollama service config file (see below), in the hope that it would solve the problem. It does not: the logs posted yesterday were obtained with this setting.

[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games"
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_FLASH_ATTENTION=1"
Environment="OLLAMA_NEW_ENGINE=1"
Environment="OLLAMA_NEW_ESTIMATES=1"
Environment="OLLAMA_DEBUG=1"
#Environment="OLLAMA_KV_CACHE_TYPE=q8_0"
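(Edits to this unit file only take effect after a daemon reload and service restart; to verify the environment was applied:)

# Reload systemd units and restart Ollama after editing the file above:
sudo systemctl daemon-reload
sudo systemctl restart ollama
# Confirm the environment variables were picked up:
systemctl show ollama --property=Environment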

The reported compute graph on 0.6.8 is only for the text portion, not the image. On 0.9.6 it includes the image as well.

OK, but in that case, shouldn't the VRAM size required for the compute graph depend on the image dimensions?

I tried again with a 10-fold reduction of the image dimensions, down to a ridiculous 254 x 116 pixels (which, incidentally, makes the image unusable for OCR), and I still get the same error message. Moreover, the log reports the same VRAM allocation estimates for the compute graph as with the original 1154 x 702 image.

[Image]

log_v0.9.6_num_gpu=41_new_estimates_small_image.txt
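(For reference, a downscaled test image like the one described above can be produced with e.g. ImageMagick; the tool and filenames here are illustrative, any resizer would do:)

# Force-resize the test image to exactly 254x116 pixels:
convert input.png -resize '254x116!' input_small.png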


@swtb3-ryder commented on GitHub (Jul 11, 2025):

Just some context: in my case I'm using just a text prompt, no image at all. But I still see the same issue with models that should fit on an A10.


@jessegross commented on GitHub (Jul 11, 2025):

In order for OLLAMA_NEW_ESTIMATES to be effective, you need to build from source from that PR; it has not been merged into mainline yet.

Memory needs to be allocated for the worst-case scenario, so the required memory does not depend on the resolution, or on whether there is an image at all.
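For anyone wanting to try the PR, checking out and building it from source looks roughly like this (a sketch; it assumes a Go toolchain and the usual Ollama build prerequisites for CUDA, per the repository's developer documentation):

# Fetch and build PR #11090 from source:
git clone https://github.com/ollama/ollama.git
cd ollama
git fetch origin pull/11090/head:pr-11090
git checkout pr-11090
go build .
# Run the locally built binary with the new estimates enabled:
OLLAMA_NEW_ESTIMATES=1 OLLAMA_DEBUG=1 ./ollama serve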


@fchahun commented on GitHub (Jul 12, 2025):

In order for OLLAMA_NEW_ESTIMATES to be effective, you need to build from source from that PR; it has not been merged into mainline yet.

OK. I will try to find time to do that and report results.

In #11090 you mentioned that:

The new engine is automatically enabled for the following architectures [including] mistral 3.

Until the OLLAMA_NEW_ESTIMATES feature is fully tested and merged into mainline, would it be possible, as a quick fix, to make OLLAMA_NEW_ENGINE=0 force the use of the old engine in the latest releases?

Otherwise, with stable Ollama versions, the performance penalty when running mistral-small3.x on an A10 is currently unavoidable without juggling multiple Ollama versions.


@jessegross commented on GitHub (Jul 14, 2025):

Until the OLLAMA_NEW_ESTIMATES feature is fully tested and merged into mainline, would it be possible, as a quick fix, to make OLLAMA_NEW_ENGINE=0 force the use of the old engine in the latest releases?

Otherwise, with stable Ollama versions, the performance penalty when running mistral-small3.x on an A10 is currently unavoidable without juggling multiple Ollama versions.

mistral-small3 has always run on the new engine in Ollama, so it can't be forced onto the old engine. What you are seeing is due to changes in memory estimation, but they need to stay this way to avoid the possibility of crashing.


@jessegross commented on GitHub (Sep 24, 2025):

I'm going to go ahead and close this now that the new memory management logic is on by default. If you continue to see problems, please file a new issue.

Reference: github-starred/ollama#7490