[GH-ISSUE #13368] Ministral-3 only loads on the CPU #34588

Closed
opened 2026-04-22 18:17:09 -05:00 by GiteaMirror · 12 comments

Originally created by @mgraffam on GitHub (Dec 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13368

### What is the issue?

I have 4 Nvidia P4 cards and can run models such as gemma3:27b on them. However, ministral-3:14b and even ministral-3:8b refuse to load on the GPUs. Memory is allocated briefly and intermittently before Ollama gives up and just runs them on the CPUs. I get "model layout did not fit" messages and cudaMalloc out-of-memory errors.

I believe this to be a bug, as I've never encountered Ollama being unable to split models across these cards before.

### Relevant log output

```shell
Dec 07 17:17:27 amethyst ollama[48292]: time=2025-12-07T17:17:27.147Z level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.40
Dec 07 17:17:27 amethyst ollama[48292]: time=2025-12-07T17:17:27.148Z level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.50
Dec 07 17:17:27 amethyst ollama[48292]: time=2025-12-07T17:17:27.148Z level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.60
Dec 07 17:17:27 amethyst ollama[48292]: time=2025-12-07T17:17:27.149Z level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:16 GPULayers:35[ID:GPU-e0af048b-b9f8-ec62-4b32-c5d4074b4c7f Layers:15(0..14)  ID:GPU-8555a492-a491-ee07-5087-66f1f7948710 Layers:15(15..29) ID:GPU-c8219bb9-ba4e-7e8d-6c7b-13ba825c367e Layers:5(30..34)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 07 17:17:28 amethyst ollama[48292]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9199.70 MiB on device 3: cudaMalloc failed: out of memory
Dec 07 17:17:28 amethyst ollama[48292]: ggml_gallocr_reserve_n: failed to allocate CUDA3 buffer of size 9646586240
```

### OS

Linux

### GPU

Nvidia

### CPU

Intel

### Ollama version

0.13.1

GiteaMirror added the bug label 2026-04-22 18:17:09 -05:00

@rick-github commented on GitHub (Dec 7, 2025):

Full server log will aid in debugging, but it's likely that the size of the vision component (~9G) of the model is larger than the available VRAM in a single GPU, as the vision component needs to load on a single device. 0.13.2 enables flash attention for the vision component and will make it easier to fit.
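
A rough way to check this is to compare that ~9 GiB figure against the free VRAM on each individual card. A minimal sketch (the nvidia-smi invocation is standard tooling, not something from this issue):

```shell
# Sketch: show per-card free VRAM; the ~9 GiB vision/compute buffer has to
# fit entirely on one of these devices rather than being split across them.
nvidia-smi --query-gpu=index,name,memory.total,memory.free --format=csv
```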


@mgraffam commented on GitHub (Dec 7, 2025):

I guess that seems right. I just tried the 3b model and it also tries to allocate 9G.


@yuheho7749 commented on GitHub (Dec 8, 2025):

I'm on an RX 9070 XT. It looks like the "compute graph" is 9.2 GiB regardless of the model (tried both 14b and 8b):

```
time=2025-12-07T20:58:25.466-08:00 level=INFO source=device.go:240 msg="model weights" device=ROCm0 size="4.1 GiB"
time=2025-12-07T20:58:25.466-08:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="1.5 GiB"
time=2025-12-07T20:58:25.466-08:00 level=INFO source=device.go:251 msg="kv cache" device=ROCm0 size="153.0 MiB"
time=2025-12-07T20:58:25.466-08:00 level=INFO source=device.go:262 msg="compute graph" device=ROCm0 size="9.2 GiB"
time=2025-12-07T20:58:25.466-08:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="8.0 MiB"
time=2025-12-07T20:58:25.466-08:00 level=INFO source=ggml.go:482 msg="offloading 34 repeating layers to GPU"
time=2025-12-07T20:58:25.466-08:00 level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
time=2025-12-07T20:58:25.466-08:00 level=INFO source=ggml.go:494 msg="offloaded 34/35 layers to GPU"
time=2025-12-07T20:58:25.466-08:00 level=INFO source=device.go:272 msg="total memory" size="15.0 GiB"
time=2025-12-07T20:58:25.466-08:00 level=INFO source=sched.go:517 msg="loaded runners" count=1
```

@brucestephens commented on GitHub (Dec 8, 2025):

I doubt it's simply VRAM size. I have a Framework desktop with 128 GB of unified memory. Running deepseek-r1:32b, `ollama ps` says it's using 30 GB, 100% on the GPU. Running ministral-3:8b, it says 20 GB, 100% on the CPU.

(0.13.2-rc2 doesn't resolve it, but that's probably expected.)


@rick-github commented on GitHub (Dec 8, 2025):

evo-x2, 128G unified RAM.

```
$ ollama -v
ollama version is 0.13.1
$ ollama ps
NAME               ID              SIZE     PROCESSOR    CONTEXT    UNTIL
ministral-3:8b     77300ee7514e    21 GB    100% GPU     32768      Forever
deepseek-r1:32b    edba8017331d    30 GB    100% GPU     32768      Forever
```

```
$ ollama -v
ollama version is 0.13.2-rc2
$ ollama ps
NAME               ID              SIZE     PROCESSOR    CONTEXT    UNTIL
ministral-3:8b     77300ee7514e    21 GB    100% GPU     32768      Forever
deepseek-r1:32b    edba8017331d    30 GB    100% GPU     32768      Forever
```

@brucestephens commented on GitHub (Dec 8, 2025):

How interesting. Maybe I've misconfigured it somehow:

```
$ ollama -v
ollama version is 0.13.1
$ ollama ps
NAME               ID              SIZE     PROCESSOR    CONTEXT    UNTIL
deepseek-r1:32b    edba8017331d    30 GB    100% GPU     32000      3 minutes from now
ministral-3:8b     77300ee7514e    20 GB    100% CPU     32000      2 minutes from now
```

```
$ ollama -v
ollama version is 0.13.2-rc2
$ ollama ps
NAME               ID              SIZE     PROCESSOR    CONTEXT    UNTIL
deepseek-r1:32b    edba8017331d    30 GB    100% GPU     32000      4 minutes from now
ministral-3:8b     77300ee7514e    11 GB    100% CPU     32000      4 minutes from now
```

(The context value is a bit odd, but changing it to 32768 didn't help!)


@rick-github commented on GitHub (Dec 8, 2025):

[Server log](https://docs.ollama.com/troubleshooting) will aid in debugging.
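
If Ollama is running as the default systemd service on Linux, a sketch along these lines collects it (the service name is an assumption based on the standard install):

```shell
# Sketch: dump the full Ollama server log from the systemd journal
# (assumes the "ollama" service created by the standard Linux install).
journalctl -u ollama --no-pager > ollama-server.log
```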


@spitzerd commented on GitHub (Dec 8, 2025):

I faced the same issue with Ollama 0.13.1, running Windows 11 with an AMD RX 6700 XT and Vulkan enabled. Ministral-3 8b did not run on the GPU, but other models did. After upgrading to 0.13.2, Ministral-3 8b runs on the GPU too.


@brucestephens commented on GitHub (Dec 8, 2025):

Here's the server log for ministral-3:8b and deepseek-r1:7b. (I apologise for not trimming them, but I just don't understand the logs well enough to know what I can safely clip. They're not _that_ big in any case.)

[ministral-3b.log](https://github.com/user-attachments/files/24039640/ministral-3b.log)

[deepseek.log](https://github.com/user-attachments/files/24039647/deepseek.log)


@rick-github commented on GitHub (Dec 8, 2025):

Both of these logs show no GPU usage:

```
ministral-3b.log:Dec 08 18:45:01 framework ollama[132721]: time=2025-12-08T18:45:01.855Z level=INFO source=ggml.go:494 msg="offloaded 0/35 layers to GPU"
deepseek.log:Dec 08 18:49:39 framework ollama[132721]: load_tensors: offloaded 0/29 layers to GPU
```

There is apparently no available VRAM:

```
Dec 08 18:45:00 framework ollama[132721]: time=2025-12-08T18:45:00.543Z level=INFO source=sched.go:450 msg="gpu memory" id=0 library=ROCm available="0 B" free="219.4 MiB" minimum="457.0 MiB" overhead="0 B"
```

If you include the start of the log there will be information about device detection.
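
A sketch of one way to capture that startup section, assuming the same systemd service as above:

```shell
# Sketch: restart the service so device detection runs again, then keep the
# first part of the fresh log, which contains the "inference compute" lines.
sudo systemctl restart ollama
journalctl -u ollama --since "1 minute ago" --no-pager | head -n 100
```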


@brucestephens commented on GitHub (Dec 8, 2025):

Thanks, I was obviously confusing myself.

I had extracted `ollama-linux-amd64-rocm.tgz` as well as `ollama-linux-amd64.tgz`, and I suspect that was a mistake. The ROCm driver reports low memory, so it is presumably not being used. And I didn't have `OLLAMA_VULKAN` set, so the Vulkan backend was not being used either. Now that I've corrected that (setting `OLLAMA_VULKAN` in `/etc/systemd/system/ollama.service.d/override.conf` and, perhaps unnecessarily, removing the `lib/ollama/rocm` files), things seem to be working better. (The ROCm driver failing is quite possibly something I haven't set up correctly on the system, but I seem to remember reading that in some situations Vulkan works better anyway.)
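
A sketch of what such a drop-in might look like; the variable name and file path come from this comment, while the value "1" and the restart steps are assumptions:

```shell
# Sketch: enable the Vulkan backend for a systemd-managed Ollama install.
# OLLAMA_VULKAN and the override.conf path are from this thread; the value
# "1" is an assumption.
sudo mkdir -p /etc/systemd/system/ollama.service.d
sudo tee /etc/systemd/system/ollama.service.d/override.conf >/dev/null <<'EOF'
[Service]
Environment="OLLAMA_VULKAN=1"
EOF
sudo systemctl daemon-reload
sudo systemctl restart ollama
```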

The logs now show the Vulkan driver being used and layers being offloaded to the GPU:

```
Dec 08 21:55:11 framework ollama[5966]: time=2025-12-08T21:55:11.087Z level=INFO source=types.go:42 msg="inference compute" id=00000000-c200-0000-0000-000000000000 filter_id="" library=Vulkan compute=0.0 name=Vulkan0 description="Radeon 8060S Graphics (RADV GFX1151)" libdirs=ollama,vulkan driver=0.0 pci_id=0000:c2:00.0 type=iGPU total="63.0 GiB" available="62.7 GiB"
Dec 08 21:55:32 framework ollama[5966]: time=2025-12-08T21:55:32.889Z level=INFO source=ggml.go:482 msg="offloading 34 repeating layers to GPU"
```

@mgraffam commented on GitHub (Dec 9, 2025):

0.13.2 resolves this for me with multiple P4s. I'm able to load ministral-3:8b with `num_ctx` set to 147456 entirely onto the GPU, and can probably tweak that a smidge higher.
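
For reference, a sketch of one way to request a context that large for a single call through the API; the `num_ctx` value is just the one mentioned above, not a recommendation, and whether it fits depends on your VRAM:

```shell
# Sketch: ask for a large context on one request via the Ollama API options.
# num_ctx=147456 is simply the value mentioned in this comment.
curl http://localhost:11434/api/generate -d '{
  "model": "ministral-3:8b",
  "prompt": "Say hello.",
  "options": { "num_ctx": 147456 }
}'
```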
