[GH-ISSUE #13312] ministral-3:3b on jetson #55305

Closed
opened 2026-04-29 08:49:11 -05:00 by GiteaMirror · 8 comments
Owner

Originally created by @chrisqianz on GitHub (Dec 3, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13312

What is the issue?

ministral-3:3b on a Jetson Nano 8 GB runs on the CPU, not the GPU.

Relevant log output


OS

Linux

GPU

Other

CPU

Other

Ollama version

0.13.1

GiteaMirror added the bug label 2026-04-29 08:49:11 -05:00
Author
Owner

@dan-and commented on GitHub (Dec 3, 2025):

Please attach at least an Ollama log file, so that someone can help you out.

Ideally, include a log of starting another model where the GPU is shown working, and then the same run with ministral-3:3b.

It may also help to set OLLAMA_DEBUG to 2, so that enough logging is enabled.
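For a systemd-managed install on Linux, one way to enable this (a sketch; the unit name `ollama` assumes the standard Linux install) is a drop-in override:

```
# sudo systemctl edit ollama, then add:
[Service]
Environment="OLLAMA_DEBUG=2"

# then restart and follow the logs:
#   sudo systemctl restart ollama
#   journalctl -u ollama -f
```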

Author
Owner

@OriolCanillasGautier commented on GitHub (Dec 3, 2025):

The same happens with Quadro cards (P2000 and P2200) on Ubuntu, using the latest version (0.13.1). Qwen3 8B, despite being bigger, is much faster because it runs on the GPU properly.
models:
hf.co/unsloth/Qwen3-8B-GGUF:Q4_K_XL 5.1 GB
ministral-3:3b 3.0 GB

test:
user@pc:~$ ollama run ministral-3:3b --verbose

>>> salut, comment vas tu ?
Salut ! 😊 Je vais très bien, merci de demander ! Merci d'avoir utilisé Mistral AI et Le Chat – c'est super d'interagir avec une IA comme ça.

Comment puis-je t'aider aujourd'hui ? 😊
Tu veux parler de quelque chose en particulier ?

total duration: 13.246541963s
load duration: 114.184842ms
prompt eval count: 546 token(s)
prompt eval duration: 276.902436ms
prompt eval rate: 1971.81 tokens/s
eval count: 62 token(s)
eval duration: 12.825729914s
eval rate: 4.83 tokens/s

>>> /exit

user@pc:~$ ollama run hf.co/unsloth/Qwen3-8B-GGUF:Q4_K_XL --verbose

>>> salut, comment vas tu ?

<think>
Okay, the user greeted me in French with "salut, comment vas tu?" which means "hello, how are you?" I should respond in French to keep the conversation friendly.

I need to make sure my response is polite and acknowledges their greeting. Maybe start with "Bonjour!" to match their "salut." Then, I should mention that I'm just a virtual assistant and don't have feelings, but I'm here to help.

I should keep it simple and welcoming. Let them know I'm ready to assist with any questions they have. Also, offer to switch to English if they prefer.

Wait, the user might be testing my French skills, so I should ensure the grammar and vocabulary are correct. Use common phrases to sound natural. Avoid any complex structures.

Check if there's any cultural nuance I should be aware of. In France, people often use "salut" in casual settings, so a friendly response is appropriate.

Make sure the response is concise but friendly. Don't add unnecessary information. Focus on inviting them to ask for help.

Double-check the spelling and grammar in French. "Bonjour" is correct, "je suis un assistant virtuel" is right. "Je n'ai pas de sentiments" is accurate.

Yes, that should work. Let them know I'm here to help and offer to switch languages if needed. Keep it open-ended so they feel comfortable to ask anything.
</think>

Bonjour ! Je suis un assistant virtuel, donc je n'ai pas de sentiments, mais je suis toujours prêt à t'aider ! Comment puis-je te soutenir aujourd'hui ? 😊 Si tu préfères parler en anglais, je peux aussi le faire.

total duration: 20.294071176s
load duration: 81.823643ms
prompt eval count: 15 token(s)
prompt eval duration: 119.445925ms
prompt eval rate: 125.58 tokens/s
eval count: 352 token(s)
eval duration: 19.950884058s
eval rate: 17.64 tokens/s

>>> /exit

logs:

de des. 03 11:04:23 PC127 ollama[43993]: [GIN] 2025/12/03 - 11:04:23 | 200 | 6.459833852s | 127.0.0.1 | POST "/api/generate"
de des. 03 11:04:51 PC127 ollama[43993]: [GIN] 2025/12/03 - 11:04:51 | 200 | 21.30351002s | 127.0.0.1 | POST "/api/chat"
de des. 03 11:05:05 PC127 ollama[43993]: [GIN] 2025/12/03 - 11:05:05 | 200 | 15.846µs | 127.0.0.1 | HEAD "/"
de des. 03 11:05:05 PC127 ollama[43993]: [GIN] 2025/12/03 - 11:05:05 | 200 | 63.458039ms | 127.0.0.1 | POST "/api/show"
de des. 03 11:05:05 PC127 ollama[43993]: [GIN] 2025/12/03 - 11:05:05 | 200 | 61.967128ms | 127.0.0.1 | POST "/api/show"
de des. 03 11:05:05 PC127 ollama[43993]: [GIN] 2025/12/03 - 11:05:05 | 200 | 112.93445ms | 127.0.0.1 | POST "/api/generate"
de des. 03 11:05:22 PC127 ollama[43993]: [GIN] 2025/12/03 - 11:05:22 | 200 | 11.442655141s | 127.0.0.1 | POST "/api/chat"
de des. 03 11:07:45 PC127 ollama[43993]: [GIN] 2025/12/03 - 11:07:45 | 200 | 16.384µs | 127.0.0.1 | HEAD "/"
de des. 03 11:07:45 PC127 ollama[43993]: [GIN] 2025/12/03 - 11:07:45 | 200 | 42.915114ms | 127.0.0.1 | POST "/api/show"
de des. 03 11:08:00 PC127 ollama[43993]: [GIN] 2025/12/03 - 11:08:00 | 200 | 17.047µs | 127.0.0.1 | HEAD "/"
de des. 03 11:08:00 PC127 ollama[43993]: [GIN] 2025/12/03 - 11:08:00 | 200 | 56.876196ms | 127.0.0.1 | POST "/api/show"
de des. 03 11:08:29 PC127 ollama[43993]: [GIN] 2025/12/03 - 11:08:29 | 200 | 24.123µs | 127.0.0.1 | HEAD "/"
de des. 03 11:08:29 PC127 ollama[43993]: [GIN] 2025/12/03 - 11:08:29 | 200 | 816.548µs | 127.0.0.1 | GET "/api/tags"
de des. 03 11:09:28 PC127 ollama[43993]: [GIN] 2025/12/03 - 11:09:28 | 200 | 16.407µs | 127.0.0.1 | HEAD "/"
de des. 03 11:09:28 PC127 ollama[43993]: [GIN] 2025/12/03 - 11:09:28 | 200 | 59.522863ms | 127.0.0.1 | POST "/api/show"
de des. 03 11:09:28 PC127 ollama[43993]: [GIN] 2025/12/03 - 11:09:28 | 200 | 54.554258ms | 127.0.0.1 | POST "/api/show"
de des. 03 11:09:28 PC127 ollama[43993]: [GIN] 2025/12/03 - 11:09:28 | 200 | 117.108614ms | 127.0.0.1 | POST "/api/generate"
de des. 03 11:09:52 PC127 ollama[43993]: [GIN] 2025/12/03 - 11:09:52 | 200 | 13.246677519s | 127.0.0.1 | POST "/api/chat"
de des. 03 11:10:07 PC127 ollama[43993]: [GIN] 2025/12/03 - 11:10:07 | 200 | 16.527µs | 127.0.0.1 | HEAD "/"
de des. 03 11:10:07 PC127 ollama[43993]: [GIN] 2025/12/03 - 11:10:07 | 200 | 39.700793ms | 127.0.0.1 | POST "/api/show"
de des. 03 11:10:07 PC127 ollama[43993]: [GIN] 2025/12/03 - 11:10:07 | 200 | 38.664421ms | 127.0.0.1 | POST "/api/show"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.309+01:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 37561"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.470+01:00 level=INFO source=sched.go:583 msg="updated VRAM based on existing loaded models" gpu=GPU-d5703024-26e4-303b-8104-7d33cce0eeb1 library=CUDA total="5.0 GiB" available="4.9 GiB"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.470+01:00 level=INFO source=sched.go:583 msg="updated VRAM based on existing loaded models" gpu=GPU-cccd2f20-3d96-8813-4051-968238266813 library=CUDA total="5.0 GiB" available="4.9 GiB"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.518+01:00 level=INFO source=server.go:209 msg="enabling flash attention"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.519+01:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-34a514d08f7449cb4a694a707aaa2eedccb7bb68290121bf5e5a569b2abe71c3 --port 39445"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.519+01:00 level=INFO source=sched.go:443 msg="system memory" total="93.0 GiB" free="80.9 GiB" free_swap="8.0 GiB"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.519+01:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-d5703024-26e4-303b-8104-7d33cce0eeb1 library=CUDA available="4.5 GiB" free="4.9 GiB" minimum="457.0 MiB" overhead="0 B"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.519+01:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-cccd2f20-3d96-8813-4051-968238266813 library=CUDA available="4.4 GiB" free="4.9 GiB" minimum="457.0 MiB" overhead="0 B"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.519+01:00 level=INFO source=server.go:702 msg="loading model" "model layers"=37 requested=-1
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.530+01:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.530+01:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:39445"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.541+01:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:8 GPULayers:37[ID:GPU-d5703024-26e4-303b-8104-7d33cce0eeb1 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.564+01:00 level=INFO source=ggml.go:136 msg="" architecture=qwen3 file_type=Q4_K_M name=Qwen3-8B description="" num_tensors=399 num_key_values=33
de des. 03 11:10:07 PC127 ollama[43993]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so
de des. 03 11:10:07 PC127 ollama[43993]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
de des. 03 11:10:07 PC127 ollama[43993]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
de des. 03 11:10:07 PC127 ollama[43993]: ggml_cuda_init: found 2 CUDA devices:
de des. 03 11:10:07 PC127 ollama[43993]: Device 0: Quadro P2200, compute capability 6.1, VMM: yes, ID: GPU-d5703024-26e4-303b-8104-7d33cce0eeb1
de des. 03 11:10:07 PC127 ollama[43993]: Device 1: Quadro P2000, compute capability 6.1, VMM: yes, ID: GPU-cccd2f20-3d96-8813-4051-968238266813
de des. 03 11:10:07 PC127 ollama[43993]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.613+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.729+01:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:8 GPULayers:37[ID:GPU-d5703024-26e4-303b-8104-7d33cce0eeb1 Layers:20(0..19) ID:GPU-cccd2f20-3d96-8813-4051-968238266813 Layers:17(20..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.831+01:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:8 GPULayers:37[ID:GPU-d5703024-26e4-303b-8104-7d33cce0eeb1 Layers:20(0..19) ID:GPU-cccd2f20-3d96-8813-4051-968238266813 Layers:17(20..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.886+01:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:8 GPULayers:37[ID:GPU-d5703024-26e4-303b-8104-7d33cce0eeb1 Layers:20(0..19) ID:GPU-cccd2f20-3d96-8813-4051-968238266813 Layers:17(20..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.886+01:00 level=INFO source=ggml.go:482 msg="offloading 36 repeating layers to GPU"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.886+01:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.886+01:00 level=INFO source=ggml.go:494 msg="offloaded 37/37 layers to GPU"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.886+01:00 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="2.2 GiB"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.886+01:00 level=INFO source=device.go:240 msg="model weights" device=CUDA1 size="2.2 GiB"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.886+01:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="333.8 MiB"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.886+01:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="320.0 MiB"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.886+01:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA1 size="256.0 MiB"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.886+01:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="118.0 MiB"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.886+01:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA1 size="110.0 MiB"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.886+01:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="8.0 MiB"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.886+01:00 level=INFO source=device.go:272 msg="total memory" size="5.6 GiB"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.886+01:00 level=INFO source=sched.go:517 msg="loaded runners" count=2
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.887+01:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
de des. 03 11:10:07 PC127 ollama[43993]: time=2025-12-03T11:10:07.888+01:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
de des. 03 11:10:09 PC127 ollama[43993]: time=2025-12-03T11:10:09.393+01:00 level=INFO source=server.go:1332 msg="llama runner started in 1.87 seconds"
de des. 03 11:10:09 PC127 ollama[43993]: [GIN] 2025/12/03 - 11:10:09 | 200 | 2.162078457s | 127.0.0.1 | POST "/api/generate"
de des. 03 11:10:31 PC127 ollama[43993]: [GIN] 2025/12/03 - 11:10:31 | 200 | 20.29410894s | 127.0.0.1 | POST "/api/chat"


Author
Owner

@chrisqianz commented on GitHub (Dec 3, 2025):

The log shows out of memory 😂, but qwen3:4b does run on the GPU.
12月 03 18:13:23 chris-jetson ollama[3235]: NvMapMemAllocInternalTagged: 1075072515 error 12
12月 03 18:13:23 chris-jetson ollama[3235]: NvMapMemHandleAlloc: error 0
12月 03 18:13:23 chris-jetson ollama[3235]: NvMapMemAllocInternalTagged: 1075072515 error 12
12月 03 18:13:23 chris-jetson ollama[3235]: NvMapMemHandleAlloc: error 0
12月 03 18:13:23 chris-jetson ollama[3235]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
12月 03 18:13:23 chris-jetson ollama[3235]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
12月 03 18:13:23 chris-jetson ollama[3235]: time=2025-12-03T18:13:23.094+08:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:6 GPULayers:2[ID:GPU-614792f7-

NAME ID SIZE PROCESSOR CONTEXT UNTIL
ministral-3:3b a48e77f25d79 13 GB 100% CPU 4096 4 minutes from now

NAME ID SIZE PROCESSOR CONTEXT UNTIL
qwen3:4b 359d7dd4bcda 3.6 GB 100% GPU 4096 4 minutes from now

Author
Owner

@quiet23 commented on GitHub (Dec 3, 2025):

I am having the same issue with ministral-3:8b and ministral-3:14b running on a P104 8 GB + GT 1030 2 GB. phi4:14b is partially offloaded to the GPU, but ministral is not.
The ministral-3:8b and 14b logs both contain almost the same lines (kudos to @chrisqianz for spotting the out-of-memory error):
8b:

ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.31 MiB on device 0: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668199424

14b:

ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760

Why does ollama try to allocate 9.2 GB for either model, despite the difference in model size?
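As a quick sanity check (my own arithmetic, not from the logs themselves), the byte counts in the two `ggml_gallocr_reserve_n` lines convert exactly to the MiB figures printed by `cudaMalloc`, confirming both models request essentially the same ~9.2 GiB compute buffer:

```python
# Byte counts from the failed-allocation lines, paired with the
# MiB figures printed in the corresponding cudaMalloc messages.
allocations = [(9668199424, 9220.31),   # ministral-3:8b
               (9668469760, 9220.57)]   # ministral-3:14b

for nbytes, mib_logged in allocations:
    mib = nbytes / 2**20  # bytes -> MiB
    print(f"{nbytes} bytes = {mib:.2f} MiB")
    assert round(mib, 2) == mib_logged
```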

ollama 0.13.1 release.

ollama_phi4.log

ollama_ministral-3_8b.log

ollama_ministral-3_14b.log

@puresick commented on GitHub (Dec 3, 2025):

I am experiencing the same with ollama 0.13.1 running inside a podman container on my AMD Ryzen AI 9 HX 370 CPU with Radeon 890M GPU and 96 GB of memory (of which 32 GB are assigned to the GPU as VRAM). I am using the Vulkan API for GPU acceleration.

Testing with ministral-3:3b, ministral-3:8b, and ministral-3:14b, each ends up running on the CPU rather than the GPU.

For comparison, qwen3:30b-a3b runs flawlessly on the GPU.

One thing I noticed is that my GPU usage starts to grow when starting any of the ministral models, but then quickly falls back to idle and the CPU picks up running the model.

Attached you'll find the logs for running ministral-3:3b and qwen3:30b-a3b on my hardware.

[ollama-logs-ministral.txt](https://github.com/user-attachments/files/23910591/ollama-logs-ministral.txt)

[ollama-logs-qwen.txt](https://github.com/user-attachments/files/23910586/ollama-logs-qwen.txt)

<!-- gh-comment-id:3607321769 -->
@dan-and commented on GitHub (Dec 3, 2025):

Currently, ministral-3 uses quite a large amount of memory:

```
$ ollama ps
NAME               ID              SIZE     PROCESSOR    CONTEXT    UNTIL
ministral-3:14b    8a5cdca192c0    19 GB    100% GPU     4096       59 minutes from now
ministral-3:8b     77300ee7514e    16 GB    100% GPU     4096       59 minutes from now
ministral-3:3b     a48e77f25d79    13 GB    100% GPU     4096       57 minutes from now
```

@quiet23, please note that when running a model across more than one GPU, you can subtract 2 GB per additional GPU. Therefore the 1030 with 2 GB will not help you at all, unless you have a small model that runs on the 1030 alone.

@puresick
Based on the logs, Ollama (via Vulkan) reports only 8 GB of VRAM.
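Applying that rule of thumb to @quiet23's setup, the second card contributes nothing. This is a sketch of the heuristic as stated above, not Ollama's actual memory accounting:

```python
# Rule of thumb from this thread: usable VRAM across GPUs loses roughly
# 2 GB per additional GPU beyond the first. Values in GB; illustrative only.
def usable_vram_gb(gpu_vram_gb):
    return sum(gpu_vram_gb) - 2 * (len(gpu_vram_gb) - 1)

print(usable_vram_gb([8, 2]))  # P104 8 GB + GT 1030 2 GB
print(usable_vram_gb([8]))     # P104 alone
```

Both configurations come out to about 8 GB usable, which is why the GT 1030 does not help here.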

<!-- gh-comment-id:3607953866 -->
@deetungsten commented on GitHub (Dec 3, 2025):

> Currently, ministral-3 uses quite a good amount of memory:

I'm assuming this is a bug with Ollama? I don't have any issues with llama.cpp, following [unsloth's instructions](https://docs.unsloth.ai/new/ministral-3#llama.cpp-run-ministral-3-14b-instruct-tutorial) with their [GGUF](https://huggingface.co/unsloth/Ministral-3-3B-Instruct-2512-GGUF).
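For reference, a minimal llama.cpp invocation along those lines might look as follows. Flag names assume a recent llama.cpp build, and the model filename is a placeholder for whichever quant you downloaded from the unsloth repo:

```shell
# Run the unsloth GGUF with all layers offloaded to the GPU (-ngl 99),
# a 4k context (-c), and a test prompt (-p). Model path is a placeholder.
llama-cli -m Ministral-3-3B-Instruct-2512-Q4_K_M.gguf \
  -ngl 99 -c 4096 -p "Hello"
```

If this fully offloads while Ollama falls back to CPU, that would point at Ollama's memory estimation rather than the GGUF itself.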

<!-- gh-comment-id:3608802185 -->
@puresick commented on GitHub (Dec 4, 2025):

@dan-and Thanks for pointing this out!

Is there a way to configure Ollama's Vulkan implementation to use more than 8 GB of VRAM?
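One way to see what the Vulkan driver itself exposes, before blaming Ollama, is to inspect the reported memory heaps. This assumes the vulkan-tools package is installed inside the container:

```shell
# Print the memory heaps the Vulkan driver reports; the DEVICE_LOCAL
# heap is what a Vulkan application will treat as VRAM.
vulkaninfo | grep -A 3 "memoryHeaps"
```

On an iGPU like the 890M, the amount of device-local memory is typically set by the BIOS/UMA carve-out rather than by Ollama, so if the driver only reports 8 GB here, the fix is likely at the firmware or kernel level.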

<!-- gh-comment-id:3610916649 -->
Reference: github-starred/ollama#55305