[GH-ISSUE #10217] mistral-small3.1 is not loaded fully to GPU on RX 7900 XTX #32464

Closed
opened 2026-04-22 13:45:24 -05:00 by GiteaMirror · 1 comment

Originally created by @shilga on GitHub (Apr 10, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10217

What is the issue?

As the title says. The card has 24 GB of VRAM, but Ollama decides to use only about 12 GB. If I set num_gpu manually, the model loads fully onto the GPU and runs fine. Other, larger models load correctly.
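
For anyone hitting the same symptom, the num_gpu workaround can be applied in a few ways. A minimal sketch, assuming the stock mistral-small3.1 tag; the layer count of 41 comes from the layers.model value in the log below:

```shell
# Option 1: per-request, via the API options field
curl http://localhost:11434/api/chat -d '{
  "model": "mistral-small3.1",
  "messages": [{"role": "user", "content": "hello"}],
  "options": {"num_gpu": 41}
}'

# Option 2: interactively in the REPL
#   ollama run mistral-small3.1
#   >>> /set parameter num_gpu 41

# Option 3: baked into a derived model via a Modelfile
cat > Modelfile <<'EOF'
FROM mistral-small3.1
PARAMETER num_gpu 41
EOF
ollama create mistral-small3.1-fullgpu -f Modelfile
```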

Relevant log output

2025/04/10 14:11:42 routes.go:1231: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-04-10T14:11:42.848Z level=INFO source=images.go:458 msg="total blobs: 29"
time=2025-04-10T14:11:42.848Z level=INFO source=images.go:465 msg="total unused blobs removed: 0"
time=2025-04-10T14:11:42.848Z level=INFO source=routes.go:1298 msg="Listening on [::]:11434 (version 0.6.5)"
time=2025-04-10T14:11:42.848Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-04-10T14:11:42.851Z level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2025-04-10T14:11:42.852Z level=INFO source=amd_linux.go:386 msg="amdgpu is supported" gpu=GPU-978ecbf0abc221c7 gpu_type=gfx1100
time=2025-04-10T14:11:42.852Z level=INFO source=types.go:130 msg="inference compute" id=GPU-978ecbf0abc221c7 library=rocm variant="" compute=gfx1100 driver=0.0 name=1002:744c total="24.0 GiB" available="24.0 GiB"
time=2025-04-10T14:13:40.382Z level=INFO source=server.go:105 msg="system memory" total="15.3 GiB" free="13.6 GiB" free_swap="7.5 GiB"
time=2025-04-10T14:13:40.383Z level=INFO source=server.go:138 msg=offload library=rocm layers.requested=-1 layers.model=41 layers.offload=39 layers.split="" memory.available="[24.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="24.4 GiB" memory.required.partial="23.7 GiB" memory.required.kv="640.0 MiB" memory.required.allocations="[23.7 GiB]" memory.weights.total="13.1 GiB" memory.weights.repeating="12.7 GiB" memory.weights.nonrepeating="360.0 MiB" memory.graph.full="426.7 MiB" memory.graph.partial="426.7 MiB" projector.weights="769.3 MiB" projector.graph="8.8 GiB"
time=2025-04-10T14:13:40.415Z level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.pretokenizer default="[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]*[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]+|[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]+[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]*|\\p{N}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n/]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-04-10T14:13:40.420Z level=WARN source=ggml.go:152 msg="key not found" key=mistral3.rope.freq_scale default=1
time=2025-04-10T14:13:40.420Z level=WARN source=ggml.go:152 msg="key not found" key=mistral3.vision.attention.layer_norm_epsilon default=9.999999747378752e-06
time=2025-04-10T14:13:40.420Z level=WARN source=ggml.go:152 msg="key not found" key=mistral3.vision.longest_edge default=1540
time=2025-04-10T14:13:40.420Z level=WARN source=ggml.go:152 msg="key not found" key=mistral3.text_config.rms_norm_eps default=9.999999747378752e-06
time=2025-04-10T14:13:40.420Z level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-1fa8532d986d729117d6b5ac2c884824d0717c9468094554fd1d36412c740cfc --ctx-size 4096 --batch-size 512 --n-gpu-layers 39 --threads 16 --no-mmap --parallel 1 --port 37351"
time=2025-04-10T14:13:40.421Z level=INFO source=sched.go:451 msg="loaded runners" count=1
time=2025-04-10T14:13:40.421Z level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
time=2025-04-10T14:13:40.422Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
time=2025-04-10T14:13:40.429Z level=INFO source=runner.go:816 msg="starting ollama engine"
time=2025-04-10T14:13:40.430Z level=INFO source=runner.go:879 msg="Server listening on 127.0.0.1:37351"
time=2025-04-10T14:13:40.470Z level=WARN source=ggml.go:152 msg="key not found" key=general.name default=""
time=2025-04-10T14:13:40.470Z level=WARN source=ggml.go:152 msg="key not found" key=general.description default=""
time=2025-04-10T14:13:40.470Z level=INFO source=ggml.go:67 msg="" architecture=mistral3 file_type=Q4_K_M name="" description="" num_tensors=585 num_key_values=43
time=2025-04-10T14:13:40.673Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"
/opt/amdgpu/share/libdrm/amdgpu.ids: No such file or directory
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
  Device 0: AMD Radeon Graphics, gfx1100 (0x1100), VMM: no, Wave Size: 32
load_backend: loaded ROCm backend from /usr/lib/ollama/rocm/libggml-hip.so
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
time=2025-04-10T14:13:42.144Z level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-04-10T14:13:42.145Z level=INFO source=ggml.go:289 msg="model weights" buffer=CPU size="2.0 GiB"
time=2025-04-10T14:13:42.145Z level=INFO source=ggml.go:289 msg="model weights" buffer=ROCm0 size="12.4 GiB"
time=2025-04-10T14:13:51.069Z level=INFO source=ggml.go:388 msg="compute graph" backend=ROCm0 buffer_type=ROCm0
time=2025-04-10T14:13:51.069Z level=INFO source=ggml.go:388 msg="compute graph" backend=CPU buffer_type=ROCm_Host
time=2025-04-10T14:13:51.069Z level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.pretokenizer default="[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]*[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]+|[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]+[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]*|\\p{N}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n/]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-04-10T14:13:51.071Z level=WARN source=ggml.go:152 msg="key not found" key=mistral3.rope.freq_scale default=1
time=2025-04-10T14:13:51.071Z level=WARN source=ggml.go:152 msg="key not found" key=mistral3.vision.attention.layer_norm_epsilon default=9.999999747378752e-06
time=2025-04-10T14:13:51.071Z level=WARN source=ggml.go:152 msg="key not found" key=mistral3.vision.longest_edge default=1540
time=2025-04-10T14:13:51.071Z level=WARN source=ggml.go:152 msg="key not found" key=mistral3.text_config.rms_norm_eps default=9.999999747378752e-06
time=2025-04-10T14:13:51.199Z level=INFO source=server.go:619 msg="llama runner started in 10.78 seconds"
[GIN] 2025/04/10 - 14:13:52 | 200 | 11.905303051s |     169.254.1.2 | POST     "/api/chat"
[GIN] 2025/04/10 - 14:13:53 | 200 |  1.198407122s |     169.254.1.2 | POST     "/api/chat"
[GIN] 2025/04/10 - 14:13:54 | 200 |  970.314595ms |     169.254.1.2 | POST     "/api/chat"
[GIN] 2025/04/10 - 14:14:34 | 200 | 11.822899328s |     169.254.1.2 | POST     "/api/chat"
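
A rough reading of the offload line above (an interpretation, not a confirmed diagnosis): the scheduler estimated memory.required.full="24.4 GiB" against the 24.0 GiB reported as available, so it fell back to a partial offload of 39 of the model's 41 layers, and the 8.8 GiB projector.graph reservation for the vision projector dominates that estimate. That would explain why only 12.4 GiB of weights end up in the ROCm0 buffer while 2.0 GiB stay on the CPU, despite the GPU sitting half empty:

```shell
# Back-of-the-envelope sum of the estimate's components (assuming the
# scheduler simply adds them against the 24.0 GiB VRAM budget):
#   memory.weights.total    13.1 GiB
#   memory.required.kv       0.6 GiB
#   memory.graph.full        0.4 GiB
#   projector.weights        0.8 GiB
#   projector.graph          8.8 GiB   <- unusually large reservation
#   --------------------------------
#   total                  ~23.7 GiB   (matches memory.required.allocations)
```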

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.6.5

GiteaMirror added the bug label 2026-04-22 13:45:24 -05:00

@rick-github commented on GitHub (Apr 10, 2025):

#10167
