[GH-ISSUE #13002] Ollama sometimes using CPU instead of GPU #55122

Closed
opened 2026-04-29 08:22:14 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @DNAScanner on GitHub (Nov 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13002

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

The first request (`ollama run gpt-oss:20b`) worked normally: computation appeared to run on my GPU and its VRAM.
A few minutes later, I sent another message in the same conversation. By then the model had already been unloaded, so it had to be loaded again. This time it ended up split between VRAM and regular RAM: while the model was generating text, my CPU sat at ~55% utilization and my GPU at 16%, whereas during the first request there was barely any load on my CPU and 99-100% on my GPU.

I have an AMD Radeon RX 7900 XT.
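
A quick way to confirm the split (a sketch, not from the original report, assuming a stock Linux install where Ollama runs as the systemd service `ollama`) is to check `ollama ps` after each load. The `OLLAMA_KEEP_ALIVE:5m0s` value in the server config below matches the unload after a few minutes; raising keep-alive only avoids the reload, it does not fix the discovery failure itself.

```shell
# After a model loads, the PROCESSOR column of `ollama ps` shows the split,
# e.g. "100% GPU" on a healthy load vs. something like "40%/60% CPU/GPU" here.
ollama ps

# Stopgap only: keep models resident so the broken re-load never happens.
sudo systemctl edit ollama
# In the override, add:
#   [Service]
#   Environment="OLLAMA_KEEP_ALIVE=-1"
sudo systemctl restart ollama
```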

Relevant log output

Nov 07 19:46:39 dnascanner ollama[842]: time=2025-11-07T19:46:39.336+01:00 level=INFO source=routes.go:1525 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/var/lib/ollama OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Nov 07 19:46:39 dnascanner ollama[842]: time=2025-11-07T19:46:39.337+01:00 level=INFO source=images.go:522 msg="total blobs: 5"
Nov 07 19:46:39 dnascanner ollama[842]: time=2025-11-07T19:46:39.337+01:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
Nov 07 19:46:39 dnascanner ollama[842]: time=2025-11-07T19:46:39.338+01:00 level=INFO source=routes.go:1578 msg="Listening on 127.0.0.1:11434 (version 0.12.10)"
Nov 07 19:46:39 dnascanner ollama[842]: time=2025-11-07T19:46:39.339+01:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
Nov 07 19:46:39 dnascanner ollama[842]: time=2025-11-07T19:46:39.342+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 37789"
Nov 07 19:46:45 dnascanner ollama[842]: time=2025-11-07T19:46:45.053+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 41993"
Nov 07 19:46:45 dnascanner ollama[842]: time=2025-11-07T19:46:45.053+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 43901"
Nov 07 19:46:50 dnascanner systemd-coredump[935]: [🡕] Process 909 (ollama) of user 964 dumped core.
                                                  
                                                  Stack trace of thread 911:
                                                  #0  0x00007ff402e9894c n/a (libc.so.6 + 0x9894c)
                                                  #1  0x00007ff402e3e410 raise (libc.so.6 + 0x3e410)
                                                  #2  0x00007ff402e2557a abort (libc.so.6 + 0x2557a)
                                                  #3  0x00007ff370a99642 _ZL18rocblas_abort_oncev (librocblas.so.4 + 0xa99642)
                                                  #4  0x00007ff370a9959f rocblas_abort (librocblas.so.4 + 0xa9959f)
                                                  #5  0x00007ff3708b675f _ZN12_GLOBAL__N_123get_library_and_adapterEPSt10shared_ptrIN7Tensile21MasterSolutionLibraryINS1_18ContractionProblemENS1_19ContractionSolutionEEEEPS0_I20hipDeviceProp_tR0600Ei (librocblas.so.4 + 0x8b675f)
                                                  #6  0x00007ff37b76bb92 n/a (libggml-hip.so + 0x16bb92)
                                                  #7  0x00007ff37b76c943 ggml_backend_cuda_reg (libggml-hip.so + 0x16c943)
                                                  #8  0x00007ff37b76e1c6 ggml_backend_init (libggml-hip.so + 0x16e1c6)
                                                  #9  0x000055a47a727b5b n/a (/usr/bin/ollama + 0xe08b5b)
                                                  #10 0x000055a47a725ac2 n/a (/usr/bin/ollama + 0xe06ac2)
                                                  #11 0x000055a47a726efc n/a (/usr/bin/ollama + 0xe07efc)
                                                  #12 0x000055a4799e1d41 n/a (/usr/bin/ollama + 0xc2d41)
                                                  ELF object binary architecture: AMD x86-64
░░ Subject: Process 909 (ollama) dumped core
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░ Documentation: man:core(5)
░░ 
░░ Process 909 (ollama) crashed and dumped core.
░░ 
░░ This usually indicates a programming error in the crashing program and
░░ should be reported to its vendor as a bug.
Nov 07 19:46:50 dnascanner ollama[842]: time=2025-11-07T19:46:50.125+01:00 level=INFO source=runner.go:442 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH=[/usr/lib/ollama] extra_envs="map[GGML_CUDA_INIT:1 ROCR_VISIBLE_DEVICES:1]" error="runner crashed"
Nov 07 19:46:50 dnascanner ollama[842]: time=2025-11-07T19:46:50.515+01:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-4171ddc6812fc283 filter_id="" library=ROCm compute=gfx1100 name=ROCm0 description="AMD Radeon RX 7900 XT" libdirs=ollama driver=60443.48 pci_id=0000:03:00.0 type=discrete total="20.0 GiB" available="19.9 GiB"
Nov 07 19:46:50 dnascanner ollama[842]: time=2025-11-07T19:46:50.515+01:00 level=INFO source=routes.go:1619 msg="entering low vram mode" "total vram"="20.0 GiB" threshold="20.0 GiB"
Nov 07 19:47:42 dnascanner ollama[842]: [GIN] 2025/11/07 - 19:47:42 | 200 |       27.44µs |       127.0.0.1 | HEAD     "/"
Nov 07 19:47:42 dnascanner ollama[842]: [GIN] 2025/11/07 - 19:47:42 | 200 |   67.968406ms |       127.0.0.1 | POST     "/api/show"
Nov 07 19:47:42 dnascanner ollama[842]: [GIN] 2025/11/07 - 19:47:42 | 200 |   64.870959ms |       127.0.0.1 | POST     "/api/show"
Nov 07 19:47:42 dnascanner ollama[842]: time=2025-11-07T19:47:42.384+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 45879"
Nov 07 19:47:45 dnascanner ollama[842]: time=2025-11-07T19:47:45.387+01:00 level=INFO source=runner.go:442 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama ]" extra_envs=map[ROCR_VISIBLE_DEVICES:GPU-4171ddc6812fc283] error="failed to finish discovery before timeout"
Nov 07 19:47:45 dnascanner ollama[842]: time=2025-11-07T19:47:45.387+01:00 level=WARN source=runner.go:334 msg="unable to refresh free memory, using old values"
Nov 07 19:47:45 dnascanner ollama[842]: time=2025-11-07T19:47:45.468+01:00 level=INFO source=server.go:215 msg="enabling flash attention"
Nov 07 19:47:45 dnascanner ollama[842]: time=2025-11-07T19:47:45.469+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /var/lib/ollama/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb --port 33541"
Nov 07 19:47:45 dnascanner ollama[842]: time=2025-11-07T19:47:45.469+01:00 level=INFO source=server.go:653 msg="loading model" "model layers"=25 requested=-1
Nov 07 19:47:45 dnascanner ollama[842]: time=2025-11-07T19:47:45.469+01:00 level=INFO source=server.go:658 msg="system memory" total="62.0 GiB" free="51.4 GiB" free_swap="4.0 GiB"
Nov 07 19:47:45 dnascanner ollama[842]: time=2025-11-07T19:47:45.469+01:00 level=INFO source=server.go:665 msg="gpu memory" id=GPU-4171ddc6812fc283 library=ROCm available="19.5 GiB" free="19.9 GiB" minimum="457.0 MiB" overhead="0 B"
Nov 07 19:47:45 dnascanner ollama[842]: time=2025-11-07T19:47:45.475+01:00 level=INFO source=runner.go:1349 msg="starting ollama engine"
Nov 07 19:47:45 dnascanner ollama[842]: time=2025-11-07T19:47:45.475+01:00 level=INFO source=runner.go:1384 msg="Server listening on 127.0.0.1:33541"
Nov 07 19:47:45 dnascanner ollama[842]: time=2025-11-07T19:47:45.489+01:00 level=INFO source=runner.go:1222 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:8 GPULayers:25[ID:GPU-4171ddc6812fc283 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 07 19:47:45 dnascanner ollama[842]: time=2025-11-07T19:47:45.518+01:00 level=INFO source=ggml.go:136 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=459 num_key_values=32
Nov 07 19:47:50 dnascanner ollama[842]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Nov 07 19:47:50 dnascanner ollama[842]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Nov 07 19:47:50 dnascanner ollama[842]: ggml_cuda_init: found 1 ROCm devices:
Nov 07 19:47:50 dnascanner ollama[842]:   Device 0: AMD Radeon RX 7900 XT, gfx1100 (0x1100), VMM: no, Wave Size: 32, ID: GPU-4171ddc6812fc283
Nov 07 19:47:50 dnascanner ollama[842]: load_backend: loaded ROCm backend from /usr/lib/ollama/libggml-hip.so
Nov 07 19:47:50 dnascanner ollama[842]: load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
Nov 07 19:47:50 dnascanner ollama[842]: time=2025-11-07T19:47:50.144+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Nov 07 19:47:50 dnascanner ollama[842]: time=2025-11-07T19:47:50.671+01:00 level=INFO source=runner.go:1222 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:8 GPULayers:25[ID:GPU-4171ddc6812fc283 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 07 19:47:50 dnascanner ollama[842]: time=2025-11-07T19:47:50.725+01:00 level=INFO source=runner.go:1222 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:8 GPULayers:25[ID:GPU-4171ddc6812fc283 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 07 19:47:50 dnascanner ollama[842]: time=2025-11-07T19:47:50.725+01:00 level=INFO source=ggml.go:482 msg="offloading 24 repeating layers to GPU"
Nov 07 19:47:50 dnascanner ollama[842]: time=2025-11-07T19:47:50.725+01:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
Nov 07 19:47:50 dnascanner ollama[842]: time=2025-11-07T19:47:50.725+01:00 level=INFO source=device.go:212 msg="model weights" device=ROCm0 size="11.8 GiB"
Nov 07 19:47:50 dnascanner ollama[842]: time=2025-11-07T19:47:50.725+01:00 level=INFO source=device.go:217 msg="model weights" device=CPU size="1.1 GiB"
Nov 07 19:47:50 dnascanner ollama[842]: time=2025-11-07T19:47:50.725+01:00 level=INFO source=device.go:223 msg="kv cache" device=ROCm0 size="192.0 MiB"
Nov 07 19:47:50 dnascanner ollama[842]: time=2025-11-07T19:47:50.725+01:00 level=INFO source=device.go:234 msg="compute graph" device=ROCm0 size="131.1 MiB"
Nov 07 19:47:50 dnascanner ollama[842]: time=2025-11-07T19:47:50.725+01:00 level=INFO source=ggml.go:494 msg="offloaded 25/25 layers to GPU"
Nov 07 19:47:50 dnascanner ollama[842]: time=2025-11-07T19:47:50.725+01:00 level=INFO source=device.go:239 msg="compute graph" device=CPU size="5.6 MiB"
Nov 07 19:47:50 dnascanner ollama[842]: time=2025-11-07T19:47:50.725+01:00 level=INFO source=device.go:244 msg="total memory" size="13.2 GiB"
Nov 07 19:47:50 dnascanner ollama[842]: time=2025-11-07T19:47:50.725+01:00 level=INFO source=sched.go:500 msg="loaded runners" count=1
Nov 07 19:47:50 dnascanner ollama[842]: time=2025-11-07T19:47:50.725+01:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
Nov 07 19:47:50 dnascanner ollama[842]: time=2025-11-07T19:47:50.736+01:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server loading model"
Nov 07 19:47:56 dnascanner ollama[842]: time=2025-11-07T19:47:56.497+01:00 level=INFO source=server.go:1289 msg="llama runner started in 11.03 seconds"
Nov 07 19:47:56 dnascanner ollama[842]: [GIN] 2025/11/07 - 19:47:56 | 200 | 14.241432401s |       127.0.0.1 | POST     "/api/generate"
Nov 07 19:48:28 dnascanner ollama[842]: [GIN] 2025/11/07 - 19:48:28 | 200 | 26.075531412s |       127.0.0.1 | POST     "/api/chat"
Nov 07 19:53:28 dnascanner ollama[842]: time=2025-11-07T19:53:28.417+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 37851"
Nov 07 19:53:31 dnascanner ollama[842]: time=2025-11-07T19:53:31.420+01:00 level=INFO source=runner.go:442 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama ]" extra_envs=map[ROCR_VISIBLE_DEVICES:GPU-4171ddc6812fc283] error="failed to finish discovery before timeout"
Nov 07 19:53:31 dnascanner ollama[842]: time=2025-11-07T19:53:31.420+01:00 level=WARN source=runner.go:334 msg="unable to refresh free memory, using old values"
Nov 07 19:53:31 dnascanner ollama[842]: time=2025-11-07T19:53:31.420+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 42233"
Nov 07 19:53:33 dnascanner ollama[842]: time=2025-11-07T19:53:33.167+01:00 level=INFO source=runner.go:442 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama ]" extra_envs=map[ROCR_VISIBLE_DEVICES:GPU-4171ddc6812fc283] error="failed to finish discovery before timeout"
Nov 07 19:53:33 dnascanner ollama[842]: time=2025-11-07T19:53:33.167+01:00 level=WARN source=runner.go:334 msg="unable to refresh free memory, using old values"
Nov 07 19:53:33 dnascanner ollama[842]: time=2025-11-07T19:53:33.167+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 42003"
Nov 07 19:53:33 dnascanner ollama[842]: time=2025-11-07T19:53:33.167+01:00 level=INFO source=runner.go:442 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama ]" extra_envs=map[ROCR_VISIBLE_DEVICES:GPU-4171ddc6812fc283] error="failed to finish discovery before timeout"
Nov 07 19:53:33 dnascanner ollama[842]: time=2025-11-07T19:53:33.167+01:00 level=WARN source=runner.go:334 msg="unable to refresh free memory, using old values"
Nov 07 19:54:21 dnascanner ollama[842]: time=2025-11-07T19:54:21.564+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 36875"
Nov 07 19:54:24 dnascanner ollama[842]: time=2025-11-07T19:54:24.567+01:00 level=INFO source=runner.go:442 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama ]" extra_envs=map[ROCR_VISIBLE_DEVICES:GPU-4171ddc6812fc283] error="failed to finish discovery before timeout"
Nov 07 19:54:24 dnascanner ollama[842]: time=2025-11-07T19:54:24.567+01:00 level=WARN source=runner.go:334 msg="unable to refresh free memory, using old values"
Nov 07 19:54:24 dnascanner ollama[842]: time=2025-11-07T19:54:24.648+01:00 level=INFO source=server.go:215 msg="enabling flash attention"
Nov 07 19:54:24 dnascanner ollama[842]: time=2025-11-07T19:54:24.648+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /var/lib/ollama/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb --port 37019"
Nov 07 19:54:24 dnascanner ollama[842]: time=2025-11-07T19:54:24.648+01:00 level=INFO source=server.go:653 msg="loading model" "model layers"=25 requested=-1
Nov 07 19:54:24 dnascanner ollama[842]: time=2025-11-07T19:54:24.648+01:00 level=INFO source=server.go:658 msg="system memory" total="62.0 GiB" free="53.0 GiB" free_swap="4.0 GiB"
Nov 07 19:54:24 dnascanner ollama[842]: time=2025-11-07T19:54:24.648+01:00 level=INFO source=server.go:665 msg="gpu memory" id=GPU-4171ddc6812fc283 library=ROCm available="7.1 GiB" free="7.6 GiB" minimum="457.0 MiB" overhead="0 B"
Nov 07 19:54:24 dnascanner ollama[842]: time=2025-11-07T19:54:24.654+01:00 level=INFO source=runner.go:1349 msg="starting ollama engine"
Nov 07 19:54:24 dnascanner ollama[842]: time=2025-11-07T19:54:24.654+01:00 level=INFO source=runner.go:1384 msg="Server listening on 127.0.0.1:37019"
Nov 07 19:54:24 dnascanner ollama[842]: time=2025-11-07T19:54:24.660+01:00 level=INFO source=runner.go:1222 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:8 GPULayers:25[ID:GPU-4171ddc6812fc283 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 07 19:54:24 dnascanner ollama[842]: time=2025-11-07T19:54:24.696+01:00 level=INFO source=ggml.go:136 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=459 num_key_values=32
Nov 07 19:54:29 dnascanner ollama[842]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Nov 07 19:54:29 dnascanner ollama[842]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Nov 07 19:54:29 dnascanner ollama[842]: ggml_cuda_init: found 1 ROCm devices:
Nov 07 19:54:29 dnascanner ollama[842]:   Device 0: AMD Radeon RX 7900 XT, gfx1100 (0x1100), VMM: no, Wave Size: 32, ID: GPU-4171ddc6812fc283
Nov 07 19:54:29 dnascanner ollama[842]: load_backend: loaded ROCm backend from /usr/lib/ollama/libggml-hip.so
Nov 07 19:54:29 dnascanner ollama[842]: load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
Nov 07 19:54:29 dnascanner ollama[842]: time=2025-11-07T19:54:29.322+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Nov 07 19:54:29 dnascanner ollama[842]: time=2025-11-07T19:54:29.617+01:00 level=INFO source=runner.go:1222 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:8 GPULayers:15[ID:GPU-4171ddc6812fc283 Layers:15(9..23)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 07 19:54:29 dnascanner ollama[842]: time=2025-11-07T19:54:29.657+01:00 level=INFO source=runner.go:1222 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:8 GPULayers:15[ID:GPU-4171ddc6812fc283 Layers:15(9..23)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 07 19:54:29 dnascanner ollama[842]: time=2025-11-07T19:54:29.720+01:00 level=INFO source=runner.go:1222 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:8 GPULayers:15[ID:GPU-4171ddc6812fc283 Layers:15(9..23)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 07 19:54:29 dnascanner ollama[842]: time=2025-11-07T19:54:29.720+01:00 level=INFO source=device.go:212 msg="model weights" device=ROCm0 size="6.7 GiB"
Nov 07 19:54:29 dnascanner ollama[842]: time=2025-11-07T19:54:29.720+01:00 level=INFO source=device.go:217 msg="model weights" device=CPU size="6.2 GiB"
Nov 07 19:54:29 dnascanner ollama[842]: time=2025-11-07T19:54:29.720+01:00 level=INFO source=device.go:223 msg="kv cache" device=ROCm0 size="120.0 MiB"
Nov 07 19:54:29 dnascanner ollama[842]: time=2025-11-07T19:54:29.720+01:00 level=INFO source=device.go:228 msg="kv cache" device=CPU size="72.0 MiB"
Nov 07 19:54:29 dnascanner ollama[842]: time=2025-11-07T19:54:29.720+01:00 level=INFO source=device.go:234 msg="compute graph" device=ROCm0 size="232.7 MiB"
Nov 07 19:54:29 dnascanner ollama[842]: time=2025-11-07T19:54:29.720+01:00 level=INFO source=device.go:239 msg="compute graph" device=CPU size="5.6 MiB"
Nov 07 19:54:29 dnascanner ollama[842]: time=2025-11-07T19:54:29.720+01:00 level=INFO source=ggml.go:482 msg="offloading 15 repeating layers to GPU"
Nov 07 19:54:29 dnascanner ollama[842]: time=2025-11-07T19:54:29.720+01:00 level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
Nov 07 19:54:29 dnascanner ollama[842]: time=2025-11-07T19:54:29.720+01:00 level=INFO source=ggml.go:494 msg="offloaded 15/25 layers to GPU"
Nov 07 19:54:29 dnascanner ollama[842]: time=2025-11-07T19:54:29.720+01:00 level=INFO source=device.go:244 msg="total memory" size="13.3 GiB"
Nov 07 19:54:29 dnascanner ollama[842]: time=2025-11-07T19:54:29.720+01:00 level=INFO source=sched.go:500 msg="loaded runners" count=1
Nov 07 19:54:29 dnascanner ollama[842]: time=2025-11-07T19:54:29.721+01:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
Nov 07 19:54:29 dnascanner ollama[842]: time=2025-11-07T19:54:29.721+01:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server loading model"
Nov 07 19:54:30 dnascanner ollama[842]: time=2025-11-07T19:54:30.727+01:00 level=INFO source=server.go:1289 msg="llama runner started in 6.08 seconds"
Nov 07 19:55:29 dnascanner ollama[842]: [GIN] 2025/11/07 - 19:55:29 | 200 |          1m7s |       127.0.0.1 | POST     "/api/chat"
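
In short, the log shows the first discovery runner crashing inside rocBLAS (core dump above), every later discovery attempt timing out, and the second load falling back to old free-memory values (7.6 GiB free instead of 19.9 GiB), so only 15/25 layers were offloaded. A sketch for pulling just those lines out of the journal (assuming the stock systemd unit name `ollama`):

```shell
# Extract the discovery failures and the resulting layer-offload decisions.
journalctl -u ollama --no-pager \
  | grep -E 'failure during GPU discovery|unable to refresh free memory|offloaded [0-9]+/[0-9]+ layers'
```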

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.12.10

GiteaMirror added the amd, bug, linux labels 2026-04-29 08:22:16 -05:00
Author
Owner

@Muktarsadiq commented on GitHub (Nov 10, 2025):

hello, I'm interested in taking a look at this

Author
Owner

@dhiltgen commented on GitHub (Dec 5, 2025):

This should be fixed in 0.13.1
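
A sketch for picking up the fix (assuming the standard Linux install script, which also upgrades an existing install):

```shell
curl -fsSL https://ollama.com/install.sh | sh   # install/upgrade Ollama
ollama -v                                       # should report 0.13.1 or newer
ollama run gpt-oss:20b "hi"                     # reload the model...
ollama ps                                       # ...PROCESSOR should read "100% GPU"
```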

Reference: github-starred/ollama#55122