[GH-ISSUE #12922] 0.12.9 [Manjaro] Ollama isn't seeing recovered VRAM when switching models #55082

Closed
opened 2026-04-29 08:17:56 -05:00 by GiteaMirror · 1 comment

Originally created by @StrykeSlammerII on GitHub (Nov 3, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12922

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

I've had some trouble nailing down exactly what's occurring, so I'm happy to run additional tests and capture more logs.

In the log below, I started with a model that fits entirely in 13.1 GiB of VRAM. I then switched to a model that does not fit in VRAM, but Ollama only finds memory.available="[2.9 GiB]".
Even if Ollama can't use all the memory it finds, it should at least be able to release and re-detect the memory it was using previously.

radeontop shows VRAM usage dropping as expected when the first model is unloaded.
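A minimal way to watch this happen, assuming rocm-smi is available (radeontop works just as well) and using placeholder model names for the two models involved:

```shell
# Terminal 1: watch VRAM usage from the ROCm side.
watch -n 1 rocm-smi --showmeminfo vram

# Terminal 2: load a small model, then switch to one that doesn't fit.
ollama run small-model "hello"   # placeholder for the 13.1 GiB model
ollama ps                        # small model shown fully resident in VRAM
ollama run large-model "hello"   # placeholder for the 20.3 GiB model; evicts the small one
ollama ps                        # large model lands mostly on CPU even though VRAM was freed
```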

Stopping the Ollama server and restarting it shows

time=2025-11-03T04:52:00.226-05:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-b3d5d3574c66244c filtered_id="" library=ROCm compute=gfx1200 name=ROCm0 description="AMD Radeon Graphics" libdirs=ollama driver=60443.48 pci_id=0000:04:00.0 type=discrete total="15.9 GiB" available="15.4 GiB"

and radeontop shows 2GB VRAM used when Ollama is stopped.
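To rule out a leftover runner process holding that residual ~2 GB (most likely it's just the desktop), something like the following can list what currently has the GPU open; the render node path is a guess for this system:

```shell
# Show processes registered with the ROCm KFD driver.
rocm-smi --showpids

# Show which processes have the render node open (node number may differ).
sudo fuser -v /dev/dri/renderD128
```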

Restarting Ollama and starting with the larger model first eventually gives

time=2025-11-03T04:54:02.896-05:00 level=DEBUG source=sched.go:505 msg="finished setting up" runner.name=registry.ollama.ai/library/FI-oe:latest runner.inference="[{ID:GPU-b3d5d3574c66244c Library:ROCm}]" runner.size="20.3 GiB" runner.vram="15.2 GiB"

and radeontop shows 15.nn GB of VRAM in use, as expected, rather than the ~4 GB in use when switching from the small model to that same large model.

This is an AMD RX 9060 XT GPU with 16 GB of VRAM.
Note: I previously had some issues caused by leftover files from a manual install. We thought I had found and cleared all of those, but it's possible that some leftover settings or another system issue is the root cause.
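In case it helps rule that out, a quick check for remnants of a manual install alongside the Manjaro package might look like this (paths assume the standard Linux install locations):

```shell
# More than one result here usually means a manual install is still on PATH.
which -a ollama

# A manual install typically lives under /usr/local, the distro package under /usr.
ls -d /usr/local/lib/ollama /usr/lib/ollama 2>/dev/null

# Show the unit file and any environment overrides the running service uses.
systemctl cat ollama
```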

Relevant log output

time=2025-11-03T04:35:49.665-05:00 level=INFO source=server.go:483 msg="model requires more memory than is currently available, evicting a model to make space" estimate.library="" estimate.layers.requested=0 estimate.layers.model=0 estimate.layers.offload=0 estimate.layers.split=[] estimate.memory.available=[] estimate.memory.gpu_overhead="0 B" estimate.memory.required.full="0 B" estimate.memory.required.partial="0 B" estimate.memory.required.kv="0 B" estimate.memory.required.allocations=[] estimate.memory.weights.total="0 B" estimate.memory.weights.repeating="0 B" estimate.memory.weights.nonrepeating="0 B" estimate.memory.graph.full="0 B" estimate.memory.graph.partial="0 B"
time=2025-11-03T04:35:49.665-05:00 level=DEBUG source=sched.go:799 msg="found an idle runner to unload" runner.name=registry.ollama.ai/library/FI:latest runner.inference="[{ID:GPU-b3d5d3574c66244c Library:ROCm}]" runner.size="13.1 GiB" runner.vram="13.1 GiB" runner.parallel=1 runner.pid=3295827 runner.model=/home/strike/.ollama/models/blobs/sha256-6d066cd9848bf6d450119f1f56bfb9bcae51d576ed489f013af43f8f47b17ac0 runner.num_ctx=26107
time=2025-11-03T04:35:49.665-05:00 level=DEBUG source=sched.go:229 msg="resetting model to expire immediately to make room" runner.name=registry.ollama.ai/library/FI:latest runner.inference="[{ID:GPU-b3d5d3574c66244c Library:ROCm}]" runner.size="13.1 GiB" runner.vram="13.1 GiB" runner.parallel=1 runner.pid=3295827 runner.model=/home/strike/.ollama/models/blobs/sha256-6d066cd9848bf6d450119f1f56bfb9bcae51d576ed489f013af43f8f47b17ac0 runner.num_ctx=26107 refCount=0
time=2025-11-03T04:35:49.665-05:00 level=DEBUG source=sched.go:240 msg="waiting for pending requests to complete and unload to occur" runner.name=registry.ollama.ai/library/FI:latest runner.inference="[{ID:GPU-b3d5d3574c66244c Library:ROCm}]" runner.size="13.1 GiB" runner.vram="13.1 GiB" runner.parallel=1 runner.pid=3295827 runner.model=/home/strike/.ollama/models/blobs/sha256-6d066cd9848bf6d450119f1f56bfb9bcae51d576ed489f013af43f8f47b17ac0 runner.num_ctx=26107
time=2025-11-03T04:35:49.665-05:00 level=DEBUG source=sched.go:304 msg="runner expired event received" runner.name=registry.ollama.ai/library/FI:latest runner.inference="[{ID:GPU-b3d5d3574c66244c Library:ROCm}]" runner.size="13.1 GiB" runner.vram="13.1 GiB" runner.parallel=1 runner.pid=3295827 runner.model=/home/strike/.ollama/models/blobs/sha256-6d066cd9848bf6d450119f1f56bfb9bcae51d576ed489f013af43f8f47b17ac0 runner.num_ctx=26107
time=2025-11-03T04:35:49.665-05:00 level=DEBUG source=sched.go:319 msg="got lock to unload expired event" runner.name=registry.ollama.ai/library/FI:latest runner.inference="[{ID:GPU-b3d5d3574c66244c Library:ROCm}]" runner.size="13.1 GiB" runner.vram="13.1 GiB" runner.parallel=1 runner.pid=3295827 runner.model=/home/strike/.ollama/models/blobs/sha256-6d066cd9848bf6d450119f1f56bfb9bcae51d576ed489f013af43f8f47b17ac0 runner.num_ctx=26107
time=2025-11-03T04:35:49.665-05:00 level=DEBUG source=sched.go:342 msg="starting background wait for VRAM recovery" runner.name=registry.ollama.ai/library/FI:latest runner.inference="[{ID:GPU-b3d5d3574c66244c Library:ROCm}]" runner.size="13.1 GiB" runner.vram="13.1 GiB" runner.parallel=1 runner.pid=3295827 runner.model=/home/strike/.ollama/models/blobs/sha256-6d066cd9848bf6d450119f1f56bfb9bcae51d576ed489f013af43f8f47b17ac0 runner.num_ctx=26107
time=2025-11-03T04:35:49.665-05:00 level=DEBUG source=runner.go:267 msg="refreshing free memory"
time=2025-11-03T04:35:49.665-05:00 level=DEBUG source=server.go:1735 msg="llamarunner free vram reporting not supported"
time=2025-11-03T04:35:49.665-05:00 level=DEBUG source=runner.go:315 msg="existing runner discovery took" duration=2.811µs
time=2025-11-03T04:35:49.665-05:00 level=DEBUG source=runner.go:331 msg="unable to refresh all GPUs with existing runners, performing bootstrap discovery"
time=2025-11-03T04:35:49.665-05:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 33727"
time=2025-11-03T04:35:49.665-05:00 level=DEBUG source=server.go:401 msg=subprocess OLLAMA_FLASH_ATTENTION=1 OLLAMA_DEBUG=1 ROCM_PATH=/opt/rocm PATH=/home/strike/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/opt/rocm/bin:/usr/lib/rustup/bin LD_LIBRARY_PATH=/usr/lib/ollama: OLLAMA_LIBRARY_PATH=/usr/lib/ollama: ROCR_VISIBLE_DEVICES=GPU-b3d5d3574c66244c
time=2025-11-03T04:35:49.681-05:00 level=INFO source=runner.go:910 msg="starting go runner"
time=2025-11-03T04:35:49.681-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama
time=2025-11-03T04:35:52.668-05:00 level=INFO source=runner.go:498 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama ]" extra_envs=map[ROCR_VISIBLE_DEVICES:GPU-b3d5d3574c66244c] error="failed to finish discovery before timeout"
time=2025-11-03T04:35:52.668-05:00 level=DEBUG source=runner.go:471 msg="bootstrap discovery took" duration=3.00300616s OLLAMA_LIBRARY_PATH="[/usr/lib/ollama ]" extra_envs=map[ROCR_VISIBLE_DEVICES:GPU-b3d5d3574c66244c]
time=2025-11-03T04:35:52.668-05:00 level=WARN source=runner.go:358 msg="unable to refresh free memory, using old values"
time=2025-11-03T04:35:52.668-05:00 level=DEBUG source=runner.go:41 msg="overall device VRAM discovery took" duration=3.003102165s
time=2025-11-03T04:35:52.716-05:00 level=DEBUG source=server.go:1699 msg="stopping llama server" pid=3295827
time=2025-11-03T04:35:52.716-05:00 level=DEBUG source=server.go:1705 msg="waiting for llama server to exit" pid=3295827
time=2025-11-03T04:35:52.778-05:00 level=DEBUG source=server.go:1709 msg="llama server stopped" pid=3295827
time=2025-11-03T04:35:52.778-05:00 level=DEBUG source=sched.go:351 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="13.1 GiB" runner.vram="13.1 GiB" runner.parallel=1 runner.pid=3295827 runner.model=/home/strike/.ollama/models/blobs/sha256-6d066cd9848bf6d450119f1f56bfb9bcae51d576ed489f013af43f8f47b17ac0
time=2025-11-03T04:35:52.919-05:00 level=DEBUG source=runner.go:267 msg="refreshing free memory"
time=2025-11-03T04:35:52.919-05:00 level=DEBUG source=runner.go:331 msg="unable to refresh all GPUs with existing runners, performing bootstrap discovery"
time=2025-11-03T04:35:52.920-05:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 34349"
time=2025-11-03T04:35:52.920-05:00 level=DEBUG source=server.go:401 msg=subprocess OLLAMA_FLASH_ATTENTION=1 OLLAMA_DEBUG=1 ROCM_PATH=/opt/rocm PATH=/home/strike/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/opt/rocm/bin:/usr/lib/rustup/bin LD_LIBRARY_PATH=/usr/lib/ollama: OLLAMA_LIBRARY_PATH=/usr/lib/ollama: ROCR_VISIBLE_DEVICES=GPU-b3d5d3574c66244c
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
  Device 0: AMD Radeon Graphics, gfx1200 (0x1200), VMM: no, Wave Size: 32, ID: GPU-b3d5d3574c66244c
load_backend: loaded ROCm backend from /usr/lib/ollama/libggml-hip.so
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-alderlake.so
time=2025-11-03T04:35:55.138-05:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-11-03T04:35:55.139-05:00 level=INFO source=runner.go:946 msg="Server listening on 127.0.0.1:42803"
time=2025-11-03T04:35:55.920-05:00 level=INFO source=runner.go:498 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama ]" extra_envs=map[ROCR_VISIBLE_DEVICES:GPU-b3d5d3574c66244c] error="failed to finish discovery before timeout"
time=2025-11-03T04:35:55.920-05:00 level=DEBUG source=runner.go:471 msg="bootstrap discovery took" duration=3.000487808s OLLAMA_LIBRARY_PATH="[/usr/lib/ollama ]" extra_envs=map[ROCR_VISIBLE_DEVICES:GPU-b3d5d3574c66244c]
time=2025-11-03T04:35:55.920-05:00 level=WARN source=runner.go:358 msg="unable to refresh free memory, using old values"
time=2025-11-03T04:35:55.920-05:00 level=DEBUG source=runner.go:41 msg="overall device VRAM discovery took" duration=3.000748018s
time=2025-11-03T04:35:55.920-05:00 level=DEBUG source=runner.go:267 msg="refreshing free memory"
time=2025-11-03T04:35:55.920-05:00 level=DEBUG source=runner.go:331 msg="unable to refresh all GPUs with existing runners, performing bootstrap discovery"
time=2025-11-03T04:35:55.921-05:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 44781"
time=2025-11-03T04:35:55.921-05:00 level=DEBUG source=server.go:401 msg=subprocess OLLAMA_FLASH_ATTENTION=1 OLLAMA_DEBUG=1 ROCM_PATH=/opt/rocm PATH=/home/strike/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/opt/rocm/bin:/usr/lib/rustup/bin LD_LIBRARY_PATH=/usr/lib/ollama: OLLAMA_LIBRARY_PATH=/usr/lib/ollama: ROCR_VISIBLE_DEVICES=GPU-b3d5d3574c66244c
time=2025-11-03T04:35:57.669-05:00 level=INFO source=runner.go:498 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama ]" extra_envs=map[ROCR_VISIBLE_DEVICES:GPU-b3d5d3574c66244c] error="failed to finish discovery before timeout"
time=2025-11-03T04:35:57.670-05:00 level=DEBUG source=runner.go:471 msg="bootstrap discovery took" duration=1.749525577s OLLAMA_LIBRARY_PATH="[/usr/lib/ollama ]" extra_envs=map[ROCR_VISIBLE_DEVICES:GPU-b3d5d3574c66244c]
time=2025-11-03T04:35:57.670-05:00 level=WARN source=runner.go:358 msg="unable to refresh free memory, using old values"
time=2025-11-03T04:35:57.670-05:00 level=DEBUG source=runner.go:41 msg="overall device VRAM discovery took" duration=1.74964644s
time=2025-11-03T04:35:57.670-05:00 level=DEBUG source=sched.go:694 msg="gpu VRAM usage didn't recover within timeout" seconds=8.004390743 free_before="2.9 GiB" free_now="2.9 GiB" runner.size="13.1 GiB" runner.vram="13.1 GiB" runner.parallel=1 runner.pid=3295827 runner.model=/home/strike/.ollama/models/blobs/sha256-6d066cd9848bf6d450119f1f56bfb9bcae51d576ed489f013af43f8f47b17ac0
time=2025-11-03T04:35:57.670-05:00 level=DEBUG source=sched.go:354 msg="sending an unloaded event" runner.size="13.1 GiB" runner.vram="13.1 GiB" runner.parallel=1 runner.pid=3295827 runner.model=/home/strike/.ollama/models/blobs/sha256-6d066cd9848bf6d450119f1f56bfb9bcae51d576ed489f013af43f8f47b17ac0
time=2025-11-03T04:35:57.670-05:00 level=DEBUG source=sched.go:246 msg="unload completed" runner.size="13.1 GiB" runner.vram="13.1 GiB" runner.parallel=1 runner.pid=3295827 runner.model=/home/strike/.ollama/models/blobs/sha256-6d066cd9848bf6d450119f1f56bfb9bcae51d576ed489f013af43f8f47b17ac0
time=2025-11-03T04:35:57.670-05:00 level=DEBUG source=runner.go:267 msg="refreshing free memory"
time=2025-11-03T04:35:57.670-05:00 level=DEBUG source=runner.go:331 msg="unable to refresh all GPUs with existing runners, performing bootstrap discovery"
time=2025-11-03T04:35:57.670-05:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 43191"
time=2025-11-03T04:35:57.670-05:00 level=DEBUG source=server.go:401 msg=subprocess OLLAMA_FLASH_ATTENTION=1 OLLAMA_DEBUG=1 ROCM_PATH=/opt/rocm PATH=/home/strike/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/opt/rocm/bin:/usr/lib/rustup/bin LD_LIBRARY_PATH=/usr/lib/ollama: OLLAMA_LIBRARY_PATH=/usr/lib/ollama: ROCR_VISIBLE_DEVICES=GPU-b3d5d3574c66244c
time=2025-11-03T04:36:00.671-05:00 level=INFO source=runner.go:498 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama ]" extra_envs=map[ROCR_VISIBLE_DEVICES:GPU-b3d5d3574c66244c] error="failed to finish discovery before timeout"
time=2025-11-03T04:36:00.671-05:00 level=DEBUG source=runner.go:471 msg="bootstrap discovery took" duration=3.001587802s OLLAMA_LIBRARY_PATH="[/usr/lib/ollama ]" extra_envs=map[ROCR_VISIBLE_DEVICES:GPU-b3d5d3574c66244c]
time=2025-11-03T04:36:00.671-05:00 level=WARN source=runner.go:358 msg="unable to refresh free memory, using old values"
time=2025-11-03T04:36:00.671-05:00 level=DEBUG source=runner.go:41 msg="overall device VRAM discovery took" duration=3.001659036s
time=2025-11-03T04:36:00.674-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-11-03T04:36:00.674-05:00 level=DEBUG source=sched.go:204 msg="loading first model" model=/home/strike/.ollama/models/blobs/sha256-87010705f7c9be45e9a53a89aa34bdcd07039e1aedef71086204aefd0c023643
time=2025-11-03T04:36:00.678-05:00 level=INFO source=server.go:470 msg="system memory" total="62.5 GiB" free="45.3 GiB" free_swap="46.6 GiB"
time=2025-11-03T04:36:00.679-05:00 level=DEBUG source=memory.go:198 msg=evaluating library=ROCm gpu_count=1 available="[2.9 GiB]"
time=2025-11-03T04:36:00.679-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.vision.block_count default=0
time=2025-11-03T04:36:00.679-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.key_length default=128
time=2025-11-03T04:36:00.679-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.value_length default=128
time=2025-11-03T04:36:00.680-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.key_length default=128
time=2025-11-03T04:36:00.680-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.value_length default=128
time=2025-11-03T04:36:00.680-05:00 level=DEBUG source=ggml.go:611 msg="default cache size estimate" "attention MiB"=5280 "attention bytes"=5536481280 "recurrent MiB"=0 "recurrent bytes"=0
time=2025-11-03T04:36:00.680-05:00 level=DEBUG source=memory.go:198 msg=evaluating library=ROCm gpu_count=1 available="[2.9 GiB]"
time=2025-11-03T04:36:00.680-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.vision.block_count default=0
time=2025-11-03T04:36:00.681-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.key_length default=128
time=2025-11-03T04:36:00.681-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.value_length default=128
time=2025-11-03T04:36:00.681-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.key_length default=128
time=2025-11-03T04:36:00.681-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.value_length default=128
time=2025-11-03T04:36:00.682-05:00 level=DEBUG source=ggml.go:611 msg="default cache size estimate" "attention MiB"=5280 "attention bytes"=5536481280 "recurrent MiB"=0 "recurrent bytes"=0
time=2025-11-03T04:36:00.682-05:00 level=INFO source=server.go:522 msg=offload library=ROCm layers.requested=-1 layers.model=67 layers.offload=5 layers.split=[5] memory.available="[2.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="20.3 GiB" memory.required.partial="2.6 GiB" memory.required.kv="5.2 GiB" memory.required.allocations="[2.6 GiB]" memory.weights.total="13.9 GiB" memory.weights.repeating="13.8 GiB" memory.weights.nonrepeating="128.2 MiB" memory.graph.full="368.0 MiB" memory.graph.partial="444.1 MiB"
time=2025-11-03T04:36:00.683-05:00 level=INFO source=runner.go:845 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:24 GPULayers:5[ID:GPU-b3d5d3574c66244c Layers:5(61..65)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:true}"
llama_model_load_from_file_impl: using device ROCm0 (AMD Radeon Graphics) (0000:04:00.0) - 15760 MiB free
time=2025-11-03T04:36:00.684-05:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
time=2025-11-03T04:36:00.684-05:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: loaded meta data with 25 key-value pairs and 597 tensors from /home/strike/.ollama/models/blobs/sha256-87010705f7c9be45e9a53a89aa34bdcd07039e1aedef71086204aefd0c023643 (version GGUF V3 (latest))
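For context, the log above was captured with debug logging enabled (OLLAMA_DEBUG=1 is visible in the subprocess environment). On a systemd-managed install that can be turned on roughly like this; the unit name assumes the default ollama.service:

```shell
# Add an environment override for the service.
sudo systemctl edit ollama
#   [Service]
#   Environment="OLLAMA_DEBUG=1"

sudo systemctl restart ollama
journalctl -u ollama -f    # follow the debug log
```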

OS

Linux

GPU

AMD

CPU

Intel

Ollama version

ollama version is 0.12.9

GiteaMirror added the amd, bug, linux labels 2026-04-29 08:17:56 -05:00

@StrykeSlammerII commented on GitHub (Nov 7, 2025):

Confirmed 0.12.10 uses VRAM as expected when switching models.

Thanks! Closing this issue as resolved.

As a clarifying note: in 0.12.9, I had to stop and restart the Ollama server before it would pick up current VRAM usage. Leaving the server up and unused overnight still resulted in stale VRAM reporting.
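For reference, the restart workaround on a systemd-managed install amounts to something like this (unit name assumes the default ollama.service):

```shell
# Restart the server so GPU discovery runs again from a clean state.
sudo systemctl restart ollama

# Confirm the refreshed VRAM figure in the fresh startup log.
journalctl -u ollama --since "2 min ago" | grep "inference compute"
```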

Reference: github-starred/ollama#55082