[GH-ISSUE #12528] Jetson Thor memory release issue part II #34074

Closed
opened 2026-04-22 17:18:59 -05:00 by GiteaMirror · 11 comments

Originally created by @acochrane on GitHub (Oct 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12528

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

This is a very similar issue to #12283.
Ollama on Jetson Thor with unified memory doesn't seem to release memory as models are forgotten.

The memory used when a model is loaded doesn't become free when the model is unloaded.
When a new model is loaded, or even when the same model is reloaded after ollama ps shows an empty list, the memory remains 'used' according to free -g.

This is with ollama built from the git tag v0.12.4-rc6.

I don't see a marked difference in behavior from the version in the docker container posted here: https://www.jetson-ai-lab.com/tutorial_ollama.html

After triggering a load with openwebui, we see the following

root@granite:/home/user/src/ollama# ollama ps
NAME ID SIZE PROCESSOR CONTEXT UNTIL
gpt-oss:120b f7f8e2f8f4e0 65 GB 100% GPU 8192 27 seconds from now
root@granite:/home/user/src/ollama# ollama ps
NAME ID SIZE PROCESSOR CONTEXT UNTIL
root@granite:/home/user/src/ollama# free -g
total used free shared buff/cache available
Mem: 122 64 3 0 55 58
Swap: 0 0 0
root@granite:/home/user/src/ollama# echo 3 > /proc/sys/vm/drop_caches
root@granite:/home/user/src/ollama# free -g
total used free shared buff/cache available
Mem: 122 3 119 0 0 119
Swap: 0 0 0

Again, triggering a load with openwebui causes the memory to become used.

root@granite:/home/user/src/ollama# free -g
total used free shared buff/cache available
Mem: 122 65 29 0 28 57
Swap: 0 0 0
root@granite:/home/user/src/ollama# ollama ps
NAME ID SIZE PROCESSOR CONTEXT UNTIL
gpt-oss:120b f7f8e2f8f4e0 65 GB 100% GPU 8192 4 minutes from now
root@granite:/home/user/src/ollama# ollama ps
NAME ID SIZE PROCESSOR CONTEXT UNTIL
gpt-oss:120b f7f8e2f8f4e0 65 GB 100% GPU 8192 2 minutes from now

After the model becomes forgotten, a query to the same model results in the following:

root@granite:/home/user/src/ollama# ollama ps
NAME ID SIZE PROCESSOR CONTEXT UNTIL
gpt-oss:120b f7f8e2f8f4e0 66 GB 6%/94% CPU/GPU 8192 4 minutes from now
root@granite:/home/user/src/ollama# free -g
total used free shared buff/cache available
Mem: 122 69 0 0 53 52
Swap: 0 0 0
root@granite:/home/user/src/ollama#

I think ollama should trigger a 'free' of the used memory when the model is forgotten. I think it has something to do with the log message, "gpu VRAM usage didn't recover within timeout".
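
For illustration only, a minimal Go sketch of what automating the manual drop_caches workaround shown above could look like; this is not something ollama does today, it needs root, and it discards all reclaimable cache system-wide, so it is a blunt instrument.

    package main

    import (
        "fmt"
        "os"
    )

    // dropCaches asks the kernel to discard clean page cache, dentries and
    // inodes; the same effect as: echo 3 > /proc/sys/vm/drop_caches
    func dropCaches() error {
        return os.WriteFile("/proc/sys/vm/drop_caches", []byte("3\n"), 0o200)
    }

    func main() {
        if err := dropCaches(); err != nil {
            fmt.Fprintln(os.Stderr, "drop_caches failed (root required?):", err)
            os.Exit(1)
        }
        fmt.Println("page cache dropped")
    }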

Relevant log output

user@granite:~/src/ollama$ OLLAMA_HOST=0.0.0.0:11434 go run . serve
time=2025-10-07T15:07:43.685-06:00 level=INFO source=routes.go:1475 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/user/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-10-07T15:07:43.690-06:00 level=INFO source=images.go:522 msg="total blobs: 30"
time=2025-10-07T15:07:43.691-06:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)

[GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func3 (5 handlers)
[GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func4 (5 handlers)
[GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete               --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST   /api/me                   --> github.com/ollama/ollama/server.(*Server).WhoamiHandler-fm (5 handlers)
[GIN-debug] POST   /api/signout              --> github.com/ollama/ollama/server.(*Server).SignoutHandler-fm (5 handlers)
[GIN-debug] DELETE /api/user/keys/:encodedKey --> github.com/ollama/ollama/server.(*Server).SignoutHandler-fm (5 handlers)
[GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] GET    /api/ps                   --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST   /api/generate             --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST   /api/embed                --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
time=2025-10-07T15:07:43.691-06:00 level=INFO source=routes.go:1528 msg="Listening on [::]:11434 (version 0.0.0)"
time=2025-10-07T15:07:43.692-06:00 level=INFO source=runner.go:80 msg="discovering available GPUs..."
time=2025-10-07T15:07:44.199-06:00 level=INFO source=types.go:111 msg="inference compute" id=GPU-a7c66ad2-6dbb-0ab8-c1a2-37ba6dba3600 library=CUDA compute=11.0 name=CUDA0 description="NVIDIA Thor" libdirs=ollama driver=13.0 pci_id=01:00.0 type=iGPU total="122.8 GiB" available="118.9 GiB"
time=2025-10-07T15:08:22.943-06:00 level=INFO source=server.go:216 msg="enabling flash attention"
time=2025-10-07T15:08:22.943-06:00 level=INFO source=server.go:395 msg="starting runner" cmd="/home/user/.cache/go-build/3d/3d2f3031738f0c1c4f2b38dcb0c518e8f59167b12b842e502d44b5e0e6834553-d/ollama runner --ollama-engine --model /home/user/.ollama/models/blobs/sha256-90a618fe6ff21b09ca968df959104eb650658b0bef0faef785c18c2795d993e3 --port 34813"
time=2025-10-07T15:08:22.944-06:00 level=INFO source=server.go:670 msg="loading model" "model layers"=37 requested=-1
time=2025-10-07T15:08:22.944-06:00 level=INFO source=server.go:676 msg="system memory" total="122.8 GiB" free="118.6 GiB" free_swap="0 B"
time=2025-10-07T15:08:22.944-06:00 level=INFO source=server.go:684 msg="gpu memory" id=GPU-a7c66ad2-6dbb-0ab8-c1a2-37ba6dba3600 library=CUDA available="118.0 GiB" free="118.5 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-10-07T15:08:22.956-06:00 level=INFO source=runner.go:1299 msg="starting ollama engine"
time=2025-10-07T15:08:22.959-06:00 level=INFO source=runner.go:1335 msg="Server listening on 127.0.0.1:34813"
time=2025-10-07T15:08:22.967-06:00 level=INFO source=runner.go:1172 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:14 GPULayers:37[ID:GPU-a7c66ad2-6dbb-0ab8-c1a2-37ba6dba3600 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-07T15:08:23.030-06:00 level=INFO source=ggml.go:133 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=471 num_key_values=30
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA Thor, compute capability 11.0, VMM: yes, ID: GPU-a7c66ad2-6dbb-0ab8-c1a2-37ba6dba3600
load_backend: loaded CUDA backend from /home/user/src/ollama/build/lib/ollama/libggml-cuda.so
load_backend: loaded CPU backend from /home/user/src/ollama/build/lib/ollama/libggml-cpu.so
time=2025-10-07T15:08:23.130-06:00 level=INFO source=ggml.go:104 msg=system CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.LLAMAFILE=1 CUDA.0.ARCHS=1100 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CPU.1.NEON=1 CPU.1.ARM_FMA=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-10-07T15:08:23.367-06:00 level=INFO source=runner.go:1172 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:14 GPULayers:37[ID:GPU-a7c66ad2-6dbb-0ab8-c1a2-37ba6dba3600 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-07T15:08:24.844-06:00 level=INFO source=runner.go:1172 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:14 GPULayers:37[ID:GPU-a7c66ad2-6dbb-0ab8-c1a2-37ba6dba3600 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-07T15:08:24.844-06:00 level=INFO source=ggml.go:477 msg="offloading 36 repeating layers to GPU"
time=2025-10-07T15:08:24.844-06:00 level=INFO source=ggml.go:483 msg="offloading output layer to GPU"
time=2025-10-07T15:08:24.844-06:00 level=INFO source=ggml.go:488 msg="offloaded 37/37 layers to GPU"
time=2025-10-07T15:08:24.844-06:00 level=INFO source=device.go:206 msg="model weights" device=CUDA0 size="59.8 GiB"
time=2025-10-07T15:08:24.844-06:00 level=INFO source=device.go:211 msg="model weights" device=CPU size="1.1 GiB"
time=2025-10-07T15:08:24.844-06:00 level=INFO source=device.go:217 msg="kv cache" device=CUDA0 size="450.0 MiB"
time=2025-10-07T15:08:24.844-06:00 level=INFO source=device.go:228 msg="compute graph" device=CUDA0 size="129.8 MiB"
time=2025-10-07T15:08:24.844-06:00 level=INFO source=device.go:233 msg="compute graph" device=CPU size="5.6 MiB"
time=2025-10-07T15:08:24.844-06:00 level=INFO source=device.go:238 msg="total memory" size="61.4 GiB"
time=2025-10-07T15:08:24.844-06:00 level=INFO source=sched.go:480 msg="loaded runners" count=1
time=2025-10-07T15:08:24.845-06:00 level=INFO source=server.go:1266 msg="waiting for llama runner to start responding"
time=2025-10-07T15:08:24.845-06:00 level=INFO source=server.go:1300 msg="waiting for server to become available" status="llm server loading model"
time=2025-10-07T15:08:57.243-06:00 level=INFO source=server.go:1304 msg="llama runner started in 34.30 seconds"
[GIN] 2025/10/07 - 15:09:21 | 200 |     110.426µs |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/10/07 - 15:10:44 | 200 |         2m21s |   192.168.1.244 | POST     "/api/chat"
[GIN] 2025/10/07 - 15:11:10 | 200 | 26.534440002s |   192.168.1.244 | POST     "/api/chat"
[GIN] 2025/10/07 - 15:11:36 | 200 |      32.602µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/10/07 - 15:11:36 | 200 |     723.689µs |       127.0.0.1 | GET      "/api/ps"
[GIN] 2025/10/07 - 15:15:43 | 200 |      42.565µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/10/07 - 15:15:43 | 200 |      50.361µs |       127.0.0.1 | GET      "/api/ps"
time=2025-10-07T15:16:15.712-06:00 level=WARN source=sched.go:654 msg="gpu VRAM usage didn't recover within timeout" seconds=5.012599326 runner.size="61.4 GiB" runner.vram="61.4 GiB" runner.parallel=1 runner.pid=1273064 runner.model=/home/user/.ollama/models/blobs/sha256-90a618fe6ff21b09ca968df959104eb650658b0bef0faef785c18c2795d993e3
time=2025-10-07T15:16:15.961-06:00 level=WARN source=sched.go:654 msg="gpu VRAM usage didn't recover within timeout" seconds=5.26236215 runner.size="61.4 GiB" runner.vram="61.4 GiB" runner.parallel=1 runner.pid=1273064 runner.model=/home/user/.ollama/models/blobs/sha256-90a618fe6ff21b09ca968df959104eb650658b0bef0faef785c18c2795d993e3
time=2025-10-07T15:16:16.212-06:00 level=WARN source=sched.go:654 msg="gpu VRAM usage didn't recover within timeout" seconds=5.512861199 runner.size="61.4 GiB" runner.vram="61.4 GiB" runner.parallel=1 runner.pid=1273064 runner.model=/home/user/.ollama/models/blobs/sha256-90a618fe6ff21b09ca968df959104eb650658b0bef0faef785c18c2795d993e3
[GIN] 2025/10/07 - 15:20:04 | 200 |      33.694µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/10/07 - 15:20:04 | 200 |      21.676µs |       127.0.0.1 | GET      "/api/ps"
time=2025-10-07T15:21:37.070-06:00 level=INFO source=server.go:216 msg="enabling flash attention"
time=2025-10-07T15:21:37.070-06:00 level=INFO source=server.go:395 msg="starting runner" cmd="/home/user/.cache/go-build/3d/3d2f3031738f0c1c4f2b38dcb0c518e8f59167b12b842e502d44b5e0e6834553-d/ollama runner --ollama-engine --model /home/user/.ollama/models/blobs/sha256-90a618fe6ff21b09ca968df959104eb650658b0bef0faef785c18c2795d993e3 --port 34147"
time=2025-10-07T15:21:37.071-06:00 level=INFO source=server.go:670 msg="loading model" "model layers"=37 requested=-1
time=2025-10-07T15:21:37.071-06:00 level=INFO source=server.go:676 msg="system memory" total="122.8 GiB" free="118.5 GiB" free_swap="0 B"
time=2025-10-07T15:21:37.071-06:00 level=INFO source=server.go:684 msg="gpu memory" id=GPU-a7c66ad2-6dbb-0ab8-c1a2-37ba6dba3600 library=CUDA available="118.0 GiB" free="118.4 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-10-07T15:21:37.082-06:00 level=INFO source=runner.go:1299 msg="starting ollama engine"
time=2025-10-07T15:21:37.086-06:00 level=INFO source=runner.go:1335 msg="Server listening on 127.0.0.1:34147"
time=2025-10-07T15:21:37.093-06:00 level=INFO source=runner.go:1172 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:14 GPULayers:37[ID:GPU-a7c66ad2-6dbb-0ab8-c1a2-37ba6dba3600 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-07T15:21:37.165-06:00 level=INFO source=ggml.go:133 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=471 num_key_values=30
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA Thor, compute capability 11.0, VMM: yes, ID: GPU-a7c66ad2-6dbb-0ab8-c1a2-37ba6dba3600
load_backend: loaded CUDA backend from /home/user/src/ollama/build/lib/ollama/libggml-cuda.so
load_backend: loaded CPU backend from /home/user/src/ollama/build/lib/ollama/libggml-cpu.so
time=2025-10-07T15:21:37.269-06:00 level=INFO source=ggml.go:104 msg=system CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.LLAMAFILE=1 CUDA.0.ARCHS=1100 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CPU.1.NEON=1 CPU.1.ARM_FMA=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-10-07T15:21:37.503-06:00 level=INFO source=runner.go:1172 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:14 GPULayers:37[ID:GPU-a7c66ad2-6dbb-0ab8-c1a2-37ba6dba3600 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-07T15:21:39.020-06:00 level=INFO source=runner.go:1172 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:14 GPULayers:37[ID:GPU-a7c66ad2-6dbb-0ab8-c1a2-37ba6dba3600 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-07T15:21:39.020-06:00 level=INFO source=device.go:206 msg="model weights" device=CUDA0 size="59.8 GiB"
time=2025-10-07T15:21:39.020-06:00 level=INFO source=device.go:211 msg="model weights" device=CPU size="1.1 GiB"
time=2025-10-07T15:21:39.020-06:00 level=INFO source=device.go:217 msg="kv cache" device=CUDA0 size="450.0 MiB"
time=2025-10-07T15:21:39.020-06:00 level=INFO source=device.go:228 msg="compute graph" device=CUDA0 size="129.8 MiB"
time=2025-10-07T15:21:39.020-06:00 level=INFO source=device.go:233 msg="compute graph" device=CPU size="5.6 MiB"
time=2025-10-07T15:21:39.020-06:00 level=INFO source=ggml.go:477 msg="offloading 36 repeating layers to GPU"
time=2025-10-07T15:21:39.020-06:00 level=INFO source=ggml.go:483 msg="offloading output layer to GPU"
time=2025-10-07T15:21:39.020-06:00 level=INFO source=ggml.go:488 msg="offloaded 37/37 layers to GPU"
time=2025-10-07T15:21:39.020-06:00 level=INFO source=device.go:238 msg="total memory" size="61.4 GiB"
time=2025-10-07T15:21:39.020-06:00 level=INFO source=sched.go:480 msg="loaded runners" count=1
time=2025-10-07T15:21:39.021-06:00 level=INFO source=server.go:1266 msg="waiting for llama runner to start responding"
time=2025-10-07T15:21:39.021-06:00 level=INFO source=server.go:1300 msg="waiting for server to become available" status="llm server loading model"
[GIN] 2025/10/07 - 15:21:59 | 200 |      34.518µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/10/07 - 15:21:59 | 200 |      38.824µs |       127.0.0.1 | GET      "/api/ps"
time=2025-10-07T15:22:12.423-06:00 level=INFO source=server.go:1304 msg="llama runner started in 35.35 seconds"
[GIN] 2025/10/07 - 15:24:06 | 200 |         2m31s |   192.168.1.244 | POST     "/api/chat"
time=2025-10-07T15:24:06.601-06:00 level=WARN source=runner.go:160 msg="truncating input prompt" limit=8192 prompt=8735 keep=4 new=8192
[GIN] 2025/10/07 - 15:26:13 | 200 |          2m7s |   192.168.1.244 | POST     "/api/chat"
[GIN] 2025/10/07 - 15:28:27 | 200 |       41.75µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/10/07 - 15:28:27 | 200 |      51.463µs |       127.0.0.1 | GET      "/api/ps"
time=2025-10-07T15:31:18.735-06:00 level=WARN source=sched.go:654 msg="gpu VRAM usage didn't recover within timeout" seconds=5.012886652 runner.size="61.4 GiB" runner.vram="61.4 GiB" runner.parallel=1 runner.pid=1283466 runner.model=/home/user/.ollama/models/blobs/sha256-90a618fe6ff21b09ca968df959104eb650658b0bef0faef785c18c2795d993e3
time=2025-10-07T15:31:18.985-06:00 level=WARN source=sched.go:654 msg="gpu VRAM usage didn't recover within timeout" seconds=5.26342789 runner.size="61.4 GiB" runner.vram="61.4 GiB" runner.parallel=1 runner.pid=1283466 runner.model=/home/user/.ollama/models/blobs/sha256-90a618fe6ff21b09ca968df959104eb650658b0bef0faef785c18c2795d993e3
time=2025-10-07T15:31:19.234-06:00 level=WARN source=sched.go:654 msg="gpu VRAM usage didn't recover within timeout" seconds=5.512590357 runner.size="61.4 GiB" runner.vram="61.4 GiB" runner.parallel=1 runner.pid=1283466 runner.model=/home/user/.ollama/models/blobs/sha256-90a618fe6ff21b09ca968df959104eb650658b0bef0faef785c18c2795d993e3
time=2025-10-07T15:31:50.363-06:00 level=INFO source=server.go:216 msg="enabling flash attention"
time=2025-10-07T15:31:50.363-06:00 level=INFO source=server.go:395 msg="starting runner" cmd="/home/user/.cache/go-build/3d/3d2f3031738f0c1c4f2b38dcb0c518e8f59167b12b842e502d44b5e0e6834553-d/ollama runner --ollama-engine --model /home/user/.ollama/models/blobs/sha256-90a618fe6ff21b09ca968df959104eb650658b0bef0faef785c18c2795d993e3 --port 42911"
time=2025-10-07T15:31:50.364-06:00 level=INFO source=server.go:670 msg="loading model" "model layers"=37 requested=-1
time=2025-10-07T15:31:50.364-06:00 level=INFO source=server.go:676 msg="system memory" total="122.8 GiB" free="58.2 GiB" free_swap="0 B"
time=2025-10-07T15:31:50.364-06:00 level=INFO source=server.go:684 msg="gpu memory" id=GPU-a7c66ad2-6dbb-0ab8-c1a2-37ba6dba3600 library=CUDA available="57.7 GiB" free="58.1 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-10-07T15:31:50.375-06:00 level=INFO source=runner.go:1299 msg="starting ollama engine"
time=2025-10-07T15:31:50.379-06:00 level=INFO source=runner.go:1335 msg="Server listening on 127.0.0.1:42911"
time=2025-10-07T15:31:50.386-06:00 level=INFO source=runner.go:1172 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:14 GPULayers:37[ID:GPU-a7c66ad2-6dbb-0ab8-c1a2-37ba6dba3600 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-07T15:31:50.464-06:00 level=INFO source=ggml.go:133 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=471 num_key_values=30
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA Thor, compute capability 11.0, VMM: yes, ID: GPU-a7c66ad2-6dbb-0ab8-c1a2-37ba6dba3600
load_backend: loaded CUDA backend from /home/user/src/ollama/build/lib/ollama/libggml-cuda.so
load_backend: loaded CPU backend from /home/user/src/ollama/build/lib/ollama/libggml-cpu.so
time=2025-10-07T15:31:50.566-06:00 level=INFO source=ggml.go:104 msg=system CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.LLAMAFILE=1 CUDA.0.ARCHS=1100 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CPU.1.NEON=1 CPU.1.ARM_FMA=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-10-07T15:31:50.781-06:00 level=INFO source=runner.go:1172 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:14 GPULayers:35[ID:GPU-a7c66ad2-6dbb-0ab8-c1a2-37ba6dba3600 Layers:35(1..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-07T15:31:50.881-06:00 level=INFO source=runner.go:1172 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:14 GPULayers:35[ID:GPU-a7c66ad2-6dbb-0ab8-c1a2-37ba6dba3600 Layers:35(1..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-07T15:31:51.532-06:00 level=INFO source=runner.go:1172 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:14 GPULayers:35[ID:GPU-a7c66ad2-6dbb-0ab8-c1a2-37ba6dba3600 Layers:35(1..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-07T15:31:51.532-06:00 level=INFO source=ggml.go:477 msg="offloading 35 repeating layers to GPU"
time=2025-10-07T15:31:51.532-06:00 level=INFO source=ggml.go:481 msg="offloading output layer to CPU"
time=2025-10-07T15:31:51.532-06:00 level=INFO source=ggml.go:488 msg="offloaded 35/37 layers to GPU"
time=2025-10-07T15:31:51.532-06:00 level=INFO source=device.go:206 msg="model weights" device=CUDA0 size="57.1 GiB"
time=2025-10-07T15:31:51.532-06:00 level=INFO source=device.go:211 msg="model weights" device=CPU size="3.8 GiB"
time=2025-10-07T15:31:51.532-06:00 level=INFO source=device.go:217 msg="kv cache" device=CUDA0 size="441.0 MiB"
time=2025-10-07T15:31:51.532-06:00 level=INFO source=device.go:222 msg="kv cache" device=CPU size="9.0 MiB"
time=2025-10-07T15:31:51.532-06:00 level=INFO source=device.go:228 msg="compute graph" device=CUDA0 size="139.1 MiB"
time=2025-10-07T15:31:51.532-06:00 level=INFO source=device.go:233 msg="compute graph" device=CPU size="109.2 MiB"
time=2025-10-07T15:31:51.532-06:00 level=INFO source=device.go:238 msg="total memory" size="61.5 GiB"
time=2025-10-07T15:31:51.532-06:00 level=INFO source=sched.go:480 msg="loaded runners" count=1
time=2025-10-07T15:31:51.532-06:00 level=INFO source=server.go:1266 msg="waiting for llama runner to start responding"
time=2025-10-07T15:31:51.533-06:00 level=INFO source=server.go:1300 msg="waiting for server to become available" status="llm server loading model"
[GIN] 2025/10/07 - 15:32:10 | 200 |      55.408µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/10/07 - 15:32:10 | 200 |       37.25µs |       127.0.0.1 | GET      "/api/ps"
time=2025-10-07T15:32:25.696-06:00 level=INFO source=server.go:1304 msg="llama runner started in 35.33 seconds"

OS

Linux

GPU

Nvidia

CPU

Other

Ollama version

v0.12.4-rc6

GiteaMirror added the nvidiabug labels 2026-04-22 17:19:00 -05:00

@rick-github commented on GitHub (Oct 7, 2025):

This is page cache (https://en.wikipedia.org/wiki/Page_cache). The kernel keeps recently used data around in a cache in case it needs to be used again. It is not committed to any process and is not in use - the kernel will discard the cache contents if it needs free pages to load a new process.

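A quick way to see this on the Thor is a small Go sketch (an editor's illustration, not ollama code) that reads /proc/meminfo: MemAvailable stays high even when MemFree looks exhausted, because the kernel counts the reclaimable cache as available.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/proc/meminfo")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        // MemFree drops while a model's pages sit in cache, but MemAvailable
        // includes what the kernel could reclaim on demand.
        want := map[string]bool{"MemTotal": true, "MemFree": true, "MemAvailable": true, "Cached": true}
        s := bufio.NewScanner(f)
        for s.Scan() {
            fields := strings.Fields(s.Text()) // e.g. "MemAvailable:  60816500 kB"
            if len(fields) >= 2 && want[strings.TrimSuffix(fields[0], ":")] {
                fmt.Println(fields[0], fields[1], "kB")
            }
        }
    }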

@acochrane commented on GitHub (Oct 7, 2025):

Thanks @rick-github I accept that the way memory allocation is tracked is more complicated than just allocated/not allocated, particularly on these iGPU systems.

So after reading @johnnynunez's posts on the referenced issue and thinking about this being just page cache, I'm wondering if a proper fix would be to discount that page cache from the 'used' memory that is collected for the calculation. I thought this is what @dhiltgen did in his commit (https://github.com/ollama/ollama/commit/e4340667e33e0efa5dee471917d71ad6011e59ba), but it doesn't seem to solve the problem for me :/

I'll spend a little more time with the docs (https://docs.nvidia.com/cuda/cuda-for-tegra-appnote/#memory-types-table) tonight and see if I can crank out something that doesn't break with openwebui.

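A very rough sketch of the "discount the page cache" idea (hypothetical, and not what the referenced commit actually does): add the reclaimable cache back to MemFree when estimating how much unified memory a new runner could get, and compare that against the kernel's own MemAvailable estimate.

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    // meminfo parses /proc/meminfo into a map of field name (without colon) -> kB.
    func meminfo() map[string]uint64 {
        data, err := os.ReadFile("/proc/meminfo")
        if err != nil {
            panic(err)
        }
        m := map[string]uint64{}
        for _, line := range strings.Split(string(data), "\n") {
            f := strings.Fields(line)
            if len(f) < 2 {
                continue
            }
            kb, _ := strconv.ParseUint(f[1], 10, 64)
            m[strings.TrimSuffix(f[0], ":")] = kb
        }
        return m
    }

    func main() {
        mi := meminfo()
        // Naive estimate: free pages plus cache the kernel could reclaim,
        // minus tmpfs pages (counted in Cached but not reclaimable).
        est := mi["MemFree"] + mi["Buffers"] + mi["Cached"] + mi["SReclaimable"] - mi["Shmem"]
        fmt.Printf("naive estimate: %d kB, kernel MemAvailable: %d kB\n", est, mi["MemAvailable"])
    }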

@johnnynunez commented on GitHub (Oct 7, 2025):

> Thanks @rick-github I accept that the way memory allocation is tracked is more complicated than just allocated/not allocated, particularly on these iGPU systems.
>
> So after reading @johnnynunez's posts on the referenced issue and thinking about this being just page cache, I'm wondering if a proper fix would be to discount that page cache from the 'used' memory that is collected for the calculation. I thought this is what @dhiltgen did in his commit, but it doesn't seem to solve the problem for me :/
>
> I'll spend a little more time with the docs tonight and see if I can crank out something that doesn't break with openwebui.

Temporary fix:

sudo sysctl -w vm.drop_caches=3

@acochrane commented on GitHub (Oct 7, 2025):

For sure!
It's just that I have to run that command every time I spend 4 minutes doing something else and the model becomes 'forgotten'.

Who can live like that?


@rick-github commented on GitHub (Oct 7, 2025):

The model can be kept loaded by setting keep alive (https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-keep-a-model-loaded-in-memory-or-make-it-unload-immediately).

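For reference, keep alive can also be set per request, as described in the FAQ linked above. A minimal Go sketch (the model name and default port are simply taken from this report): an /api/generate request with no prompt preloads the model, and keep_alive of -1 keeps it resident until the server shuts down. Setting the OLLAMA_KEEP_ALIVE environment variable achieves the same server-wide.

    package main

    import (
        "bytes"
        "fmt"
        "net/http"
    )

    func main() {
        // Preload the model and keep it loaded indefinitely.
        body := []byte(`{"model": "gpt-oss:120b", "keep_alive": -1}`)
        resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status)
    }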

@acochrane commented on GitHub (Oct 9, 2025):

I tested basically just ignoring the gpu.FreeMemory counter to see if the paged memory of an unloaded runner could be used in starting a new runner, and became slightly familiar with the code.

Turns out, even after the runner has left the active process list, its memory isn't returned to MemFree in /proc/meminfo, which I think everyone knew (e.g. #12528, https://github.com/ollama/ollama/issues/12528#issuecomment-3378904606). But also, trying to start a second runner in 'gpu' memory when the first runner's memory pages haven't been manually dropped results in the second runner failing, and the openwebui-ollama connection hangs without receiving a response; no further chat API calls seem to leave the openwebui app.

I'm guessing the second runner segfaults. But given the page cache explanation by @rick-github, I would expect the OS to handle dropping the cache when the new runner is started.

Is this something that could be addressed by more aggressive garbage collection in Go? Or is there a need for a more fundamental paradigm shift in user-code memory management with the new iGPU capabilities? I haven't grokked everything in the NVIDIA porting considerations (https://docs.nvidia.com/cuda/cuda-for-tegra-appnote/#porting-considerations), but it looks like they're talking about things changing specifically with Thor.


@acochrane commented on GitHub (Oct 9, 2025):

Also, I looked into the value of MemAvailable like @johnnynunez suggested here (https://github.com/ollama/ollama/issues/12283#issuecomment-3336897509), but it doesn't appear to account for the reduced memory use after the runner process ends. So maybe there is a memory-leak-like issue.

I wanted to just count up the available memory like he suggested, but none of the previously used ways of counting memory seem to correctly track the pages of the ended runner as becoming 'available'.


@ghost commented on GitHub (Oct 13, 2025):

I tried running models on Thor with the vLLM docker image, and the memory stays occupied even after the vLLM serve process is completely killed.
It might be a Jetson Thor system issue; I think it should be Nvidia's responsibility to fix it (or provide a solution).


@dhiltgen commented on GitHub (Nov 5, 2025):

We've been overhauling the GPU discovery and VRAM detection logic in preparation for adding Vulkan support. Please give 0.12.10 a try and let us know if you're seeing an improvement in the ability to load multiple models in sequence where prior models need to be unloaded. If the VRAM reporting is still laggy, please share an updated server log so we can take a look.

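One rough way to exercise the sequential load/unload path described here, assuming a server on the default port (the model names below are only placeholders): load a model, force an immediate unload with keep_alive 0, and watch MemAvailable between iterations.

    package main

    import (
        "bytes"
        "fmt"
        "net/http"
        "os"
        "strings"
    )

    func generate(body string) {
        resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader([]byte(body)))
        if err != nil {
            panic(err)
        }
        resp.Body.Close()
    }

    func memAvailable() string {
        data, _ := os.ReadFile("/proc/meminfo")
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasPrefix(line, "MemAvailable:") {
                return line
            }
        }
        return "MemAvailable: ?"
    }

    func main() {
        for _, model := range []string{"gpt-oss:120b", "gpt-oss:20b"} { // placeholder models
            generate(fmt.Sprintf(`{"model": %q}`, model))                  // load (empty prompt)
            generate(fmt.Sprintf(`{"model": %q, "keep_alive": 0}`, model)) // unload immediately
            fmt.Println("after unloading", model, "->", memAvailable())
        }
    }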

@johnnynunez commented on GitHub (Nov 6, 2025):

> We've been overhauling the GPU discovery and VRAM detection logic in preparation for adding Vulkan support. Please give 0.12.10 a try and let us know if you're seeing an improvement in the ability to load multiple models in sequence where prior models need to be unloaded. If the VRAM reporting is still laggy, please share an updated server log so we can take a look.

I can confirm that it is working well right now.


@dhiltgen commented on GitHub (Nov 6, 2025):

I'm going to close this one now. If anyone is still seeing scheduling problems on the Thor after upgrading to 0.12.10 please share updated server logs and I'll reopen.
