[GH-ISSUE #12283] Jetson Thor memory release issue #54678

Closed
opened 2026-04-29 06:52:39 -05:00 by GiteaMirror · 11 comments

Originally created by @mcr-ksh on GitHub (Sep 14, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12283

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

When memory is not cleaned up on the Jetson Thor, it stays occupied and the calculation of available memory is wrong:
model requires more system memory (59.8 GiB) than is available (59.8 GiB)

while there is actually enough memory available; it just needs to be reclaimed.

When using echo 3 > /proc/sys/vm/drop_caches, the system frees up the memory and Ollama is able to run properly.
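
As a quick check that the shortfall is reclaimable page cache rather than memory that is truly in use, the gap between MemFree and MemAvailable in /proc/meminfo can be compared before dropping caches (a minimal sketch; the keys are standard /proc/meminfo fields reported in kiB):

def read_meminfo():
    # Parse /proc/meminfo into a dict of {key: value in kiB}.
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])
    return info

mem = read_meminfo()
print(f"MemFree:         {mem['MemFree'] / 2**20:.1f} GiB")
print(f"MemAvailable:    {mem['MemAvailable'] / 2**20:.1f} GiB")
print(f"Reclaimable gap: {(mem['MemAvailable'] - mem['MemFree']) / 2**20:.1f} GiB")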

OLLAMA_NEW_ENGINE=1
OLLAMA_MAX_LOADED_MODELS=1
OLLAMA_NOHISTORY=1
OLLAMA_NEW_ESTIMATES=1
OLLAMA_LOAD_TIMEOUT=30m
OLLAMA_DEBUG=0
OLLAMA_NEW_ENGINE=1
OLLAMA_KEEP_ALIVE=24h
OLLAMA_FLASH_ATTENTION=1

Relevant log output

Sep 14 13:52:26 jetson-thor ollama[3778]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Sep 14 13:52:26 jetson-thor ollama[3778]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Sep 14 13:52:26 jetson-thor ollama[3778]: ggml_cuda_init: found 1 CUDA devices:
Sep 14 13:52:26 jetson-thor ollama[3778]:   Device 0: NVIDIA Thor, compute capability 11.0, VMM: yes, ID: GPU-a7c66ad2-6dbb-0ab8-c1a2-37ba6dba3600
Sep 14 13:52:26 jetson-thor ollama[3778]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_sbsa/libggml-cuda.so
Sep 14 13:52:26 jetson-thor ollama[3778]: load_backend: loaded CPU backend from /usr/local/lib/ollama/cuda_sbsa/libggml-cpu.so
Sep 14 13:52:26 jetson-thor ollama[3778]: time=2025-09-14T13:52:26.348+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.LLAMAFILE=1 CUDA.0.ARCHS=1100 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CPU.1.NEON=1 CPU.1.ARM_FMA=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
Sep 14 13:52:26 jetson-thor ollama[3778]: time=2025-09-14T13:52:26.461+02:00 level=WARN source=server.go:956 msg="model request too large for system" requested="59.8 GiB" available="59.8 GiB" total="122.8 GiB" free="59.8 GiB" swap="0 B"
Sep 14 13:52:26 jetson-thor ollama[3778]: time=2025-09-14T13:52:26.461+02:00 level=INFO source=runner.go:1173 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:false KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Sep 14 13:52:26 jetson-thor ollama[3778]: time=2025-09-14T13:52:26.462+02:00 level=INFO source=backend.go:310 msg="model weights" device=CUDA0 size="59.8 GiB"
Sep 14 13:52:26 jetson-thor ollama[3778]: time=2025-09-14T13:52:26.462+02:00 level=INFO source=backend.go:315 msg="model weights" device=CPU size="1.1 GiB"
Sep 14 13:52:26 jetson-thor ollama[3778]: time=2025-09-14T13:52:26.462+02:00 level=INFO source=backend.go:321 msg="kv cache" device=CUDA0 size="603.0 MiB"
Sep 14 13:52:26 jetson-thor ollama[3778]: time=2025-09-14T13:52:26.462+02:00 level=INFO source=backend.go:332 msg="compute graph" device=CUDA0 size="126.0 MiB"
Sep 14 13:52:26 jetson-thor ollama[3778]: time=2025-09-14T13:52:26.462+02:00 level=INFO source=backend.go:337 msg="compute graph" device=CPU size="5.6 MiB"
Sep 14 13:52:26 jetson-thor ollama[3778]: time=2025-09-14T13:52:26.462+02:00 level=INFO source=backend.go:342 msg="total memory" size="61.6 GiB"
Sep 14 13:52:26 jetson-thor ollama[3778]: time=2025-09-14T13:52:26.462+02:00 level=INFO source=sched.go:441 msg="Load failed" model=/usr/share/ollama/.ollama/models/blobs/sha256-90a618fe6ff21b09ca968df959104eb650658b0bef0faef785c18c2795d993e3 error="model requires more system memory (59.8 GiB) than is available (59.8 GiB)"
Sep 14 13:52:26 jetson-thor ollama[3778]: [GIN] 2025/09/14 - 13:52:26 | 500 |  6.667039965s |    192.168.1.17 | POST     "/api/chat"
Sep 14 13:53:11 jetson-thor ollama[3778]: time=2025-09-14T13:53:11.748+02:00 level=INFO source=server.go:200 msg="model wants flash attention"
Sep 14 13:53:11 jetson-thor ollama[3778]: time=2025-09-14T13:53:11.748+02:00 level=INFO source=server.go:217 msg="enabling flash attention"
Sep 14 13:53:11 jetson-thor ollama[3778]: time=2025-09-14T13:53:11.749+02:00 level=INFO source=server.go:399 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-90a618fe6ff21b09ca968df959104eb650658b0bef0faef785c18c2795d993e3 --port 40043"
Sep 14 13:53:11 jetson-thor ollama[3778]: time=2025-09-14T13:53:11.751+02:00 level=INFO source=server.go:672 msg="loading model" "model layers"=37 requested=-1
Sep 14 13:53:11 jetson-thor ollama[3778]: time=2025-09-14T13:53:11.765+02:00 level=INFO source=runner.go:1254 msg="starting ollama engine"
Sep 14 13:53:11 jetson-thor ollama[3778]: time=2025-09-14T13:53:11.775+02:00 level=INFO source=runner.go:1289 msg="Server listening on 127.0.0.1:40043"
Sep 14 13:53:11 jetson-thor ollama[3778]: time=2025-09-14T13:53:11.935+02:00 level=INFO source=server.go:678 msg="system memory" total="122.8 GiB" free="120.4 GiB" free_swap="0 B"
Sep 14 13:53:11 jetson-thor ollama[3778]: time=2025-09-14T13:53:11.935+02:00 level=INFO source=server.go:686 msg="gpu memory" id=GPU-a7c66ad2-6dbb-0ab8-c1a2-37ba6dba3600 available="120.6 GiB" free="121.0 GiB" minimum="457.0 MiB" overhead="0 B"
Sep 14 13:53:11 jetson-thor ollama[3778]: time=2025-09-14T13:53:11.936+02:00 level=INFO source=runner.go:1173 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:12472 KvCacheType: NumThreads:14 GPULayers:37[ID:GPU-a7c66ad2-6dbb-0ab8-c1a2-37ba6dba3600 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Sep 14 13:53:12 jetson-thor ollama[3778]: time=2025-09-14T13:53:12.026+02:00 level=INFO source=ggml.go:131 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=471 num_key_values=30
Sep 14 13:53:12 jetson-thor ollama[3778]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Sep 14 13:53:12 jetson-thor ollama[3778]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Sep 14 13:53:12 jetson-thor ollama[3778]: ggml_cuda_init: found 1 CUDA devices:
Sep 14 13:53:12 jetson-thor ollama[3778]:   Device 0: NVIDIA Thor, compute capability 11.0, VMM: yes, ID: GPU-a7c66ad2-6dbb-0ab8-c1a2-37ba6dba3600
Sep 14 13:53:12 jetson-thor ollama[3778]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_sbsa/libggml-cuda.so
Sep 14 13:53:12 jetson-thor ollama[3778]: load_backend: loaded CPU backend from /usr/local/lib/ollama/cuda_sbsa/libggml-cpu.so
Sep 14 13:53:12 jetson-thor ollama[3778]: time=2025-09-14T13:53:12.175+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.LLAMAFILE=1 CUDA.0.ARCHS=1100 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CPU.1.NEON=1 CPU.1.ARM_FMA=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
Sep 14 13:53:12 jetson-thor ollama[3778]: time=2025-09-14T13:53:12.303+02:00 level=INFO source=runner.go:1173 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:12472 KvCacheType: NumThreads:14 GPULayers:37[ID:GPU-a7c66ad2-6dbb-0ab8-c1a2-37ba6dba3600 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Sep 14 13:53:13 jetson-thor ollama[3778]: time=2025-09-14T13:53:13.861+02:00 level=INFO source=runner.go:1173 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:12472 KvCacheType: NumThreads:14 GPULayers:37[ID:GPU-a7c66ad2-6dbb-0ab8-c1a2-37ba6dba3600 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Sep 14 13:53:13 jetson-thor ollama[3778]: time=2025-09-14T13:53:13.861+02:00 level=INFO source=ggml.go:487 msg="offloading 36 repeating layers to GPU"
Sep 14 13:53:13 jetson-thor ollama[3778]: time=2025-09-14T13:53:13.861+02:00 level=INFO source=ggml.go:493 msg="offloading output layer to GPU"
Sep 14 13:53:13 jetson-thor ollama[3778]: time=2025-09-14T13:53:13.861+02:00 level=INFO source=ggml.go:498 msg="offloaded 37/37 layers to GPU"
Sep 14 13:53:13 jetson-thor ollama[3778]: time=2025-09-14T13:53:13.862+02:00 level=INFO source=backend.go:310 msg="model weights" device=CUDA0 size="59.8 GiB"
Sep 14 13:53:13 jetson-thor ollama[3778]: time=2025-09-14T13:53:13.862+02:00 level=INFO source=backend.go:315 msg="model weights" device=CPU size="1.1 GiB"
Sep 14 13:53:13 jetson-thor ollama[3778]: time=2025-09-14T13:53:13.862+02:00 level=INFO source=backend.go:321 msg="kv cache" device=CUDA0 size="603.0 MiB"
Sep 14 13:53:13 jetson-thor ollama[3778]: time=2025-09-14T13:53:13.862+02:00 level=INFO source=backend.go:332 msg="compute graph" device=CUDA0 size="126.0 MiB"
Sep 14 13:53:13 jetson-thor ollama[3778]: time=2025-09-14T13:53:13.862+02:00 level=INFO source=backend.go:337 msg="compute graph" device=CPU size="5.6 MiB"
Sep 14 13:53:13 jetson-thor ollama[3778]: time=2025-09-14T13:53:13.863+02:00 level=INFO source=backend.go:342 msg="total memory" size="61.6 GiB"
Sep 14 13:53:13 jetson-thor ollama[3778]: time=2025-09-14T13:53:13.863+02:00 level=INFO source=sched.go:473 msg="loaded runners" count=1
Sep 14 13:53:13 jetson-thor ollama[3778]: time=2025-09-14T13:53:13.863+02:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
Sep 14 13:53:13 jetson-thor ollama[3778]: time=2025-09-14T13:53:13.863+02:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server loading model"
Sep 14 13:53:46 jetson-thor ollama[3778]: time=2025-09-14T13:53:46.513+02:00 level=INFO source=server.go:1289 msg="llama runner started in 34.76 seconds"
Sep 14 13:54:12 jetson-thor ollama[3778]: [GIN] 2025/09/14 - 13:54:12 | 200 |          1m1s |    192.168.1.17 | POST     "/api/chat"
Sep 14 13:54:27 jetson-thor ollama[3778]: [GIN] 2025/09/14 - 13:54:27 | 200 | 15.230073248s |    192.168.1.17 | POST     "/api/chat"

OS

Linux

GPU

Nvidia

CPU

Other

Ollama version

master - direct build from src tag: v0.11.11-rc2

GiteaMirror added the bug, nvidia labels 2026-04-29 06:52:39 -05:00

@johnnynunez commented on GitHub (Sep 15, 2025):

> (quoting the original issue report above)

Hello,
system memory on Thor and Spark is handled differently.

Example: https://github.com/sgl-project/sglang/pull/9911


@pdevine commented on GitHub (Sep 15, 2025):

cc @dhiltgen


@OriNachum commented on GitHub (Sep 25, 2025):

I can confirm I experience this failure over time and with use as well.

It happens for me on both the Jetson AGX Orin and the Jetson Thor with the gpt-oss:20b model.


@dhiltgen commented on GitHub (Sep 25, 2025):

There appears to be a CUDA bug on these iGPU systems where cuMemGetInfo reports the Linux kernel's "free" memory stat without taking buff/cache into consideration, so once the kernel caches warm up, it looks like very little memory is available for the GPU.
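
For reference, the mismatch is easy to see on an affected board by comparing what CUDA reports with the kernel's MemAvailable (a minimal sketch assuming torch and psutil are installed; torch.cuda.mem_get_info() goes through cuMemGetInfo):

import psutil
import torch

# On an iGPU such as Thor/Orin, cuMemGetInfo tracks the kernel's MemFree, so the
# reported free memory shrinks as the page cache warms up, while MemAvailable
# (psutil's "available") stays high because that cache is reclaimable.
cuda_free, cuda_total = torch.cuda.mem_get_info()
sys_available = psutil.virtual_memory().available

print(f"cuMemGetInfo free:   {cuda_free / 2**30:.1f} GiB")
print(f"system MemAvailable: {sys_available / 2**30:.1f} GiB")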


@dhiltgen commented on GitHub (Sep 25, 2025):

Until NVIDIA fixes the CUDA bug or we find a workaround, you can set num_gpu to a larger number to bypass the free VRAM checking logic and force more layers to be loaded. (Note: if you push too hard, you may hit an OOM crash.)

e.g.,

% ollama run gpt-oss:20b
>>> /set parameter num_gpu 99
Set parameter 'num_gpu' to '99'
>>> /set verbose
Set 'verbose' mode.
>>>

@OriNachum commented on GitHub (Sep 25, 2025):

@dhiltgen that applies during an interactive terminal session.

I use it as a server called from n8n, and there are many sessions.

Would that persist across API calls?


@pdevine commented on GitHub (Sep 25, 2025):

@OriNachum You can create a Modelfile which looks like:

FROM gpt-oss:20b
PARAMETER num_gpu 99

and then run ollama create -f Modelfile gpt-oss:20b-offload. To run it: ollama run gpt-oss:20b-offload.

I went ahead and pushed a model with the setting which you can run: ollama run pdevine/gpt-oss:20b-offload. If you've already pulled the gpt-oss:20b model it'll reuse the weights, so it should be fast to pull.
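
For callers that hit the HTTP API directly (such as the n8n setup mentioned above), the same parameter can also be passed per request via the options field instead of baking it into a Modelfile (a minimal sketch assuming the standard /api/generate request shape and the default port 11434):

import json
import urllib.request

# Force all layers onto the GPU for one request by passing num_gpu in options,
# the same effect as the Modelfile PARAMETER above.
payload = {
    "model": "gpt-oss:20b",
    "prompt": "Hello",
    "stream": False,
    "options": {"num_gpu": 99},
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])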


@OriNachum commented on GitHub (Sep 26, 2025):

I can confirm the download was instant; I'll check and update if the issue persists.


@johnnynunez commented on GitHub (Sep 26, 2025):

@dhiltgen

Thanks for the investigation and workaround. Just to clarify: the problem shows up on unified-memory devices (UMA/iGPU) because cuMemGetInfo is returning the kernel’s MemFree value, which doesn’t discount cached pages. That makes it look like there’s very little memory left once the page cache fills, even though MemAvailable is still high.

On discrete GPUs this isn’t an issue, but on UMA systems frameworks that rely on cuMemGetInfo can fail unnecessarily.

A couple of mitigations we’ve seen work:
• Bypassing the VRAM check (as you suggested with num_gpu override).
• Or, on UMA devices, querying MemAvailable from /proc/meminfo instead of cuMemGetInfo. That’s the workaround other runtimes (like vLLM/SGLang) have adopted until CUDA changes the behavior.

in sglang:

try:
    prop = torch.cuda.get_device_properties(gpu_id)
    if prop.is_integrated:
        free_gpu_memory = psutil.virtual_memory().available
    else:
        free_gpu_memory, _ = torch.cuda.mem_get_info(gpu_id)
except Exception as e:
    print(f"Error querying device properties: {e}. Falling back to system memory.")
    free_gpu_memory = psutil.virtual_memory().available

in vllm:

def _get_device_sm():
    if torch.cuda.is_available():
        major, minor = torch.cuda.get_device_capability()
        return major * 10 + minor
    return 0


@dataclass
class MemorySnapshot:
    """Memory snapshot."""

    def measure(self):
        self.torch_peak = torch.cuda.memory_stats().get(
            "allocated_bytes.all.peak", 0)

        self.free_memory, self.total_memory = torch.cuda.mem_get_info()
        shared_sysmem_device_mem_sms = (110, 121)  # Thor, Spark
        if _get_device_sm() in shared_sysmem_device_mem_sms:
            # On these devices, which use sysmem as device mem, torch.cuda.mem_get_info()
            # only reports "free" memory, which can be lower than what is actually
            # available due to not including cache memory. So we use the system available
            # memory metric instead.
            self.free_memory = psutil.virtual_memory().available
        self.cuda_memory = self.total_memory - self.free_memory

@johnnynunez commented on GitHub (Oct 2, 2025):

@dhiltgen
On Orin, Thor, and Spark platforms, where both the CPU and GPU rely on system memory, the cudaMemGetInfo and cuMemGetInfo functions report the amount of free system memory rather than what's actually available. There's also a comprehensive reference page that explains how you can compute the proper value yourself:
https://docs.nvidia.com/cuda/cuda-for-tegra-appnote/#estimating-total-allocatable-device-memory-on-an-integrated-gpu-device
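
The practical upshot is the same as the vLLM/SGLang snippets above: on these boards the allocatable device memory has to be derived from system memory statistics rather than from cuMemGetInfo. A rough sketch using MemAvailable as the approximation (the app note linked above describes the exact accounting):

def estimate_allocatable_bytes():
    # MemAvailable is a conservative stand-in for what the integrated GPU can
    # actually allocate once reclaimable page cache is taken into account.
    with open("/proc/meminfo") as f:
        meminfo = dict(line.split(":", 1) for line in f)
    available_kib = int(meminfo["MemAvailable"].strip().split()[0])
    return available_kib * 1024

print(f"~{estimate_allocatable_bytes() / 2**30:.1f} GiB allocatable")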


@dhiltgen commented on GitHub (Oct 4, 2025):

Fixed in v0.12.4-rc6: https://github.com/ollama/ollama/releases/tag/v0.12.4-rc6


Reference: github-starred/ollama#54678