[GH-ISSUE #11354] Mistral-small3.1:latest crashes with OOM #33249

Closed
opened 2026-04-22 15:45:39 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @sammyf on GitHub (Jul 10, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11354

What is the issue?

Mistral-small3.1, freshly pulled, doesn't find any VRAM on CUDA0 (RTX 3090, 24 GB) and then crashes with an out-of-memory error on CUDA1 (RTX A1000, 8 GB) while trying to allocate 9 GB.

Other models don't seem to suffer from this (qwen2.5-coder:32b with a 32k context loads just fine):

NAME                         ID              SIZE     PROCESSOR         UNTIL
qwen2.5-coder-abl:32b-32k    47e43500b8ef    32 GB    3%/97% CPU/GPU    Forever
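
A quick way to cross-check the "available" figures in the log below against the driver's own view of each card (a minimal sketch using standard nvidia-smi query flags; nothing here is specific to this setup):

```bash
# Compare per-GPU free memory as the driver sees it with the
# "available" values in the ollama startup log.
nvidia-smi --query-gpu=index,name,memory.total,memory.free --format=csv

# Watch allocations live while the model loads (Ctrl-C to stop).
watch -n 1 nvidia-smi --query-gpu=index,memory.used --format=csv
```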

Relevant log output

░░ Subject: A start job for unit ollama.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit ollama.service has finished successfully.
░░
░░ The job identifier is 193.
Jul 10 08:38:14 neo-bandito ollama[3198]: time=2025-07-10T08:38:14.297+02:00 level=INFO source=routes.go:1235 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE:q4_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:4 OLLAMA_MAX_QUEUE:10 OLLAMA_MODELS:/media/GLIMSPANKY/ollama/models/ OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Jul 10 08:38:16 neo-bandito ollama[3198]: time=2025-07-10T08:38:16.471+02:00 level=INFO source=images.go:476 msg="total blobs: 125"
Jul 10 08:38:16 neo-bandito ollama[3198]: time=2025-07-10T08:38:16.472+02:00 level=INFO source=images.go:483 msg="total unused blobs removed: 0"
Jul 10 08:38:16 neo-bandito ollama[3198]: time=2025-07-10T08:38:16.473+02:00 level=INFO source=routes.go:1288 msg="Listening on [::]:11434 (version 0.9.3)"
Jul 10 08:38:16 neo-bandito ollama[3198]: time=2025-07-10T08:38:16.475+02:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Jul 10 08:38:16 neo-bandito ollama[3198]: time=2025-07-10T08:38:16.920+02:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-a9be7ece-3ea9-9a38-3a55-5aad9943f497 library=cuda variant=v12 compute=8.6 driver=12.9 name="NVIDIA GeForce RTX 3090" total="23.6 GiB" available="23.2 GiB"
Jul 10 08:38:16 neo-bandito ollama[3198]: time=2025-07-10T08:38:16.920+02:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-7db6777e-b194-eee2-c132-cea3c32e6d0a library=cuda variant=v12 compute=8.6 driver=12.9 name="NVIDIA RTX A1000" total="7.7 GiB" available="7.6 GiB"

OS

Arch Linux, up to date. 64 GB RAM

GPU

RTX 3090 (24GB)
RTX A1000 (8GB)

CPU

Intel Core i7-10700K

Ollama version

0.9.3

GiteaMirror added the bug label 2026-04-22 15:45:39 -05:00
Author
Owner

@rick-github commented on GitHub (Jul 10, 2025):

Full log would make it easier to debug.
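
For reference, something like the following captures the complete unit log in one pass (a sketch assuming the stock systemd service name ollama.service used in the logs above; adjust if your unit name differs):

```bash
# Dump everything the ollama unit logged this boot, unpaged, so the
# scheduler's offload estimates and the runner's allocation failures
# end up in a single file.
journalctl -u ollama.service -b --no-pager > ollama-full.log
```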

Author
Owner

@sammyf commented on GitHub (Jul 10, 2025):

Sorry ... I used the wrong params for journalctl. Here is the complete log (without the recurring ollama ps calls):

mistral-small-bug.txt (https://github.com/user-attachments/files/21156693/mistral-small-bug.txt)

░░ Subject: A stop job for unit ollama.service has begun execution
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░ 
░░ A stop job for unit ollama.service has begun execution.
░░ 
░░ The job identifier is 7541.
Jul 10 09:52:17 neo-bandito systemd[1]: ollama.service: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░ 
░░ The unit ollama.service has successfully entered the 'dead' state.
Jul 10 09:52:17 neo-bandito systemd[1]: Stopped Ollama Service.
░░ Subject: A stop job for unit ollama.service has finished
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░ 
░░ A stop job for unit ollama.service has finished.
░░ 
░░ The job identifier is 7541 and the job result is done.
Jul 10 09:52:17 neo-bandito systemd[1]: ollama.service: Consumed 1.512s CPU time, 36.6M memory peak.
░░ Subject: Resources consumed by unit runtime
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░ 
░░ The unit ollama.service completed and consumed the indicated resources.
Jul 10 09:52:17 neo-bandito systemd[1]: Started Ollama Service.
░░ Subject: A start job for unit ollama.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░ 
░░ A start job for unit ollama.service has finished successfully.
░░ 
░░ The job identifier is 7541.
Jul 10 09:52:17 neo-bandito ollama[110918]: time=2025-07-10T09:52:17.603+02:00 level=INFO source=routes.go:1235 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE:q4_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:4 OLLAMA_MAX_QUEUE:10 OLLAMA_MODELS:/media/GLIMSPANKY/ollama/models/ OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Jul 10 09:52:17 neo-bandito ollama[110918]: time=2025-07-10T09:52:17.607+02:00 level=INFO source=images.go:476 msg="total blobs: 125"
Jul 10 09:52:17 neo-bandito ollama[110918]: time=2025-07-10T09:52:17.608+02:00 level=INFO source=images.go:483 msg="total unused blobs removed: 0"
Jul 10 09:52:17 neo-bandito ollama[110918]: time=2025-07-10T09:52:17.608+02:00 level=INFO source=routes.go:1288 msg="Listening on [::]:11434 (version 0.9.3)"
Jul 10 09:52:17 neo-bandito ollama[110918]: time=2025-07-10T09:52:17.608+02:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Jul 10 09:52:18 neo-bandito ollama[110918]: time=2025-07-10T09:52:18.137+02:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-a9be7ece-3ea9-9a38-3a55-5aad9943f497 library=cuda variant=v12 compute=8.6 driver=12.9 name="NVIDIA GeForce RTX 3090" total="23.6 GiB" available="22.2 GiB"
Jul 10 09:52:18 neo-bandito ollama[110918]: time=2025-07-10T09:52:18.137+02:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-7db6777e-b194-eee2-c132-cea3c32e6d0a library=cuda variant=v12 compute=8.6 driver=12.9 name="NVIDIA RTX A1000" total="7.7 GiB" available="7.6 GiB"

{recurring ollama ps calls omitted}

Jul 10 09:52:30 neo-bandito ollama[110918]: time=2025-07-10T09:52:30.528+02:00 level=INFO source=sched.go:804 msg="new model will fit in available VRAM, loading" model=/media/GLIMSPANKY/ollama/models/blobs/sha256-1fa8532d986d729117d6b5ac2c884824d0717c9468094554fd1d36412c740cfc library=cuda parallel=1 required="24.5 GiB"
Jul 10 09:52:30 neo-bandito ollama[110918]: time=2025-07-10T09:52:30.792+02:00 level=INFO source=server.go:135 msg="system memory" total="62.6 GiB" free="54.7 GiB" free_swap="123.0 GiB"
Jul 10 09:52:31 neo-bandito ollama[110918]: time=2025-07-10T09:52:31.069+02:00 level=INFO source=server.go:175 msg=offload library=cuda layers.requested=-1 layers.model=41 layers.offload=41 layers.split=21,20 memory.available="[22.2 GiB 7.6 GiB]" memory.gpu_overhead="0 B" memory.required.full="24.5 GiB" memory.required.partial="24.5 GiB" memory.required.kv="160.0 MiB" memory.required.allocations="[17.2 GiB 7.3 GiB]" memory.weights.total="13.1 GiB" memory.weights.repeating="12.7 GiB" memory.weights.nonrepeating="360.0 MiB" memory.graph.full="106.7 MiB" memory.graph.partial="106.7 MiB" projector.weights="769.3 MiB" projector.graph="8.8 GiB"
Jul 10 09:52:31 neo-bandito ollama[110918]: time=2025-07-10T09:52:31.070+02:00 level=INFO source=server.go:218 msg="enabling flash attention"
Jul 10 09:52:31 neo-bandito ollama[110918]: time=2025-07-10T09:52:31.097+02:00 level=INFO source=server.go:438 msg="starting llama server" cmd="/usr/local/bin/ollama runner --ollama-engine --model /media/GLIMSPANKY/ollama/models/blobs/sha256-1fa8532d986d729117d6b5ac2c884824d0717c9468094554fd1d36412c740cfc --ctx-size 4096 --batch-size 512 --n-gpu-layers 41 --threads 8 --flash-attn --kv-cache-type q4_0 --parallel 1 --tensor-split 21,20 --port 43803"
Jul 10 09:52:31 neo-bandito ollama[110918]: time=2025-07-10T09:52:31.098+02:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
Jul 10 09:52:31 neo-bandito ollama[110918]: time=2025-07-10T09:52:31.098+02:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
Jul 10 09:52:31 neo-bandito ollama[110918]: time=2025-07-10T09:52:31.098+02:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server not responding"
Jul 10 09:52:31 neo-bandito ollama[110918]: time=2025-07-10T09:52:31.104+02:00 level=INFO source=runner.go:925 msg="starting ollama engine"
Jul 10 09:52:31 neo-bandito ollama[110918]: time=2025-07-10T09:52:31.105+02:00 level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:43803"
Jul 10 09:52:31 neo-bandito ollama[110918]: time=2025-07-10T09:52:31.132+02:00 level=INFO source=ggml.go:92 msg="" architecture=mistral3 file_type=Q4_K_M name="" description="" num_tensors=585 num_key_values=43
Jul 10 09:52:31 neo-bandito ollama[110918]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Jul 10 09:52:31 neo-bandito ollama[110918]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Jul 10 09:52:31 neo-bandito ollama[110918]: ggml_cuda_init: found 2 CUDA devices:
Jul 10 09:52:31 neo-bandito ollama[110918]:   Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Jul 10 09:52:31 neo-bandito ollama[110918]:   Device 1: NVIDIA RTX A1000, compute capability 8.6, VMM: yes
Jul 10 09:52:31 neo-bandito ollama[110918]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/libggml-cuda.so
Jul 10 09:52:31 neo-bandito ollama[110918]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so
Jul 10 09:52:31 neo-bandito ollama[110918]: time=2025-07-10T09:52:31.257+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Jul 10 09:52:31 neo-bandito ollama[110918]: time=2025-07-10T09:52:31.343+02:00 level=INFO source=ggml.go:359 msg="model weights" buffer=CUDA1 size="7.2 GiB"
Jul 10 09:52:31 neo-bandito ollama[110918]: time=2025-07-10T09:52:31.343+02:00 level=INFO source=ggml.go:359 msg="model weights" buffer=CPU size="525.0 MiB"
Jul 10 09:52:31 neo-bandito ollama[110918]: time=2025-07-10T09:52:31.343+02:00 level=INFO source=ggml.go:359 msg="model weights" buffer=CUDA0 size="6.7 GiB"
Jul 10 09:52:31 neo-bandito ollama[110918]: time=2025-07-10T09:52:31.350+02:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
Jul 10 09:52:31 neo-bandito ollama[110918]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9337.48 MiB on device 1: cudaMalloc failed: out of memory
Jul 10 09:52:31 neo-bandito ollama[110918]: ggml_gallocr_reserve_n: failed to allocate CUDA1 buffer of size 9791055360
Jul 10 09:52:31 neo-bandito ollama[110918]: time=2025-07-10T09:52:31.566+02:00 level=INFO source=ggml.go:648 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="0 B"
Jul 10 09:52:31 neo-bandito ollama[110918]: time=2025-07-10T09:52:31.566+02:00 level=INFO source=ggml.go:648 msg="compute graph" backend=CUDA1 buffer_type=CUDA1 size="9.1 GiB"
Jul 10 09:52:31 neo-bandito ollama[110918]: time=2025-07-10T09:52:31.566+02:00 level=INFO source=ggml.go:648 msg="compute graph" backend=CPU buffer_type=CPU size="0 B"
Jul 10 09:52:31 neo-bandito ollama[110918]: panic: insufficient memory - required allocations: {InputWeights:550502400A CPU:{Name:CPU UUID: Weights:[0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U] Cache:[0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U] Graph:0A} GPUs:[{Name:CUDA0 UUID:GPU-a9be7ece-3ea9-9a38-3a55-5aad9943f497 Weights:[363438080A 363438080A 363438080A 363438080A 363438080A 320184320A 320184320A 363438080A 320184320A 320184320A 363438080A 320184320A 320184320A 363438080A 320184320A 320184320A 363438080A 320184320A 320184320A 363438080A 320184320A 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U] Cache:[0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U] Graph:0A} {Name:CUDA1 UUID:GPU-7db6777e-b194-eee2-c132-cea3c32e6d0a Weights:[0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 320184320A 363438080A 320184320A 320184320A 363438080A 320184320A 320184320A 363438080A 320184320A 320184320A 363438080A 320184320A 320184320A 363438080A 363438080A 363438080A 363438080A 363438080A 363438080A 1255526400A] Cache:[0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U] Graph:9791055360F}]}
Jul 10 09:52:31 neo-bandito ollama[110918]: goroutine 24 [running]:
Jul 10 09:52:31 neo-bandito ollama[110918]: github.com/ollama/ollama/ml/backend/ggml.(*Context).Reserve(0xc0011281c0)
Jul 10 09:52:31 neo-bandito ollama[110918]:         github.com/ollama/ollama/ml/backend/ggml/ggml.go:653 +0x756
Jul 10 09:52:31 neo-bandito ollama[110918]: github.com/ollama/ollama/runner/ollamarunner.multimodalStore.getTensor(0xc0005099e8?, {0x556d4d266790, 0xc0004da3f0}, {0x556d4d26aa70, 0xc001129d40}, {0x556d4d275940, 0xc00012d8d8}, 0x1)
Jul 10 09:52:31 neo-bandito ollama[110918]:         github.com/ollama/ollama/runner/ollamarunner/multimodal.go:98 +0x2a4
Jul 10 09:52:31 neo-bandito ollama[110918]: github.com/ollama/ollama/runner/ollamarunner.multimodalStore.getMultimodal(0xc000509cc8, {0x556d4d266790, 0xc0004da3f0}, {0x556d4d26aa70, 0xc001129d40}, {0xc00111e080, 0x1, 0x556d4d0a4f00?}, 0x1)
Jul 10 09:52:31 neo-bandito ollama[110918]:         github.com/ollama/ollama/runner/ollamarunner/multimodal.go:56 +0xe5
Jul 10 09:52:31 neo-bandito ollama[110918]: github.com/ollama/ollama/runner/ollamarunner.(*Server).reserveWorstCaseGraph(0xc000621d40)
Jul 10 09:52:31 neo-bandito ollama[110918]:         github.com/ollama/ollama/runner/ollamarunner/runner.go:796 +0x70e
Jul 10 09:52:31 neo-bandito ollama[110918]: github.com/ollama/ollama/runner/ollamarunner.(*Server).initModel(0xc000621d40, {0x7fff3800aa3d?, 0x0?}, {0x8, 0x0, 0x29, {0xc000681748, 0x2, 0x2}, 0x1}, ...)
Jul 10 09:52:31 neo-bandito ollama[110918]:         github.com/ollama/ollama/runner/ollamarunner/runner.go:865 +0x270
Jul 10 09:52:31 neo-bandito ollama[110918]: github.com/ollama/ollama/runner/ollamarunner.(*Server).load(0xc000621d40, {0x556d4d262880, 0xc0000fd8b0}, {0x7fff3800aa3d?, 0x0?}, {0x8, 0x0, 0x29, {0xc000681748, 0x2, ...}, ...}, ...)
Jul 10 09:52:31 neo-bandito ollama[110918]:         github.com/ollama/ollama/runner/ollamarunner/runner.go:878 +0xb8
Jul 10 09:52:31 neo-bandito ollama[110918]: created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1
Jul 10 09:52:31 neo-bandito ollama[110918]:         github.com/ollama/ollama/runner/ollamarunner/runner.go:959 +0xa11
Jul 10 09:52:31 neo-bandito ollama[110918]: time=2025-07-10T09:52:31.605+02:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server not responding"
Jul 10 09:52:31 neo-bandito ollama[110918]: time=2025-07-10T09:52:31.663+02:00 level=ERROR source=server.go:464 msg="llama runner terminated" error="exit status 2"
Jul 10 09:52:31 neo-bandito ollama[110918]: time=2025-07-10T09:52:31.856+02:00 level=ERROR source=sched.go:489 msg="error loading llama server" error="llama runner process has terminated: cudaMalloc failed: out of memory\nggml_gallocr_reserve_n: failed to allocate CUDA1 buffer of size 9791055360"
Jul 10 09:52:31 neo-bandito ollama[110918]: [GIN] 2025/07/10 - 09:52:31 | 500 |  2.499247167s |       127.0.0.1 | POST     "/api/generate"
Jul 10 09:52:32 neo-bandito ollama[110918]: [GIN] 2025/07/10 - 09:52:32 | 200 |       44.32µs |   192.168.0.100 | GET      "/api/ps"
Jul 10 09:52:32 neo-bandito ollama[110918]: [GIN] 2025/07/10 - 09:52:32 | 200 |      10.009µs |   192.168.0.100 | GET      "/api/ps"
Jul 10 09:52:34 neo-bandito ollama[110918]: [GIN] 2025/07/10 - 09:52:34 | 200 |      17.027µs |   192.168.0.100 | GET      "/api/ps"
Jul 10 09:52:34 neo-bandito ollama[110918]: [GIN] 2025/07/10 - 09:52:34 | 200 |      13.177µs |   192.168.0.100 | GET      "/api/ps"
Jul 10 09:52:36 neo-bandito ollama[110918]: [GIN] 2025/07/10 - 09:52:36 | 200 |      17.258µs |   192.168.0.100 | GET      "/api/ps"
Jul 10 09:52:36 neo-bandito ollama[110918]: [GIN] 2025/07/10 - 09:52:36 | 200 |      15.162µs |   192.168.0.100 | GET      "/api/ps"
Jul 10 09:52:37 neo-bandito ollama[110918]: time=2025-07-10T09:52:37.051+02:00 level=WARN source=sched.go:687 msg="gpu VRAM usage didn't recover within timeout" seconds=5.194809959 runner.size="24.5 GiB" runner.vram="24.5 GiB" runner.parallel=1 runner.pid=111331 runner.model=/media/GLIMSPANKY/ollama/models/blobs/sha256-1fa8532d986d729117d6b5ac2c884824d0717c9468094554fd1d36412c740cfc
Jul 10 09:52:37 neo-bandito ollama[110918]: time=2025-07-10T09:52:37.314+02:00 level=WARN source=sched.go:687 msg="gpu VRAM usage didn't recover within timeout" seconds=5.45856796 runner.size="24.5 GiB" runner.vram="24.5 GiB" runner.parallel=1 runner.pid=111331 runner.model=/media/GLIMSPANKY/ollama/models/blobs/sha256-1fa8532d986d729117d6b5ac2c884824d0717c9468094554fd1d36412c740cfc
Jul 10 09:52:37 neo-bandito ollama[110918]: time=2025-07-10T09:52:37.575+02:00 level=WARN source=sched.go:687 msg="gpu VRAM usage didn't recover within timeout" seconds=5.719491803 runner.size="24.5 GiB" runner.vram="24.5 GiB" runner.parallel=1 runner.pid=111331 runner.model=/media/GLIMSPANKY/ollama/models/blobs/sha256-1fa8532d986d729117d6b5ac2c884824d0717c9468094554fd1d36412c740cfc

Author
Owner

@rick-github commented on GitHub (Jul 10, 2025):

Jul 10 09:52:31 neo-bandito ollama[110918]: time=2025-07-10T09:52:31.069+02:00 level=INFO source=server.go:175 msg=offload
 library=cuda layers.requested=-1 layers.model=41 layers.offload=41 layers.split=21,20 memory.available="[22.2 GiB 7.6 GiB]"
 memory.gpu_overhead="0 B" memory.required.full="24.5 GiB" memory.required.partial="24.5 GiB" memory.required.kv="160.0 MiB"
 memory.required.allocations="[17.2 GiB 7.3 GiB]" memory.weights.total="13.1 GiB" memory.weights.repeating="12.7 GiB"
 memory.weights.nonrepeating="360.0 MiB" memory.graph.full="106.7 MiB" memory.graph.partial="106.7 MiB"
 projector.weights="769.3 MiB" projector.graph="8.8 GiB"
Jul 10 09:52:31 neo-bandito ollama[110918]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9337.48 MiB on device 1: cudaMalloc failed: out of memory

It looks like the server and the runner have different ideas about how much VRAM is required to host the model. The server estimated 7.3 GiB of allocations on device 1 (which has 7.6 GiB available), yet the runner tries to reserve a 9.1 GiB compute graph there. The memory estimation logic is being reworked in #11090, so hopefully this will be fixed in a release or two. In the meantime, see https://github.com/ollama/ollama/issues/8597#issuecomment-2614533288 for ways to prevent an OOM.
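
Until the rework lands, two common mitigations are to hide the small card from the server or to cap how many layers a request offloads. A hedged sketch of both (the GPU UUID is taken from the log above; num_gpu=30 is purely illustrative and would need tuning, not a value recommended in this thread):

```bash
# Option 1: pin the serve process to the 3090 only, so nothing is
# placed on the 8 GB A1000. Add the Environment line via a drop-in:
sudo systemctl edit ollama.service
#   [Service]
#   Environment="CUDA_VISIBLE_DEVICES=GPU-a9be7ece-3ea9-9a38-3a55-5aad9943f497"
sudo systemctl restart ollama

# Option 2: cap the number of offloaded layers per request so the
# scheduler leaves headroom (the model has 41 layers per the log).
curl http://localhost:11434/api/generate -d '{
  "model": "mistral-small3.1",
  "prompt": "hello",
  "options": { "num_gpu": 30 }
}'
```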

Reference: github-starred/ollama#33249