[GH-ISSUE #10852] ollama upgrade 0.7.1 breaks mistral-small3.1:24b-instruct-2503-q4_K_M #32886

Closed
opened 2026-04-22 14:48:04 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @foanthoanGH on GitHub (May 25, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10852

What is the issue?

After upgrading Ollama to 0.7.1, this is the error when running the model named in the title:

ollama run mistral-small3.1:24b-instruct-2503-q4_K_M
Error: llama runner process has terminated: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA1 buffer of size 9791055360
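
A possible interim workaround, assuming the failure comes from the scheduler over-committing a single GPU (OLLAMA_CONTEXT_LENGTH and OLLAMA_SCHED_SPREAD are standard server settings and both appear in the server logs later in this thread):

# Start the server with a smaller default context, shrinking the KV cache and compute graph
OLLAMA_CONTEXT_LENGTH=2048 ollama serve

# Or ask the scheduler to spread layers across all available GPUs
OLLAMA_SCHED_SPREAD=1 ollama serve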

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the memory and bug labels 2026-04-22 14:48:05 -05:00
Author
Owner

@MarkWard0110 commented on GitHub (May 26, 2025):

I get the same error and recorded it here too:
https://github.com/ollama/ollama/issues/10553

Author
Owner

@phr0gz commented on GitHub (May 31, 2025):

Same issue with 3 GPUs, using Docker, Ollama version 0.9.0. It works fine with Gemma but not with mistral-small3.1:
With mistral-small3.1:latest:
time=2025-05-31T00:05:46.977Z level=INFO source=routes.go:1234 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:1h0m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]" time=2025-05-31T00:05:46.984Z level=INFO source=images.go:479 msg="total blobs: 23" time=2025-05-31T00:05:46.984Z level=INFO source=images.go:486 msg="total unused blobs removed: 0" time=2025-05-31T00:05:46.985Z level=INFO source=routes.go:1287 msg="Listening on [::]:11434 (version 0.9.0)" time=2025-05-31T00:05:46.985Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs" time=2025-05-31T00:05:47.581Z level=INFO source=types.go:130 msg="inference compute" id=GPU-a000b5fa-b553-4068-d239-bbebd2e92d97 library=cuda variant=v12 compute=8.6 driver=12.8 name="NVIDIA GeForce RTX 3090" total="23.6 GiB" available="23.1 GiB" time=2025-05-31T00:05:47.581Z level=INFO source=types.go:130 msg="inference compute" id=GPU-52aab516-8e4c-8a18-1ec5-ea4e93119667 library=cuda variant=v12 compute=8.9 driver=12.8 name="NVIDIA GeForce RTX 4060 Ti" total="15.6 GiB" available="13.7 GiB" time=2025-05-31T00:05:47.581Z level=INFO source=types.go:130 msg="inference compute" id=GPU-4ccbba1c-2f53-b260-f75e-ad10640a3cc0 library=cuda variant=v12 compute=12.0 driver=12.8 name="NVIDIA GeForce RTX 5060 Ti" total="15.5 GiB" available="15.4 GiB" [GIN] 2025/05/31 - 09:27:03 | 200 | 2.954145ms | 172.17.0.1 | GET "/api/tags" [GIN] 2025/05/31 - 09:27:03 | 200 | 73.571µs | 172.17.0.1 | GET "/api/ps" [GIN] 2025/05/31 - 09:27:04 | 200 | 680.159µs | 172.17.0.1 | GET "/api/version" [GIN] 2025/05/31 - 09:27:55 | 200 | 1.170923ms | 172.17.0.1 | GET "/api/tags" [GIN] 2025/05/31 - 09:27:55 | 200 | 26.97µs | 172.17.0.1 | GET "/api/ps" [GIN] 2025/05/31 - 09:28:48 | 200 | 808.928µs | 172.17.0.1 | GET "/api/tags" [GIN] 2025/05/31 - 09:28:48 | 200 | 43.631µs | 172.17.0.1 | GET "/api/ps" [GIN] 2025/05/31 - 09:28:57 | 200 | 744.249µs | 172.17.0.1 | GET "/api/tags" [GIN] 2025/05/31 - 09:29:09 | 200 | 565.982574ms | 172.17.0.1 | POST "/api/pull" time=2025-05-31T09:29:10.332Z level=INFO source=download.go:177 msg="downloading c5ad996bda6e in 1 556 B part(s)" [GIN] 2025/05/31 - 09:29:11 | 200 | 1.72544728s | 172.17.0.1 | POST "/api/pull" [GIN] 2025/05/31 - 09:29:11 | 200 | 423.362537ms | 172.17.0.1 | POST "/api/pull" [GIN] 2025/05/31 - 09:29:12 | 200 | 422.135132ms | 172.17.0.1 | POST "/api/pull" [GIN] 2025/05/31 - 09:29:12 | 200 | 471.318665ms | 172.17.0.1 | POST "/api/pull" [GIN] 2025/05/31 - 09:37:52 | 200 | 42.591µs | 172.17.0.1 | GET "/api/version" [GIN] 2025/05/31 - 09:37:53 | 200 | 784.93µs | 172.17.0.1 | GET "/api/tags" [GIN] 2025/05/31 - 09:37:53 | 200 | 30.111µs | 172.17.0.1 | GET "/api/ps" cuda driver library 
failed to get device context 800time=2025-05-31T09:38:05.089Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory" cuda driver library failed to get device context 800time=2025-05-31T09:38:05.103Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory" cuda driver library failed to get device context 800time=2025-05-31T09:38:05.114Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory" time=2025-05-31T09:38:05.166Z level=INFO source=sched.go:804 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-1fa8532d986d729117d6b5ac2c884824d0717c9468094554fd1d36412c740cfc library=cuda parallel=2 required="28.8 GiB" cuda driver library failed to get device context 800time=2025-05-31T09:38:05.174Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory" cuda driver library failed to get device context 800time=2025-05-31T09:38:05.179Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory" cuda driver library failed to get device context 800time=2025-05-31T09:38:05.184Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory" time=2025-05-31T09:38:05.184Z level=INFO source=server.go:135 msg="system memory" total="23.5 GiB" free="20.2 GiB" free_swap="6.9 GiB" time=2025-05-31T09:38:05.185Z level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=41 layers.offload=41 layers.split=15,13,13 memory.available="[23.1 GiB 15.4 GiB 13.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="28.8 GiB" memory.required.partial="28.8 GiB" memory.required.kv="1.2 GiB" memory.required.allocations="[16.3 GiB 6.5 GiB 6.1 GiB]" memory.weights.total="13.1 GiB" memory.weights.repeating="12.7 GiB" memory.weights.nonrepeating="360.0 MiB" memory.graph.full="853.3 MiB" memory.graph.partial="853.3 MiB" projector.weights="769.3 MiB" projector.graph="8.8 GiB" time=2025-05-31T09:38:05.222Z level=INFO source=server.go:431 msg="starting llama server" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-1fa8532d986d729117d6b5ac2c884824d0717c9468094554fd1d36412c740cfc --ctx-size 8192 --batch-size 512 --n-gpu-layers 41 --threads 16 --no-mmap --parallel 2 --tensor-split 15,13,13 --port 44989" time=2025-05-31T09:38:05.223Z level=INFO source=sched.go:483 msg="loaded runners" count=1 time=2025-05-31T09:38:05.223Z level=INFO source=server.go:591 msg="waiting for llama runner to start responding" time=2025-05-31T09:38:05.223Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding" time=2025-05-31T09:38:05.241Z level=INFO source=runner.go:925 msg="starting ollama engine" time=2025-05-31T09:38:05.242Z level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:44989" time=2025-05-31T09:38:05.285Z level=INFO source=ggml.go:92 msg="" architecture=mistral3 file_type=Q4_K_M name="" description="" num_tensors=585 num_key_values=43 load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so ggml_cuda_init: failed to initialize CUDA: no CUDA-capable device is detected load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so time=2025-05-31T09:38:05.365Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc) time=2025-05-31T09:38:05.367Z level=INFO source=ggml.go:351 msg="model weights" buffer=CPU size="14.4 GiB" 
time=2025-05-31T09:38:05.475Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model" time=2025-05-31T09:38:05.776Z level=INFO source=ggml.go:638 msg="compute graph" backend=CPU buffer_type=CPU size="9.0 GiB" time=2025-05-31T09:38:06.372Z level=INFO source=ggml.go:638 msg="compute graph" backend=CPU buffer_type=CPU size="9.0 GiB" time=2025-05-31T09:38:15.767Z level=INFO source=server.go:630 msg="llama runner started in 10.54 seconds" [GIN] 2025/05/31 - 09:38:48 | 200 | 42.998980387s | 172.17.0.1 | POST "/api/chat" [GIN] 2025/05/31 - 09:39:13 | 200 | 24.927407081s | 172.17.0.1 | POST "/api/chat" [GIN] 2025/05/31 - 09:39:32 | 200 | 18.951985694s | 172.17.0.1 | POST "/api/chat" [GIN] 2025/05/31 - 09:41:52 | 200 | 2.984947ms | 172.17.0.1 | GET "/api/tags" [GIN] 2025/05/31 - 09:41:52 | 200 | 77.911µs | 172.17.0.1 | GET "/api/ps" [GIN] 2025/05/31 - 09:41:53 | 200 | 768.509µs | 172.17.0.1 | GET "/api/tags" [GIN] 2025/05/31 - 09:41:53 | 200 | 52.1µs | 172.17.0.1 | GET "/api/ps" [GIN] 2025/05/31 - 09:42:08 | 200 | 798.62µs | 172.17.0.1 | GET "/api/tags" [GIN] 2025/05/31 - 09:42:08 | 200 | 57.511µs | 172.17.0.1 | GET "/api/ps" [GIN] 2025/05/31 - 09:42:11 | 200 | 1.286197ms | 172.17.0.1 | GET "/api/tags"
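
Note that in this run the CUDA backend reports "failed to initialize CUDA: no CUDA-capable device is detected", so the whole model fell back to CPU buffers. A quick check, assuming the standard Ollama Docker setup (the container name "ollama" is an assumption), that the GPUs are actually visible inside the container:

# Verify the NVIDIA runtime exposes the GPUs to the container
docker exec -it ollama nvidia-smi

# If that fails, recreate the container with GPU access enabled
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama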

With mistral-small3.1:24b-instruct-2503-q8_0

time=2025-05-31T10:00:06.626Z level=INFO source=routes.go:1234 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:1h0m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]" time=2025-05-31T10:00:06.627Z level=INFO source=images.go:479 msg="total blobs: 23" time=2025-05-31T10:00:06.628Z level=INFO source=images.go:486 msg="total unused blobs removed: 0" time=2025-05-31T10:00:06.628Z level=INFO source=routes.go:1287 msg="Listening on [::]:11434 (version 0.9.0)" time=2025-05-31T10:00:06.628Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs" time=2025-05-31T10:00:07.160Z level=INFO source=types.go:130 msg="inference compute" id=GPU-a000b5fa-b553-4068-d239-bbebd2e92d97 library=cuda variant=v12 compute=8.6 driver=12.8 name="NVIDIA GeForce RTX 3090" total="23.6 GiB" available="23.1 GiB" time=2025-05-31T10:00:07.160Z level=INFO source=types.go:130 msg="inference compute" id=GPU-52aab516-8e4c-8a18-1ec5-ea4e93119667 library=cuda variant=v12 compute=8.9 driver=12.8 name="NVIDIA GeForce RTX 4060 Ti" total="15.6 GiB" available="13.7 GiB" time=2025-05-31T10:00:07.160Z level=INFO source=types.go:130 msg="inference compute" id=GPU-4ccbba1c-2f53-b260-f75e-ad10640a3cc0 library=cuda variant=v12 compute=12.0 driver=12.8 name="NVIDIA GeForce RTX 5060 Ti" total="15.5 GiB" available="15.4 GiB" [GIN] 2025/05/31 - 10:02:06 | 200 | 919.621µs | 172.17.0.1 | GET "/api/tags" [GIN] 2025/05/31 - 10:02:06 | 200 | 57.281µs | 172.17.0.1 | GET "/api/ps" time=2025-05-31T10:02:11.264Z level=INFO source=sched.go:804 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-de0f4b9634e4bb82a84dd0a376c4a6787dbf4ce5b52a62e39be103bc9c8245d0 library=cuda parallel=2 required="39.2 GiB" time=2025-05-31T10:02:11.700Z level=INFO source=server.go:135 msg="system memory" total="23.5 GiB" free="21.1 GiB" free_swap="5.8 GiB" time=2025-05-31T10:02:11.701Z level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=41 layers.offload=41 layers.split=15,13,13 memory.available="[23.1 GiB 15.4 GiB 13.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="39.2 GiB" memory.required.partial="39.2 GiB" memory.required.kv="1.2 GiB" memory.required.allocations="[20.2 GiB 9.5 GiB 9.5 GiB]" memory.weights.total="22.8 GiB" memory.weights.repeating="22.2 GiB" memory.weights.nonrepeating="680.0 MiB" memory.graph.full="853.3 MiB" memory.graph.partial="853.3 MiB" projector.weights="769.3 MiB" projector.graph="8.8 GiB" time=2025-05-31T10:02:11.742Z level=INFO source=server.go:431 msg="starting llama server" cmd="/usr/bin/ollama runner --ollama-engine --model 
/root/.ollama/models/blobs/sha256-de0f4b9634e4bb82a84dd0a376c4a6787dbf4ce5b52a62e39be103bc9c8245d0 --ctx-size 8192 --batch-size 512 --n-gpu-layers 41 --threads 16 --no-mmap --parallel 2 --tensor-split 15,13,13 --port 37617" time=2025-05-31T10:02:11.742Z level=INFO source=sched.go:483 msg="loaded runners" count=1 time=2025-05-31T10:02:11.742Z level=INFO source=server.go:591 msg="waiting for llama runner to start responding" time=2025-05-31T10:02:11.742Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding" time=2025-05-31T10:02:11.755Z level=INFO source=runner.go:925 msg="starting ollama engine" time=2025-05-31T10:02:11.756Z level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:37617" time=2025-05-31T10:02:11.798Z level=INFO source=ggml.go:92 msg="" architecture=mistral3 file_type=Q8_0 name="" description="" num_tensors=585 num_key_values=43 load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 3 CUDA devices: Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes Device 1: NVIDIA GeForce RTX 5060 Ti, compute capability 12.0, VMM: yes Device 2: NVIDIA GeForce RTX 4060 Ti, compute capability 8.9, VMM: yes time=2025-05-31T10:02:11.994Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model" load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so time=2025-05-31T10:02:12.084Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 CUDA.2.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.2.USE_GRAPHS=1 CUDA.2.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc) time=2025-05-31T10:02:12.197Z level=INFO source=ggml.go:351 msg="model weights" buffer=CUDA2 size="8.1 GiB" time=2025-05-31T10:02:12.197Z level=INFO source=ggml.go:351 msg="model weights" buffer=CPU size="680.0 MiB" time=2025-05-31T10:02:12.197Z level=INFO source=ggml.go:351 msg="model weights" buffer=CUDA0 size="8.3 GiB" time=2025-05-31T10:02:12.197Z level=INFO source=ggml.go:351 msg="model weights" buffer=CUDA1 size="7.2 GiB" ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9337.48 MiB on device 2: cudaMalloc failed: out of memory ggml_gallocr_reserve_n: failed to allocate CUDA2 buffer of size 9791055360 time=2025-05-31T10:02:12.585Z level=INFO source=ggml.go:638 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="0 B" time=2025-05-31T10:02:12.586Z level=INFO source=ggml.go:638 msg="compute graph" backend=CUDA1 buffer_type=CUDA1 size="0 B" time=2025-05-31T10:02:12.586Z level=INFO source=ggml.go:638 msg="compute graph" backend=CUDA2 buffer_type=CUDA2 size="9.1 GiB" time=2025-05-31T10:02:12.586Z level=INFO source=ggml.go:638 msg="compute graph" backend=CPU buffer_type=CPU size="0 B" panic: insufficient memory - required allocations: {InputWeights:713031680A CPU:{Name:CPU Weights:[0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U] Cache:[0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 
0U 0U] Graph:0A} GPUs:[{Name:CUDA0 Weights:[595435520A 595435520A 595435520A 595435520A 595435520A 595435520A 595435520A 595435520A 595435520A 595435520A 595435520A 595435520A 595435520A 595435520A 595435520A 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U] Cache:[0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U] Graph:0A} {Name:CUDA1 Weights:[0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 595435520A 595435520A 595435520A 595435520A 595435520A 595435520A 595435520A 595435520A 595435520A 595435520A 595435520A 595435520A 595435520A 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U] Cache:[0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U] Graph:0A} {Name:CUDA2 Weights:[0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 595435520A 595435520A 595435520A 595435520A 595435520A 595435520A 595435520A 595435520A 595435520A 595435520A 595435520A 595435520A 1591070720A] Cache:[0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U] Graph:9791055360F}]} goroutine 13 [running]: github.com/ollama/ollama/ml/backend/ggml.(*Context).Reserve(0xc00051a1c0) github.com/ollama/ollama/ml/backend/ggml/ggml.go:643 +0x756 github.com/ollama/ollama/runner/ollamarunner.multimodalStore.getTensor(0xc0005919f8?, {0x5da9505847e8, 0xc0004d2510}, {0x5da950588808, 0xc0010bbec0}, {0x5da9505928e8, 0xc0002c1de8}, 0x1) github.com/ollama/ollama/runner/ollamarunner/multimodal.go:98 +0x2a4 github.com/ollama/ollama/runner/ollamarunner.multimodalStore.getMultimodal(0xc000591cd8, {0x5da9505847e8, 0xc0004d2510}, {0x5da950588808, 0xc0010bbec0}, {0xc0010b00a0, 0x1, 0x5da9503c9900?}, 0x1) github.com/ollama/ollama/runner/ollamarunner/multimodal.go:56 +0xe5 github.com/ollama/ollama/runner/ollamarunner.(*Server).reserveWorstCaseGraph(0xc000551c20) github.com/ollama/ollama/runner/ollamarunner/runner.go:796 +0x70e github.com/ollama/ollama/runner/ollamarunner.(*Server).initModel(0xc000551c20, {0x7ffd20a29cb8?, 0x0?}, {0x10, 0x0, 0x29, {0xc0003209d0, 0x3, 0x3}, 0x0}, ...) github.com/ollama/ollama/runner/ollamarunner/runner.go:865 +0x270 github.com/ollama/ollama/runner/ollamarunner.(*Server).load(0xc000551c20, {0x5da950580990, 0xc0000fd7c0}, {0x7ffd20a29cb8?, 0x0?}, {0x10, 0x0, 0x29, {0xc0003209d0, 0x3, ...}, ...}, ...) 
github.com/ollama/ollama/runner/ollamarunner/runner.go:878 +0xb8 created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1 github.com/ollama/ollama/runner/ollamarunner/runner.go:959 +0xa11 time=2025-05-31T10:02:12.747Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding" time=2025-05-31T10:02:12.904Z level=ERROR source=server.go:457 msg="llama runner terminated" error="exit status 2" time=2025-05-31T10:02:12.998Z level=ERROR source=sched.go:489 msg="error loading llama server" error="llama runner process has terminated: cudaMalloc failed: out of memory\nggml_gallocr_reserve_n: failed to allocate CUDA2 buffer of size 9791055360" [GIN] 2025/05/31 - 10:02:12 | 500 | 2.280541974s | 172.17.0.1 | POST "/api/chat" time=2025-05-31T10:02:18.014Z level=WARN source=sched.go:687 msg="gpu VRAM usage didn't recover within timeout" seconds=5.015909647 runner.size="39.2 GiB" runner.vram="39.2 GiB" runner.parallel=2 runner.pid=42 runner.model=/root/.ollama/models/blobs/sha256-de0f4b9634e4bb82a84dd0a376c4a6787dbf4ce5b52a62e39be103bc9c8245d0 time=2025-05-31T10:02:18.432Z level=WARN source=sched.go:687 msg="gpu VRAM usage didn't recover within timeout" seconds=5.434247087 runner.size="39.2 GiB" runner.vram="39.2 GiB" runner.parallel=2 runner.pid=42 runner.model=/root/.ollama/models/blobs/sha256-de0f4b9634e4bb82a84dd0a376c4a6787dbf4ce5b52a62e39be103bc9c8245d0 time=2025-05-31T10:02:18.849Z level=WARN source=sched.go:687 msg="gpu VRAM usage didn't recover within timeout" seconds=5.851670497 runner.size="39.2 GiB" runner.vram="39.2 GiB" runner.parallel=2 runner.pid=42 runner.model=/root/.ollama/models/blobs/sha256-de0f4b9634e4bb82a84dd0a376c4a6787dbf4ce5b52a62e39be103bc9c8245d0
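
In both failing logs the allocation that blows up is the ~9.1 GiB compute graph (driven by projector.graph="8.8 GiB" for the vision projector), reserved on one of the 16 GB cards. Two rough experiments, assuming Docker and the device ordering shown in the logs (ordinal 0 for the RTX 3090 is an assumption):

# Pin Ollama to the 24 GiB RTX 3090 only and see whether the model loads
docker run -d --gpus=all -e CUDA_VISIBLE_DEVICES=0 -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Or force the scheduler to spread the load across all three GPUs
docker run -d --gpus=all -e OLLAMA_SCHED_SPREAD=1 -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama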

Author
Owner

@phr0gz commented on GitHub (May 31, 2025):

The Ollama container was restarted for each test.
And here is the working test with Gemma.
With mix_77/gemma3-qat-tools:27b:
time=2025-05-31T10:07:23.587Z level=INFO source=routes.go:1234 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:1h0m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]" time=2025-05-31T10:07:23.588Z level=INFO source=images.go:479 msg="total blobs: 23" time=2025-05-31T10:07:23.588Z level=INFO source=images.go:486 msg="total unused blobs removed: 0" time=2025-05-31T10:07:23.589Z level=INFO source=routes.go:1287 msg="Listening on [::]:11434 (version 0.9.0)" time=2025-05-31T10:07:23.589Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs" time=2025-05-31T10:07:24.116Z level=INFO source=types.go:130 msg="inference compute" id=GPU-a000b5fa-b553-4068-d239-bbebd2e92d97 library=cuda variant=v12 compute=8.6 driver=12.8 name="NVIDIA GeForce RTX 3090" total="23.6 GiB" available="23.1 GiB" time=2025-05-31T10:07:24.116Z level=INFO source=types.go:130 msg="inference compute" id=GPU-52aab516-8e4c-8a18-1ec5-ea4e93119667 library=cuda variant=v12 compute=8.9 driver=12.8 name="NVIDIA GeForce RTX 4060 Ti" total="15.6 GiB" available="13.7 GiB" time=2025-05-31T10:07:24.116Z level=INFO source=types.go:130 msg="inference compute" id=GPU-4ccbba1c-2f53-b260-f75e-ad10640a3cc0 library=cuda variant=v12 compute=12.0 driver=12.8 name="NVIDIA GeForce RTX 5060 Ti" total="15.5 GiB" available="15.4 GiB" [GIN] 2025/05/31 - 10:07:25 | 200 | 931.681µs | 172.17.0.1 | GET "/api/tags" [GIN] 2025/05/31 - 10:07:25 | 200 | 102.231µs | 172.17.0.1 | GET "/api/ps" [GIN] 2025/05/31 - 10:07:29 | 200 | 923.902µs | 172.17.0.1 | GET "/api/tags" [GIN] 2025/05/31 - 10:07:29 | 200 | 30.6µs | 172.17.0.1 | GET "/api/ps" [GIN] 2025/05/31 - 10:07:36 | 200 | 808.98µs | 172.17.0.1 | GET "/api/tags" [GIN] 2025/05/31 - 10:07:36 | 200 | 19.681µs | 172.17.0.1 | GET "/api/ps" time=2025-05-31T10:08:05.665Z level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-ccc0cddac56136ef0969cf2e3e9ac051124c937be42503b47ec570dead85ff87 gpu=GPU-a000b5fa-b553-4068-d239-bbebd2e92d97 parallel=2 available=24760745984 required="20.7 GiB" time=2025-05-31T10:08:06.107Z level=INFO source=server.go:135 msg="system memory" total="23.5 GiB" free="21.1 GiB" free_swap="5.8 GiB" time=2025-05-31T10:08:06.109Z level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=63 layers.offload=63 layers.split="" memory.available="[23.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="20.7 GiB" memory.required.partial="20.7 GiB" memory.required.kv="1.6 GiB" memory.required.allocations="[20.7 GiB]" memory.weights.total="16.0 GiB" memory.weights.repeating="13.4 GiB" memory.weights.nonrepeating="2.6 GiB" 
memory.graph.full="565.0 MiB" memory.graph.partial="1.6 GiB" projector.weights="806.2 MiB" projector.graph="1.0 GiB" time=2025-05-31T10:08:06.152Z level=INFO source=server.go:431 msg="starting llama server" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-ccc0cddac56136ef0969cf2e3e9ac051124c937be42503b47ec570dead85ff87 --ctx-size 8192 --batch-size 512 --n-gpu-layers 63 --threads 16 --parallel 2 --port 33303" time=2025-05-31T10:08:06.153Z level=INFO source=sched.go:483 msg="loaded runners" count=1 time=2025-05-31T10:08:06.153Z level=INFO source=server.go:591 msg="waiting for llama runner to start responding" time=2025-05-31T10:08:06.153Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding" time=2025-05-31T10:08:06.165Z level=INFO source=runner.go:925 msg="starting ollama engine" time=2025-05-31T10:08:06.166Z level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:33303" time=2025-05-31T10:08:06.205Z level=INFO source=ggml.go:92 msg="" architecture=gemma3 file_type=Q4_0 name="" description="" num_tensors=1247 num_key_values=40 load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 1 CUDA devices: Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so time=2025-05-31T10:08:06.302Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc) time=2025-05-31T10:08:06.404Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model" time=2025-05-31T10:08:06.408Z level=INFO source=ggml.go:351 msg="model weights" buffer=CUDA0 size="16.8 GiB" time=2025-05-31T10:08:06.408Z level=INFO source=ggml.go:351 msg="model weights" buffer=CPU size="2.6 GiB" time=2025-05-31T10:08:06.632Z level=INFO source=ggml.go:638 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="1.1 GiB" time=2025-05-31T10:08:06.632Z level=INFO source=ggml.go:638 msg="compute graph" backend=CPU buffer_type=CPU size="0 B" time=2025-05-31T10:08:06.652Z level=INFO source=ggml.go:638 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="1.1 GiB" time=2025-05-31T10:08:06.652Z level=INFO source=ggml.go:638 msg="compute graph" backend=CPU buffer_type=CPU size="10.5 MiB" time=2025-05-31T10:08:10.418Z level=INFO source=server.go:630 msg="llama runner started in 4.27 seconds" [GIN] 2025/05/31 - 10:08:14 | 200 | 9.40841991s | 172.17.0.1 | POST "/api/chat" [GIN] 2025/05/31 - 10:08:15 | 200 | 1.044863778s | 172.17.0.1 | POST "/api/chat" [GIN] 2025/05/31 - 10:08:16 | 200 | 863.764169ms | 172.17.0.1 | POST "/api/chat"

Author
Owner

@heapsoftware commented on GitHub (Jun 26, 2025):

I have the same issue here with mistral-small3.1 using two 5090s and num_ctx at 25000:
500: llama runner process has terminated: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA1 buffer of size 9791055360

It appears it's not splitting the model between the two cards and is trying to fit it onto one, since num_ctx at 24000 works and the model is placed on only one card.
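
For anyone reproducing that threshold, num_ctx can be set per request through the API (a minimal sketch; 24000/25000 are just the values reported above):

curl http://localhost:11434/api/generate -d '{
  "model": "mistral-small3.1",
  "prompt": "hello",
  "options": { "num_ctx": 24000 }
}'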

Author
Owner

@jessegross commented on GitHub (Sep 24, 2025):

I'm going to go ahead and close this now that the new memory management logic is on by default. If you continue to see problems, please file a new issue.

Reference: github-starred/ollama#32886