[GH-ISSUE #13235] VRAM runs out when loading models one after another #8751

Open
opened 2026-04-12 21:31:10 -05:00 by GiteaMirror · 7 comments
Owner

Originally created by @tekuusne on GitHub (Nov 25, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13235

What is the issue?

I have a Python program that I use to test models: it loads the installed models one after another and runs prompts on each. The first few models load onto the GPUs just fine, but (in this case) the third one ends up roughly half on CPU and half on GPU, at which point I stopped the program because it would have taken too long to complete. The log seems to say it runs out of memory, but I don't know what to do next. Doing 'service ollama restart' clears the situation and the first few models load fine again.
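
For context, a minimal sketch of the kind of loop the script runs (simplified; the prompt is a placeholder, and it just uses the plain HTTP API that shows up as GET "/api/tags" and POST "/api/generate" in the log below):

```python
import requests

OLLAMA = "http://localhost:11434"

# List the installed models (the GET /api/tags call in the log).
models = [m["name"] for m in requests.get(f"{OLLAMA}/api/tags", timeout=30).json()["models"]]

for name in models:
    # Run one non-streaming prompt per model (the POST /api/generate calls in the log).
    resp = requests.post(
        f"{OLLAMA}/api/generate",
        json={"model": name, "prompt": "Say hello in one sentence.", "stream": False},
        timeout=600,
    )
    resp.raise_for_status()
    print(name, resp.json()["response"][:80])
```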

It's essentially the same problem described in https://github.com/ollama/ollama/issues/7606#issuecomment-2815582029, except that I'm on Linux with NVIDIA GPUs.

This used to work until I reinstalled recently; the earlier system was last updated in May and ran whatever Ollama version was current then. Ollama (and Open WebUI) are installed in an LXC container with 70 GB of RAM and three RTX 3060s passed through to it. My environment variables are pretty much stock, I think, except for OLLAMA_KEEP_ALIVE, which is set to keep models loaded forever.
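
For reference, keep_alive can apparently also be overridden per request (based on my reading of the API docs; the script does not currently do this), so explicitly unloading a model between tests would look roughly like this:

```python
import requests

OLLAMA = "http://localhost:11434"

def unload(model: str) -> None:
    # A generate request with no prompt and keep_alive=0 asks the server to
    # unload the model right away instead of honoring OLLAMA_KEEP_ALIVE.
    requests.post(
        f"{OLLAMA}/api/generate",
        json={"model": model, "keep_alive": 0},
        timeout=60,
    ).raise_for_status()
```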

Relevant log output

Nov 25 06:42:32 ai systemd[1]: Starting ollama.service - Ollama Service...
Nov 25 06:42:32 ai systemd[1]: Started ollama.service - Ollama Service.
Nov 25 06:42:32 ai ollama[95]: time=2025-11-25T06:42:32.422+02:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Nov 25 06:42:32 ai ollama[95]: time=2025-11-25T06:42:32.435+02:00 level=INFO source=images.go:522 msg="total blobs: 111"
Nov 25 06:42:32 ai ollama[95]: time=2025-11-25T06:42:32.437+02:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
Nov 25 06:42:32 ai ollama[95]: time=2025-11-25T06:42:32.438+02:00 level=INFO source=routes.go:1597 msg="Listening on [::]:11434 (version 0.13.0)"
Nov 25 06:42:32 ai ollama[95]: time=2025-11-25T06:42:32.439+02:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
Nov 25 06:42:32 ai ollama[95]: time=2025-11-25T06:42:32.439+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 44713"
Nov 25 06:42:33 ai ollama[95]: time=2025-11-25T06:42:33.309+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 45837"
Nov 25 06:42:34 ai ollama[95]: time=2025-11-25T06:42:34.120+02:00 level=INFO source=runner.go:102 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
Nov 25 06:42:34 ai ollama[95]: time=2025-11-25T06:42:34.120+02:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c filter_id="" library=CUDA compute=8.6 name=CUDA0 description="NVIDIA GeForce RTX 3060" libdirs=ollama,cuda_v12 driver=13.0 pci_id=0000:01:00.0 type=discrete total="12.0 GiB" available="11.6 GiB"
Nov 25 06:42:34 ai ollama[95]: time=2025-11-25T06:42:34.120+02:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a filter_id="" library=CUDA compute=8.6 name=CUDA1 description="NVIDIA GeForce RTX 3060" libdirs=ollama,cuda_v12 driver=13.0 pci_id=0000:02:00.0 type=discrete total="12.0 GiB" available="11.5 GiB"
Nov 25 06:42:34 ai ollama[95]: time=2025-11-25T06:42:34.120+02:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b filter_id="" library=CUDA compute=8.6 name=CUDA2 description="NVIDIA GeForce RTX 3060" libdirs=ollama,cuda_v12 driver=13.0 pci_id=0000:03:00.0 type=discrete total="12.0 GiB" available="11.5 GiB"
Nov 25 06:43:06 ai ollama[95]: [GIN] 2025/11/25 - 06:43:06 | 200 |    6.540316ms |    192.168.1.16 | GET      "/api/tags"
Nov 25 06:43:06 ai ollama[95]: time=2025-11-25T06:43:06.270+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 43265"
Nov 25 06:43:07 ai ollama[95]: time=2025-11-25T06:43:07.115+02:00 level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"
Nov 25 06:43:07 ai ollama[95]: time=2025-11-25T06:43:07.235+02:00 level=INFO source=server.go:209 msg="enabling flash attention"
Nov 25 06:43:07 ai ollama[95]: time=2025-11-25T06:43:07.236+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-4c7fee11ee9e3b139575eedb4cd68521729ece7fc0a356150a6672e773c607ea --port 46035"
Nov 25 06:43:07 ai ollama[95]: time=2025-11-25T06:43:07.236+02:00 level=INFO source=sched.go:443 msg="system memory" total="70.0 GiB" free="69.1 GiB" free_swap="256.0 MiB"
Nov 25 06:43:07 ai ollama[95]: time=2025-11-25T06:43:07.236+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c library=CUDA available="11.2 GiB" free="11.6 GiB" minimum="457.0 MiB" overhead="0 B"
Nov 25 06:43:07 ai ollama[95]: time=2025-11-25T06:43:07.236+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a library=CUDA available="11.1 GiB" free="11.5 GiB" minimum="457.0 MiB" overhead="0 B"
Nov 25 06:43:07 ai ollama[95]: time=2025-11-25T06:43:07.236+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b library=CUDA available="11.1 GiB" free="11.5 GiB" minimum="457.0 MiB" overhead="0 B"
Nov 25 06:43:07 ai ollama[95]: time=2025-11-25T06:43:07.236+02:00 level=INFO source=server.go:702 msg="loading model" "model layers"=65 requested=-1
Nov 25 06:43:07 ai ollama[95]: time=2025-11-25T06:43:07.249+02:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
Nov 25 06:43:07 ai ollama[95]: time=2025-11-25T06:43:07.249+02:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:46035"
Nov 25 06:43:07 ai ollama[95]: time=2025-11-25T06:43:07.258+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:25600 KvCacheType: NumThreads:1 GPULayers:65[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:65(0..64)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 25 06:43:07 ai ollama[95]: time=2025-11-25T06:43:07.302+02:00 level=INFO source=ggml.go:136 msg="" architecture=qwen3vl file_type=Q4_K_M name="" description="" num_tensors=1166 num_key_values=40
Nov 25 06:43:07 ai ollama[95]: load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
Nov 25 06:43:07 ai ollama[95]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Nov 25 06:43:07 ai ollama[95]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Nov 25 06:43:07 ai ollama[95]: ggml_cuda_init: found 3 CUDA devices:
Nov 25 06:43:07 ai ollama[95]:   Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes, ID: GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c
Nov 25 06:43:07 ai ollama[95]:   Device 1: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes, ID: GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a
Nov 25 06:43:07 ai ollama[95]:   Device 2: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes, ID: GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b
Nov 25 06:43:07 ai ollama[95]: load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
Nov 25 06:43:07 ai ollama[95]: time=2025-11-25T06:43:07.583+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 CUDA.2.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.2.USE_GRAPHS=1 CUDA.2.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Nov 25 06:43:08 ai ollama[95]: time=2025-11-25T06:43:08.685+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:25600 KvCacheType: NumThreads:1 GPULayers:65[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:15(0..14) ID:GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Layers:27(15..41) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:23(42..64)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 25 06:43:09 ai ollama[95]: time=2025-11-25T06:43:09.788+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:25600 KvCacheType: NumThreads:1 GPULayers:65[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:26(0..25) ID:GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Layers:26(26..51) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:13(52..64)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 25 06:43:10 ai ollama[95]: time=2025-11-25T06:43:10.516+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:25600 KvCacheType: NumThreads:1 GPULayers:65[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:26(0..25) ID:GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Layers:26(26..51) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:13(52..64)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 25 06:43:11 ai ollama[95]: time=2025-11-25T06:43:11.514+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:25600 KvCacheType: NumThreads:1 GPULayers:65[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:26(0..25) ID:GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Layers:26(26..51) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:13(52..64)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 25 06:43:11 ai ollama[95]: time=2025-11-25T06:43:11.515+02:00 level=INFO source=ggml.go:482 msg="offloading 64 repeating layers to GPU"
Nov 25 06:43:11 ai ollama[95]: time=2025-11-25T06:43:11.515+02:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
Nov 25 06:43:11 ai ollama[95]: time=2025-11-25T06:43:11.515+02:00 level=INFO source=ggml.go:494 msg="offloaded 65/65 layers to GPU"
Nov 25 06:43:11 ai ollama[95]: time=2025-11-25T06:43:11.515+02:00 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="7.1 GiB"
Nov 25 06:43:11 ai ollama[95]: time=2025-11-25T06:43:11.515+02:00 level=INFO source=device.go:240 msg="model weights" device=CUDA1 size="6.9 GiB"
Nov 25 06:43:11 ai ollama[95]: time=2025-11-25T06:43:11.515+02:00 level=INFO source=device.go:240 msg="model weights" device=CUDA2 size="5.1 GiB"
Nov 25 06:43:11 ai ollama[95]: time=2025-11-25T06:43:11.515+02:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="417.3 MiB"
Nov 25 06:43:11 ai ollama[95]: time=2025-11-25T06:43:11.515+02:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="2.5 GiB"
Nov 25 06:43:11 ai ollama[95]: time=2025-11-25T06:43:11.515+02:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA1 size="2.5 GiB"
Nov 25 06:43:11 ai ollama[95]: time=2025-11-25T06:43:11.515+02:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA2 size="1.2 GiB"
Nov 25 06:43:11 ai ollama[95]: time=2025-11-25T06:43:11.515+02:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="1.5 GiB"
Nov 25 06:43:11 ai ollama[95]: time=2025-11-25T06:43:11.515+02:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA1 size="1.3 GiB"
Nov 25 06:43:11 ai ollama[95]: time=2025-11-25T06:43:11.515+02:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA2 size="4.7 GiB"
Nov 25 06:43:11 ai ollama[95]: time=2025-11-25T06:43:11.515+02:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="79.1 MiB"
Nov 25 06:43:11 ai ollama[95]: time=2025-11-25T06:43:11.515+02:00 level=INFO source=device.go:272 msg="total memory" size="33.3 GiB"
Nov 25 06:43:11 ai ollama[95]: time=2025-11-25T06:43:11.515+02:00 level=INFO source=sched.go:517 msg="loaded runners" count=1
Nov 25 06:43:11 ai ollama[95]: time=2025-11-25T06:43:11.515+02:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
Nov 25 06:43:11 ai ollama[95]: time=2025-11-25T06:43:11.515+02:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
Nov 25 06:43:15 ai ollama[95]: time=2025-11-25T06:43:15.275+02:00 level=INFO source=server.go:1332 msg="llama runner started in 8.04 seconds"
Nov 25 06:43:27 ai ollama[95]: [GIN] 2025/11/25 - 06:43:27 | 200 | 21.216349643s |    192.168.1.16 | POST     "/api/generate"
Nov 25 06:43:27 ai ollama[95]: ggml_backend_cuda_device_get_memory device GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c utilizing NVML memory reporting free: 329252864 total: 12884901888
Nov 25 06:43:27 ai ollama[95]: ggml_backend_cuda_device_get_memory device GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a utilizing NVML memory reporting free: 838598656 total: 12884901888
Nov 25 06:43:27 ai ollama[95]: ggml_backend_cuda_device_get_memory device GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b utilizing NVML memory reporting free: 595329024 total: 12884901888
Nov 25 06:43:27 ai ollama[95]: time=2025-11-25T06:43:27.588+02:00 level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"
Nov 25 06:43:27 ai ollama[95]: time=2025-11-25T06:43:27.607+02:00 level=INFO source=sched.go:583 msg="updated VRAM based on existing loaded models" gpu=GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c library=CUDA total="12.0 GiB" available="314.0 MiB"
Nov 25 06:43:27 ai ollama[95]: time=2025-11-25T06:43:27.607+02:00 level=INFO source=sched.go:583 msg="updated VRAM based on existing loaded models" gpu=GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a library=CUDA total="12.0 GiB" available="799.8 MiB"
Nov 25 06:43:27 ai ollama[95]: time=2025-11-25T06:43:27.607+02:00 level=INFO source=sched.go:583 msg="updated VRAM based on existing loaded models" gpu=GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b library=CUDA total="12.0 GiB" available="567.8 MiB"
Nov 25 06:43:27 ai ollama[95]: time=2025-11-25T06:43:27.676+02:00 level=INFO source=server.go:209 msg="enabling flash attention"
Nov 25 06:43:27 ai ollama[95]: time=2025-11-25T06:43:27.676+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-3291abe70f16ee9682de7bfae08db5373ea9d6497e614aaad63340ad421d6312 --port 42159"
Nov 25 06:43:27 ai ollama[95]: time=2025-11-25T06:43:27.677+02:00 level=INFO source=sched.go:443 msg="system memory" total="70.0 GiB" free="68.3 GiB" free_swap="256.0 MiB"
Nov 25 06:43:27 ai ollama[95]: time=2025-11-25T06:43:27.677+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c library=CUDA available="0 B" free="314.0 MiB" minimum="457.0 MiB" overhead="0 B"
Nov 25 06:43:27 ai ollama[95]: time=2025-11-25T06:43:27.677+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a library=CUDA available="342.8 MiB" free="799.8 MiB" minimum="457.0 MiB" overhead="0 B"
Nov 25 06:43:27 ai ollama[95]: time=2025-11-25T06:43:27.677+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b library=CUDA available="110.8 MiB" free="567.8 MiB" minimum="457.0 MiB" overhead="0 B"
Nov 25 06:43:27 ai ollama[95]: time=2025-11-25T06:43:27.677+02:00 level=INFO source=server.go:702 msg="loading model" "model layers"=65 requested=-1
Nov 25 06:43:27 ai ollama[95]: time=2025-11-25T06:43:27.690+02:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
Nov 25 06:43:27 ai ollama[95]: time=2025-11-25T06:43:27.690+02:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:42159"
Nov 25 06:43:27 ai ollama[95]: time=2025-11-25T06:43:27.698+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:32768 KvCacheType: NumThreads:1 GPULayers:65[ID:GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Layers:65(0..64)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 25 06:43:27 ai ollama[95]: time=2025-11-25T06:43:27.734+02:00 level=INFO source=ggml.go:136 msg="" architecture=qwen3 file_type=Q4_K_M name="Qwen3 32B" description="" num_tensors=707 num_key_values=28
Nov 25 06:43:27 ai ollama[95]: load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
Nov 25 06:43:27 ai ollama[95]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Nov 25 06:43:27 ai ollama[95]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Nov 25 06:43:27 ai ollama[95]: ggml_cuda_init: found 3 CUDA devices:
Nov 25 06:43:27 ai ollama[95]:   Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes, ID: GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c
Nov 25 06:43:27 ai ollama[95]:   Device 1: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes, ID: GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a
Nov 25 06:43:27 ai ollama[95]:   Device 2: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes, ID: GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b
Nov 25 06:43:27 ai ollama[95]: load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
Nov 25 06:43:27 ai ollama[95]: time=2025-11-25T06:43:27.844+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 CUDA.2.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.2.USE_GRAPHS=1 CUDA.2.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Nov 25 06:43:28 ai ollama[95]: time=2025-11-25T06:43:28.200+02:00 level=INFO source=server.go:974 msg="model requires more memory than is currently available, evicting a model to make space" "loaded layers"=0
Nov 25 06:43:28 ai ollama[95]: time=2025-11-25T06:43:28.200+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:false KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 25 06:43:28 ai ollama[95]: time=2025-11-25T06:43:28.200+02:00 level=INFO source=device.go:240 msg="model weights" device=CUDA1 size="18.4 GiB"
Nov 25 06:43:28 ai ollama[95]: time=2025-11-25T06:43:28.200+02:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="417.3 MiB"
Nov 25 06:43:28 ai ollama[95]: time=2025-11-25T06:43:28.200+02:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA1 size="8.0 GiB"
Nov 25 06:43:28 ai ollama[95]: time=2025-11-25T06:43:28.200+02:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA1 size="380.0 MiB"
Nov 25 06:43:28 ai ollama[95]: time=2025-11-25T06:43:28.200+02:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="10.0 MiB"
Nov 25 06:43:28 ai ollama[95]: time=2025-11-25T06:43:28.200+02:00 level=INFO source=device.go:272 msg="total memory" size="27.2 GiB"
Nov 25 06:43:28 ai ollama[95]: ggml_backend_cuda_device_get_memory device GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c utilizing NVML memory reporting free: 329252864 total: 12884901888
Nov 25 06:43:28 ai ollama[95]: ggml_backend_cuda_device_get_memory device GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a utilizing NVML memory reporting free: 699006976 total: 12884901888
Nov 25 06:43:28 ai ollama[95]: ggml_backend_cuda_device_get_memory device GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b utilizing NVML memory reporting free: 595329024 total: 12884901888
Nov 25 06:43:28 ai ollama[95]: time=2025-11-25T06:43:28.562+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 35899"
Nov 25 06:43:29 ai ollama[95]: time=2025-11-25T06:43:29.093+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 44821"
Nov 25 06:43:29 ai ollama[95]: time=2025-11-25T06:43:29.637+02:00 level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"
Nov 25 06:43:29 ai ollama[95]: time=2025-11-25T06:43:29.703+02:00 level=INFO source=sched.go:443 msg="system memory" total="70.0 GiB" free="68.8 GiB" free_swap="256.0 MiB"
Nov 25 06:43:29 ai ollama[95]: time=2025-11-25T06:43:29.703+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c library=CUDA available="11.2 GiB" free="11.6 GiB" minimum="457.0 MiB" overhead="0 B"
Nov 25 06:43:29 ai ollama[95]: time=2025-11-25T06:43:29.703+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a library=CUDA available="10.9 GiB" free="11.4 GiB" minimum="457.0 MiB" overhead="0 B"
Nov 25 06:43:29 ai ollama[95]: time=2025-11-25T06:43:29.703+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b library=CUDA available="11.1 GiB" free="11.5 GiB" minimum="457.0 MiB" overhead="0 B"
Nov 25 06:43:29 ai ollama[95]: time=2025-11-25T06:43:29.703+02:00 level=INFO source=server.go:702 msg="loading model" "model layers"=65 requested=-1
Nov 25 06:43:29 ai ollama[95]: time=2025-11-25T06:43:29.704+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:32768 KvCacheType: NumThreads:1 GPULayers:65[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:22(0..21) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:22(22..43) ID:GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Layers:21(44..64)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.205+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:32768 KvCacheType: NumThreads:1 GPULayers:65[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:22(0..21) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:22(22..43) ID:GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Layers:21(44..64)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.434+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:32768 KvCacheType: NumThreads:1 GPULayers:65[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:22(0..21) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:22(22..43) ID:GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Layers:21(44..64)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.434+02:00 level=INFO source=ggml.go:482 msg="offloading 64 repeating layers to GPU"
Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=ggml.go:494 msg="offloaded 65/65 layers to GPU"
Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="6.2 GiB"
Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=device.go:240 msg="model weights" device=CUDA1 size="6.2 GiB"
Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=device.go:240 msg="model weights" device=CUDA2 size="6.0 GiB"
Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="417.3 MiB"
Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="2.8 GiB"
Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA1 size="2.5 GiB"
Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA2 size="2.8 GiB"
Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="380.0 MiB"
Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA1 size="334.0 MiB"
Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA2 size="334.0 MiB"
Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="10.0 MiB"
Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=device.go:272 msg="total memory" size="27.8 GiB"
Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=sched.go:517 msg="loaded runners" count=1
Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
Nov 25 06:43:33 ai ollama[95]: time=2025-11-25T06:43:33.945+02:00 level=INFO source=server.go:1332 msg="llama runner started in 6.27 seconds"
Nov 25 06:44:02 ai ollama[95]: [GIN] 2025/11/25 - 06:44:02 | 200 | 35.066372633s |    192.168.1.16 | POST     "/api/generate"
Nov 25 06:44:02 ai ollama[95]: ggml_backend_cuda_device_get_memory device GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c utilizing NVML memory reporting free: 2378170368 total: 12884901888
Nov 25 06:44:02 ai ollama[95]: ggml_backend_cuda_device_get_memory device GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a utilizing NVML memory reporting free: 2621177856 total: 12884901888
Nov 25 06:44:02 ai ollama[95]: ggml_backend_cuda_device_get_memory device GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b utilizing NVML memory reporting free: 2562457600 total: 12884901888
Nov 25 06:44:02 ai ollama[95]: time=2025-11-25T06:44:02.692+02:00 level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"
Nov 25 06:44:02 ai ollama[95]: time=2025-11-25T06:44:02.720+02:00 level=INFO source=sched.go:583 msg="updated VRAM based on existing loaded models" gpu=GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c library=CUDA total="12.0 GiB" available="2.2 GiB"
Nov 25 06:44:02 ai ollama[95]: time=2025-11-25T06:44:02.720+02:00 level=INFO source=sched.go:583 msg="updated VRAM based on existing loaded models" gpu=GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a library=CUDA total="12.0 GiB" available="2.4 GiB"
Nov 25 06:44:02 ai ollama[95]: time=2025-11-25T06:44:02.720+02:00 level=INFO source=sched.go:583 msg="updated VRAM based on existing loaded models" gpu=GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b library=CUDA total="12.0 GiB" available="2.4 GiB"
Nov 25 06:44:02 ai ollama[95]: time=2025-11-25T06:44:02.813+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-41a5b0c36a28a3a0480ce2e4007d3a21e3298be70e2b9a103960581412997dca --port 34781"
Nov 25 06:44:02 ai ollama[95]: time=2025-11-25T06:44:02.814+02:00 level=INFO source=sched.go:443 msg="system memory" total="70.0 GiB" free="68.4 GiB" free_swap="256.0 MiB"
Nov 25 06:44:02 ai ollama[95]: time=2025-11-25T06:44:02.814+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c library=CUDA available="1.8 GiB" free="2.2 GiB" minimum="457.0 MiB" overhead="0 B"
Nov 25 06:44:02 ai ollama[95]: time=2025-11-25T06:44:02.814+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a library=CUDA available="2.0 GiB" free="2.4 GiB" minimum="457.0 MiB" overhead="0 B"
Nov 25 06:44:02 ai ollama[95]: time=2025-11-25T06:44:02.814+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b library=CUDA available="1.9 GiB" free="2.4 GiB" minimum="457.0 MiB" overhead="0 B"
Nov 25 06:44:02 ai ollama[95]: time=2025-11-25T06:44:02.814+02:00 level=INFO source=server.go:702 msg="loading model" "model layers"=41 requested=-1
Nov 25 06:44:02 ai ollama[95]: time=2025-11-25T06:44:02.827+02:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
Nov 25 06:44:02 ai ollama[95]: time=2025-11-25T06:44:02.827+02:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:34781"
Nov 25 06:44:02 ai ollama[95]: time=2025-11-25T06:44:02.835+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:41[ID:GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Layers:41(0..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 25 06:44:02 ai ollama[95]: time=2025-11-25T06:44:02.889+02:00 level=INFO source=ggml.go:136 msg="" architecture=mistral3 file_type=Q4_K_M name="" description="" num_tensors=585 num_key_values=43
Nov 25 06:44:02 ai ollama[95]: load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
Nov 25 06:44:02 ai ollama[95]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Nov 25 06:44:02 ai ollama[95]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Nov 25 06:44:02 ai ollama[95]: ggml_cuda_init: found 3 CUDA devices:
Nov 25 06:44:02 ai ollama[95]:   Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes, ID: GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c
Nov 25 06:44:02 ai ollama[95]:   Device 1: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes, ID: GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a
Nov 25 06:44:02 ai ollama[95]:   Device 2: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes, ID: GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b
Nov 25 06:44:02 ai ollama[95]: load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
Nov 25 06:44:03 ai ollama[95]: time=2025-11-25T06:44:03.000+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 CUDA.2.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.2.USE_GRAPHS=1 CUDA.2.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Nov 25 06:44:03 ai ollama[95]: time=2025-11-25T06:44:03.961+02:00 level=INFO source=server.go:974 msg="model requires more memory than is currently available, evicting a model to make space" "loaded layers"=9
Nov 25 06:44:03 ai ollama[95]: time=2025-11-25T06:44:03.961+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:false KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 25 06:44:03 ai ollama[95]: time=2025-11-25T06:44:03.961+02:00 level=INFO source=device.go:240 msg="model weights" device=CUDA1 size="13.8 GiB"
Nov 25 06:44:03 ai ollama[95]: time=2025-11-25T06:44:03.961+02:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="360.0 MiB"
Nov 25 06:44:03 ai ollama[95]: time=2025-11-25T06:44:03.961+02:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA1 size="1.2 GiB"
Nov 25 06:44:03 ai ollama[95]: time=2025-11-25T06:44:03.961+02:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA1 size="9.2 GiB"
Nov 25 06:44:03 ai ollama[95]: time=2025-11-25T06:44:03.961+02:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="10.0 MiB"
Nov 25 06:44:03 ai ollama[95]: time=2025-11-25T06:44:03.961+02:00 level=INFO source=device.go:272 msg="total memory" size="24.6 GiB"
Nov 25 06:44:03 ai ollama[95]: ggml_backend_cuda_device_get_memory device GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c utilizing NVML memory reporting free: 2378170368 total: 12884901888
Nov 25 06:44:04 ai ollama[95]: ggml_backend_cuda_device_get_memory device GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a utilizing NVML memory reporting free: 2477391872 total: 12884901888
Nov 25 06:44:04 ai ollama[95]: ggml_backend_cuda_device_get_memory device GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b utilizing NVML memory reporting free: 2562457600 total: 12884901888
Nov 25 06:44:04 ai ollama[95]: time=2025-11-25T06:44:04.322+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 39177"
Nov 25 06:44:04 ai ollama[95]: time=2025-11-25T06:44:04.867+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 34571"
Nov 25 06:44:05 ai ollama[95]: time=2025-11-25T06:44:05.396+02:00 level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"
Nov 25 06:44:05 ai ollama[95]: time=2025-11-25T06:44:05.482+02:00 level=INFO source=sched.go:443 msg="system memory" total="70.0 GiB" free="68.6 GiB" free_swap="256.0 MiB"
Nov 25 06:44:05 ai ollama[95]: time=2025-11-25T06:44:05.482+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c library=CUDA available="11.2 GiB" free="11.6 GiB" minimum="457.0 MiB" overhead="0 B"
Nov 25 06:44:05 ai ollama[95]: time=2025-11-25T06:44:05.482+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a library=CUDA available="10.9 GiB" free="11.4 GiB" minimum="457.0 MiB" overhead="0 B"
Nov 25 06:44:05 ai ollama[95]: time=2025-11-25T06:44:05.482+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b library=CUDA available="11.1 GiB" free="11.5 GiB" minimum="457.0 MiB" overhead="0 B"
Nov 25 06:44:05 ai ollama[95]: time=2025-11-25T06:44:05.482+02:00 level=INFO source=server.go:702 msg="loading model" "model layers"=41 requested=-1
Nov 25 06:44:05 ai ollama[95]: time=2025-11-25T06:44:05.483+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:41[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:22(0..21) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:19(22..40) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 25 06:44:06 ai ollama[95]: time=2025-11-25T06:44:06.470+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:41[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:20(0..19) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:3(20..22) ID:GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Layers:18(23..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 25 06:44:07 ai ollama[95]: time=2025-11-25T06:44:07.060+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:41[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:20(0..19) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:3(20..22) ID:GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Layers:18(23..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 25 06:44:07 ai ollama[95]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9199.70 MiB on device 1: cudaMalloc failed: out of memory
Nov 25 06:44:07 ai ollama[95]: ggml_gallocr_reserve_n: failed to allocate CUDA1 buffer of size 9646586240
Nov 25 06:44:07 ai ollama[95]: time=2025-11-25T06:44:07.656+02:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.10
Nov 25 06:44:07 ai ollama[95]: time=2025-11-25T06:44:07.657+02:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.20
Nov 25 06:44:07 ai ollama[95]: time=2025-11-25T06:44:07.657+02:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.30
Nov 25 06:44:07 ai ollama[95]: time=2025-11-25T06:44:07.657+02:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.40
Nov 25 06:44:07 ai ollama[95]: time=2025-11-25T06:44:07.657+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:40[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:20(0..19) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:20(20..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 25 06:44:08 ai ollama[95]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
Nov 25 06:44:08 ai ollama[95]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
Nov 25 06:44:08 ai ollama[95]: time=2025-11-25T06:44:08.337+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:40[ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:20(0..19) ID:GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Layers:20(20..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 25 06:44:09 ai ollama[95]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 1: cudaMalloc failed: out of memory
Nov 25 06:44:09 ai ollama[95]: ggml_gallocr_reserve_n: failed to allocate CUDA1 buffer of size 9668469760
Nov 25 06:44:09 ai ollama[95]: time=2025-11-25T06:44:09.029+02:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.50
Nov 25 06:44:09 ai ollama[95]: time=2025-11-25T06:44:09.029+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:33[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:17(7..23) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:16(24..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 25 06:44:09 ai ollama[95]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
Nov 25 06:44:09 ai ollama[95]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
Nov 25 06:44:09 ai ollama[95]: time=2025-11-25T06:44:09.623+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:33[ ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:17(7..23) ID:GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Layers:16(24..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 25 06:44:10 ai ollama[95]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 1: cudaMalloc failed: out of memory
Nov 25 06:44:10 ai ollama[95]: ggml_gallocr_reserve_n: failed to allocate CUDA1 buffer of size 9668469760
Nov 25 06:44:10 ai ollama[95]: time=2025-11-25T06:44:10.202+02:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.60
Nov 25 06:44:10 ai ollama[95]: time=2025-11-25T06:44:10.202+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:26[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:13(14..26) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:13(27..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 25 06:44:10 ai ollama[95]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
Nov 25 06:44:10 ai ollama[95]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
Nov 25 06:44:10 ai ollama[95]: time=2025-11-25T06:44:10.776+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:25[ ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:13(15..27) ID:GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Layers:12(28..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 25 06:44:11 ai ollama[95]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 1: cudaMalloc failed: out of memory
Nov 25 06:44:11 ai ollama[95]: ggml_gallocr_reserve_n: failed to allocate CUDA1 buffer of size 9668469760
Nov 25 06:44:11 ai ollama[95]: time=2025-11-25T06:44:11.332+02:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.70
Nov 25 06:44:11 ai ollama[95]: time=2025-11-25T06:44:11.332+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:19[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:10(21..30) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:9(31..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 25 06:44:11 ai ollama[95]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
Nov 25 06:44:11 ai ollama[95]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
Nov 25 06:44:11 ai ollama[95]: time=2025-11-25T06:44:11.897+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:18[ ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:9(22..30) ID:GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Layers:9(31..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 25 06:44:12 ai ollama[95]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 1: cudaMalloc failed: out of memory
Nov 25 06:44:12 ai ollama[95]: ggml_gallocr_reserve_n: failed to allocate CUDA1 buffer of size 9668469760
Nov 25 06:44:12 ai ollama[95]: time=2025-11-25T06:44:12.457+02:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.80
Nov 25 06:44:12 ai ollama[95]: time=2025-11-25T06:44:12.458+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:11[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:6(29..34) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:5(35..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 25 06:44:13 ai ollama[95]: time=2025-11-25T06:44:13.533+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:8[ ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:3(32..34) ID:GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Layers:5(35..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 25 06:44:14 ai ollama[95]: time=2025-11-25T06:44:14.386+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:7[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:5(33..37) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:2(38..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:7[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:5(33..37) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:2(38..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=ggml.go:482 msg="offloading 7 repeating layers to GPU"
Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=ggml.go:494 msg="offloaded 7/41 layers to GPU"
Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="1.6 GiB"
Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=device.go:240 msg="model weights" device=CUDA2 size="678.8 MiB"
Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="11.8 GiB"
Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="160.0 MiB"
Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA2 size="64.0 MiB"
Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=device.go:256 msg="kv cache" device=CPU size="1.0 GiB"
Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="9.3 GiB"
Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA2 size="826.0 MiB"
Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="10.0 MiB"
Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=device.go:272 msg="total memory" size="25.5 GiB"
Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=sched.go:517 msg="loaded runners" count=1
Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.671+02:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
Nov 25 06:44:17 ai ollama[95]: time=2025-11-25T06:44:17.180+02:00 level=INFO source=server.go:1332 msg="llama runner started in 14.37 seconds"
Nov 25 06:44:27 ai ollama[95]: [GIN] 2025/11/25 - 06:44:27 | 200 | 25.576482109s |    192.168.1.16 | POST     "/api/generate"

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.13.0

Layers:65(0..64)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 25 06:43:27 ai ollama[95]: time=2025-11-25T06:43:27.734+02:00 level=INFO source=ggml.go:136 msg="" architecture=qwen3 file_type=Q4_K_M name="Qwen3 32B" description="" num_tensors=707 num_key_values=28 Nov 25 06:43:27 ai ollama[95]: load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so Nov 25 06:43:27 ai ollama[95]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no Nov 25 06:43:27 ai ollama[95]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no Nov 25 06:43:27 ai ollama[95]: ggml_cuda_init: found 3 CUDA devices: Nov 25 06:43:27 ai ollama[95]: Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes, ID: GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Nov 25 06:43:27 ai ollama[95]: Device 1: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes, ID: GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Nov 25 06:43:27 ai ollama[95]: Device 2: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes, ID: GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Nov 25 06:43:27 ai ollama[95]: load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so Nov 25 06:43:27 ai ollama[95]: time=2025-11-25T06:43:27.844+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 CUDA.2.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.2.USE_GRAPHS=1 CUDA.2.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc) Nov 25 06:43:28 ai ollama[95]: time=2025-11-25T06:43:28.200+02:00 level=INFO source=server.go:974 msg="model requires more memory than is currently available, evicting a model to make space" "loaded layers"=0 Nov 25 06:43:28 ai ollama[95]: time=2025-11-25T06:43:28.200+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:false KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 25 06:43:28 ai ollama[95]: time=2025-11-25T06:43:28.200+02:00 level=INFO source=device.go:240 msg="model weights" device=CUDA1 size="18.4 GiB" Nov 25 06:43:28 ai ollama[95]: time=2025-11-25T06:43:28.200+02:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="417.3 MiB" Nov 25 06:43:28 ai ollama[95]: time=2025-11-25T06:43:28.200+02:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA1 size="8.0 GiB" Nov 25 06:43:28 ai ollama[95]: time=2025-11-25T06:43:28.200+02:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA1 size="380.0 MiB" Nov 25 06:43:28 ai ollama[95]: time=2025-11-25T06:43:28.200+02:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="10.0 MiB" Nov 25 06:43:28 ai ollama[95]: time=2025-11-25T06:43:28.200+02:00 level=INFO source=device.go:272 msg="total memory" size="27.2 GiB" Nov 25 06:43:28 ai ollama[95]: ggml_backend_cuda_device_get_memory device GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c utilizing NVML memory reporting free: 329252864 total: 12884901888 Nov 25 06:43:28 ai ollama[95]: ggml_backend_cuda_device_get_memory device GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a utilizing NVML memory reporting free: 699006976 total: 12884901888 Nov 25 06:43:28 ai ollama[95]: ggml_backend_cuda_device_get_memory device 
GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b utilizing NVML memory reporting free: 595329024 total: 12884901888 Nov 25 06:43:28 ai ollama[95]: time=2025-11-25T06:43:28.562+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 35899" Nov 25 06:43:29 ai ollama[95]: time=2025-11-25T06:43:29.093+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 44821" Nov 25 06:43:29 ai ollama[95]: time=2025-11-25T06:43:29.637+02:00 level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax" Nov 25 06:43:29 ai ollama[95]: time=2025-11-25T06:43:29.703+02:00 level=INFO source=sched.go:443 msg="system memory" total="70.0 GiB" free="68.8 GiB" free_swap="256.0 MiB" Nov 25 06:43:29 ai ollama[95]: time=2025-11-25T06:43:29.703+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c library=CUDA available="11.2 GiB" free="11.6 GiB" minimum="457.0 MiB" overhead="0 B" Nov 25 06:43:29 ai ollama[95]: time=2025-11-25T06:43:29.703+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a library=CUDA available="10.9 GiB" free="11.4 GiB" minimum="457.0 MiB" overhead="0 B" Nov 25 06:43:29 ai ollama[95]: time=2025-11-25T06:43:29.703+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b library=CUDA available="11.1 GiB" free="11.5 GiB" minimum="457.0 MiB" overhead="0 B" Nov 25 06:43:29 ai ollama[95]: time=2025-11-25T06:43:29.703+02:00 level=INFO source=server.go:702 msg="loading model" "model layers"=65 requested=-1 Nov 25 06:43:29 ai ollama[95]: time=2025-11-25T06:43:29.704+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:32768 KvCacheType: NumThreads:1 GPULayers:65[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:22(0..21) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:22(22..43) ID:GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Layers:21(44..64)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.205+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:32768 KvCacheType: NumThreads:1 GPULayers:65[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:22(0..21) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:22(22..43) ID:GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Layers:21(44..64)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.434+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:32768 KvCacheType: NumThreads:1 GPULayers:65[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:22(0..21) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:22(22..43) ID:GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Layers:21(44..64)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.434+02:00 level=INFO source=ggml.go:482 msg="offloading 64 repeating layers to GPU" Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU" Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO 
source=ggml.go:494 msg="offloaded 65/65 layers to GPU" Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="6.2 GiB" Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=device.go:240 msg="model weights" device=CUDA1 size="6.2 GiB" Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=device.go:240 msg="model weights" device=CUDA2 size="6.0 GiB" Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="417.3 MiB" Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="2.8 GiB" Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA1 size="2.5 GiB" Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA2 size="2.8 GiB" Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="380.0 MiB" Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA1 size="334.0 MiB" Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA2 size="334.0 MiB" Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="10.0 MiB" Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=device.go:272 msg="total memory" size="27.8 GiB" Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=sched.go:517 msg="loaded runners" count=1 Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding" Nov 25 06:43:30 ai ollama[95]: time=2025-11-25T06:43:30.435+02:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model" Nov 25 06:43:33 ai ollama[95]: time=2025-11-25T06:43:33.945+02:00 level=INFO source=server.go:1332 msg="llama runner started in 6.27 seconds" Nov 25 06:44:02 ai ollama[95]: [GIN] 2025/11/25 - 06:44:02 | 200 | 35.066372633s | 192.168.1.16 | POST "/api/generate" Nov 25 06:44:02 ai ollama[95]: ggml_backend_cuda_device_get_memory device GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c utilizing NVML memory reporting free: 2378170368 total: 12884901888 Nov 25 06:44:02 ai ollama[95]: ggml_backend_cuda_device_get_memory device GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a utilizing NVML memory reporting free: 2621177856 total: 12884901888 Nov 25 06:44:02 ai ollama[95]: ggml_backend_cuda_device_get_memory device GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b utilizing NVML memory reporting free: 2562457600 total: 12884901888 Nov 25 06:44:02 ai ollama[95]: time=2025-11-25T06:44:02.692+02:00 level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax" Nov 25 06:44:02 ai ollama[95]: time=2025-11-25T06:44:02.720+02:00 level=INFO source=sched.go:583 msg="updated VRAM based on existing loaded models" gpu=GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c library=CUDA total="12.0 GiB" available="2.2 GiB" Nov 25 06:44:02 ai ollama[95]: 
time=2025-11-25T06:44:02.720+02:00 level=INFO source=sched.go:583 msg="updated VRAM based on existing loaded models" gpu=GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a library=CUDA total="12.0 GiB" available="2.4 GiB" Nov 25 06:44:02 ai ollama[95]: time=2025-11-25T06:44:02.720+02:00 level=INFO source=sched.go:583 msg="updated VRAM based on existing loaded models" gpu=GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b library=CUDA total="12.0 GiB" available="2.4 GiB" Nov 25 06:44:02 ai ollama[95]: time=2025-11-25T06:44:02.813+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-41a5b0c36a28a3a0480ce2e4007d3a21e3298be70e2b9a103960581412997dca --port 34781" Nov 25 06:44:02 ai ollama[95]: time=2025-11-25T06:44:02.814+02:00 level=INFO source=sched.go:443 msg="system memory" total="70.0 GiB" free="68.4 GiB" free_swap="256.0 MiB" Nov 25 06:44:02 ai ollama[95]: time=2025-11-25T06:44:02.814+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c library=CUDA available="1.8 GiB" free="2.2 GiB" minimum="457.0 MiB" overhead="0 B" Nov 25 06:44:02 ai ollama[95]: time=2025-11-25T06:44:02.814+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a library=CUDA available="2.0 GiB" free="2.4 GiB" minimum="457.0 MiB" overhead="0 B" Nov 25 06:44:02 ai ollama[95]: time=2025-11-25T06:44:02.814+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b library=CUDA available="1.9 GiB" free="2.4 GiB" minimum="457.0 MiB" overhead="0 B" Nov 25 06:44:02 ai ollama[95]: time=2025-11-25T06:44:02.814+02:00 level=INFO source=server.go:702 msg="loading model" "model layers"=41 requested=-1 Nov 25 06:44:02 ai ollama[95]: time=2025-11-25T06:44:02.827+02:00 level=INFO source=runner.go:1398 msg="starting ollama engine" Nov 25 06:44:02 ai ollama[95]: time=2025-11-25T06:44:02.827+02:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:34781" Nov 25 06:44:02 ai ollama[95]: time=2025-11-25T06:44:02.835+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:41[ID:GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Layers:41(0..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 25 06:44:02 ai ollama[95]: time=2025-11-25T06:44:02.889+02:00 level=INFO source=ggml.go:136 msg="" architecture=mistral3 file_type=Q4_K_M name="" description="" num_tensors=585 num_key_values=43 Nov 25 06:44:02 ai ollama[95]: load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so Nov 25 06:44:02 ai ollama[95]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no Nov 25 06:44:02 ai ollama[95]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no Nov 25 06:44:02 ai ollama[95]: ggml_cuda_init: found 3 CUDA devices: Nov 25 06:44:02 ai ollama[95]: Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes, ID: GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Nov 25 06:44:02 ai ollama[95]: Device 1: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes, ID: GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Nov 25 06:44:02 ai ollama[95]: Device 2: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes, ID: GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Nov 25 06:44:02 ai ollama[95]: load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so Nov 25 06:44:03 ai ollama[95]: time=2025-11-25T06:44:03.000+02:00 
level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 CUDA.2.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.2.USE_GRAPHS=1 CUDA.2.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc) Nov 25 06:44:03 ai ollama[95]: time=2025-11-25T06:44:03.961+02:00 level=INFO source=server.go:974 msg="model requires more memory than is currently available, evicting a model to make space" "loaded layers"=9 Nov 25 06:44:03 ai ollama[95]: time=2025-11-25T06:44:03.961+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:false KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 25 06:44:03 ai ollama[95]: time=2025-11-25T06:44:03.961+02:00 level=INFO source=device.go:240 msg="model weights" device=CUDA1 size="13.8 GiB" Nov 25 06:44:03 ai ollama[95]: time=2025-11-25T06:44:03.961+02:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="360.0 MiB" Nov 25 06:44:03 ai ollama[95]: time=2025-11-25T06:44:03.961+02:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA1 size="1.2 GiB" Nov 25 06:44:03 ai ollama[95]: time=2025-11-25T06:44:03.961+02:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA1 size="9.2 GiB" Nov 25 06:44:03 ai ollama[95]: time=2025-11-25T06:44:03.961+02:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="10.0 MiB" Nov 25 06:44:03 ai ollama[95]: time=2025-11-25T06:44:03.961+02:00 level=INFO source=device.go:272 msg="total memory" size="24.6 GiB" Nov 25 06:44:03 ai ollama[95]: ggml_backend_cuda_device_get_memory device GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c utilizing NVML memory reporting free: 2378170368 total: 12884901888 Nov 25 06:44:04 ai ollama[95]: ggml_backend_cuda_device_get_memory device GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a utilizing NVML memory reporting free: 2477391872 total: 12884901888 Nov 25 06:44:04 ai ollama[95]: ggml_backend_cuda_device_get_memory device GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b utilizing NVML memory reporting free: 2562457600 total: 12884901888 Nov 25 06:44:04 ai ollama[95]: time=2025-11-25T06:44:04.322+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 39177" Nov 25 06:44:04 ai ollama[95]: time=2025-11-25T06:44:04.867+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 34571" Nov 25 06:44:05 ai ollama[95]: time=2025-11-25T06:44:05.396+02:00 level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax" Nov 25 06:44:05 ai ollama[95]: time=2025-11-25T06:44:05.482+02:00 level=INFO source=sched.go:443 msg="system memory" total="70.0 GiB" free="68.6 GiB" free_swap="256.0 MiB" Nov 25 06:44:05 ai ollama[95]: time=2025-11-25T06:44:05.482+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c library=CUDA available="11.2 GiB" free="11.6 GiB" minimum="457.0 MiB" overhead="0 B" Nov 25 06:44:05 ai ollama[95]: time=2025-11-25T06:44:05.482+02:00 level=INFO source=sched.go:450 msg="gpu memory" 
id=GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a library=CUDA available="10.9 GiB" free="11.4 GiB" minimum="457.0 MiB" overhead="0 B" Nov 25 06:44:05 ai ollama[95]: time=2025-11-25T06:44:05.482+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b library=CUDA available="11.1 GiB" free="11.5 GiB" minimum="457.0 MiB" overhead="0 B" Nov 25 06:44:05 ai ollama[95]: time=2025-11-25T06:44:05.482+02:00 level=INFO source=server.go:702 msg="loading model" "model layers"=41 requested=-1 Nov 25 06:44:05 ai ollama[95]: time=2025-11-25T06:44:05.483+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:41[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:22(0..21) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:19(22..40) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 25 06:44:06 ai ollama[95]: time=2025-11-25T06:44:06.470+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:41[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:20(0..19) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:3(20..22) ID:GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Layers:18(23..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 25 06:44:07 ai ollama[95]: time=2025-11-25T06:44:07.060+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:41[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:20(0..19) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:3(20..22) ID:GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Layers:18(23..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 25 06:44:07 ai ollama[95]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9199.70 MiB on device 1: cudaMalloc failed: out of memory Nov 25 06:44:07 ai ollama[95]: ggml_gallocr_reserve_n: failed to allocate CUDA1 buffer of size 9646586240 Nov 25 06:44:07 ai ollama[95]: time=2025-11-25T06:44:07.656+02:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.10 Nov 25 06:44:07 ai ollama[95]: time=2025-11-25T06:44:07.657+02:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.20 Nov 25 06:44:07 ai ollama[95]: time=2025-11-25T06:44:07.657+02:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.30 Nov 25 06:44:07 ai ollama[95]: time=2025-11-25T06:44:07.657+02:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.40 Nov 25 06:44:07 ai ollama[95]: time=2025-11-25T06:44:07.657+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:40[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:20(0..19) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:20(20..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 25 06:44:08 ai ollama[95]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory Nov 25 06:44:08 ai ollama[95]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760 Nov 25 06:44:08 ai ollama[95]: 
time=2025-11-25T06:44:08.337+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:40[ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:20(0..19) ID:GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Layers:20(20..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 25 06:44:09 ai ollama[95]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 1: cudaMalloc failed: out of memory Nov 25 06:44:09 ai ollama[95]: ggml_gallocr_reserve_n: failed to allocate CUDA1 buffer of size 9668469760 Nov 25 06:44:09 ai ollama[95]: time=2025-11-25T06:44:09.029+02:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.50 Nov 25 06:44:09 ai ollama[95]: time=2025-11-25T06:44:09.029+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:33[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:17(7..23) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:16(24..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 25 06:44:09 ai ollama[95]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory Nov 25 06:44:09 ai ollama[95]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760 Nov 25 06:44:09 ai ollama[95]: time=2025-11-25T06:44:09.623+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:33[ ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:17(7..23) ID:GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Layers:16(24..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 25 06:44:10 ai ollama[95]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 1: cudaMalloc failed: out of memory Nov 25 06:44:10 ai ollama[95]: ggml_gallocr_reserve_n: failed to allocate CUDA1 buffer of size 9668469760 Nov 25 06:44:10 ai ollama[95]: time=2025-11-25T06:44:10.202+02:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.60 Nov 25 06:44:10 ai ollama[95]: time=2025-11-25T06:44:10.202+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:26[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:13(14..26) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:13(27..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 25 06:44:10 ai ollama[95]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory Nov 25 06:44:10 ai ollama[95]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760 Nov 25 06:44:10 ai ollama[95]: time=2025-11-25T06:44:10.776+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:25[ ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:13(15..27) ID:GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Layers:12(28..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 25 06:44:11 ai ollama[95]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 1: 
cudaMalloc failed: out of memory Nov 25 06:44:11 ai ollama[95]: ggml_gallocr_reserve_n: failed to allocate CUDA1 buffer of size 9668469760 Nov 25 06:44:11 ai ollama[95]: time=2025-11-25T06:44:11.332+02:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.70 Nov 25 06:44:11 ai ollama[95]: time=2025-11-25T06:44:11.332+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:19[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:10(21..30) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:9(31..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 25 06:44:11 ai ollama[95]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory Nov 25 06:44:11 ai ollama[95]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760 Nov 25 06:44:11 ai ollama[95]: time=2025-11-25T06:44:11.897+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:18[ ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:9(22..30) ID:GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Layers:9(31..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 25 06:44:12 ai ollama[95]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 1: cudaMalloc failed: out of memory Nov 25 06:44:12 ai ollama[95]: ggml_gallocr_reserve_n: failed to allocate CUDA1 buffer of size 9668469760 Nov 25 06:44:12 ai ollama[95]: time=2025-11-25T06:44:12.457+02:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.80 Nov 25 06:44:12 ai ollama[95]: time=2025-11-25T06:44:12.458+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:11[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:6(29..34) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:5(35..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 25 06:44:13 ai ollama[95]: time=2025-11-25T06:44:13.533+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:8[ ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:3(32..34) ID:GPU-36a9a6f1-a68e-ae93-d815-6f050b22059a Layers:5(35..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 25 06:44:14 ai ollama[95]: time=2025-11-25T06:44:14.386+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:7[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:5(33..37) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:2(38..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:1 GPULayers:7[ID:GPU-bc9adf67-f08e-231c-0903-6b889fb2e23c Layers:5(33..37) ID:GPU-d5b44455-b37d-3d61-cf22-0ca947bfed7b Layers:2(38..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 25 06:44:15 ai 
ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=ggml.go:482 msg="offloading 7 repeating layers to GPU" Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=ggml.go:486 msg="offloading output layer to CPU" Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=ggml.go:494 msg="offloaded 7/41 layers to GPU" Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="1.6 GiB" Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=device.go:240 msg="model weights" device=CUDA2 size="678.8 MiB" Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="11.8 GiB" Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="160.0 MiB" Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA2 size="64.0 MiB" Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=device.go:256 msg="kv cache" device=CPU size="1.0 GiB" Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="9.3 GiB" Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA2 size="826.0 MiB" Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="10.0 MiB" Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=device.go:272 msg="total memory" size="25.5 GiB" Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=sched.go:517 msg="loaded runners" count=1 Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.665+02:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding" Nov 25 06:44:15 ai ollama[95]: time=2025-11-25T06:44:15.671+02:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model" Nov 25 06:44:17 ai ollama[95]: time=2025-11-25T06:44:17.180+02:00 level=INFO source=server.go:1332 msg="llama runner started in 14.37 seconds" Nov 25 06:44:27 ai ollama[95]: [GIN] 2025/11/25 - 06:44:27 | 200 | 25.576482109s | 192.168.1.16 | POST "/api/generate"
```

### OS

Linux

### GPU

Nvidia

### CPU

Intel

### Ollama version

0.13.0
GiteaMirror added the bug label 2026-04-12 21:31:10 -05:00

@tekuusne commented on GitHub (Nov 25, 2025):

Addendum: removing the "offending" model (Mistral Small 24B in this case) from the list allows my program to run further, until it attempts to load a 36B model and crashes Ollama with a segmentation fault. Probably related?

[ollama-crash.txt](https://github.com/user-attachments/files/23738956/ollama-crash.txt)

@tekuusne commented on GitHub (Nov 25, 2025):

Another addendum: I just learned about OLLAMA_MAX_LOADED_MODELS and I'll try to set it to 1 when I get the chance, but the server is busy right now so it will have to wait. Perhaps Ollama isn't flushing and unloading the models after all, even though I assumed it did? We will see.
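
(For anyone scripting the same kind of model-by-model test, a minimal sketch follows. It is not the author's actual program, and it assumes the documented `/api/tags`, `/api/generate` (with its `keep_alive` option) and `/api/ps` endpoints on a default local install. Sending a request with no prompt and `keep_alive: 0` asks the server to unload that model before the loop moves on.)

```python
# Hypothetical test loop (not the author's script): run one prompt per installed
# model, then explicitly ask the server to unload it before the next model.
import json
import urllib.request

OLLAMA = "http://localhost:11434"  # assumed default endpoint

def api(path, payload=None):
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(OLLAMA + path, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

for model in api("/api/tags")["models"]:
    name = model["name"]
    api("/api/generate", {"model": name, "prompt": "Say hi.", "stream": False})
    # Empty prompt + keep_alive 0 requests an immediate unload of this model.
    api("/api/generate", {"model": name, "keep_alive": 0, "stream": False})
    loaded = [m["name"] for m in api("/api/ps")["models"]]
    print(f"{name}: still loaded afterwards -> {loaded}")
```

(Whether this avoids the partial-offload state is exactly what this issue is about; the sketch only makes the load/unload behaviour observable from the client side.)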

@tekuusne commented on GitHub (Nov 25, 2025):

Update: OLLAMA_MAX_LOADED_MODELS didn't really help; it still crashes. And eventually something weird happens with the memory of the container, where Ollama thinks there isn't enough system RAM despite plenty being available. A reboot of the container fixes this; just restarting Ollama doesn't.

![Image](https://github.com/user-attachments/assets/0894274d-f669-4956-a7f3-c3068dc9aaf2)

@jessegross commented on GitHub (Nov 25, 2025):

For the VRAM usage, what does nvidia-smi show at the time you try to load a model and it only partially fits on the GPU? The logs show that free VRAM goes down as more models get loaded, and even taking that into account, memory allocations fail. Previous models are being evicted as needed to make space.
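
(A small helper one could use to capture that snapshot programmatically while the model is loading; it assumes `nvidia-smi` is on the PATH and only illustrates the comparison being asked for, it is not part of Ollama.)

```python
# Hypothetical snapshot of per-GPU memory use, to compare with the scheduler's
# "gpu memory ... free=" lines in the Ollama log at the same moment.
import subprocess

def gpu_memory_mib():
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=index,memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    rows = [line.split(", ") for line in out.strip().splitlines()]
    return [(int(i), int(used), int(total)) for i, used, total in rows]

for idx, used, total in gpu_memory_mib():
    print(f"GPU{idx}: {used}/{total} MiB used")
```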

The system memory usage is related to the filesystem buffer cache. You can see that the free memory reported is the same as what Ollama shows; the difference between that and the available space is the caches.
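
(A sketch of the free-vs-available distinction being described, assuming a standard Linux `/proc/meminfo`; the mapping to what Ollama logs is the interpretation given above, not something the sketch verifies.)

```python
# Read /proc/meminfo and show MemFree (unused pages) next to MemAvailable
# (unused pages plus memory the kernel estimates it can reclaim from caches).
def meminfo_kib():
    values = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            values[key] = int(rest.split()[0])  # values are reported in kB
    return values

m = meminfo_kib()
for key in ("MemTotal", "MemFree", "MemAvailable", "Cached", "Buffers"):
    print(f"{key:13s} {m[key] / 1024**2:6.1f} GiB")
```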

@tekuusne commented on GitHub (Nov 28, 2025):

Watching nvidia-smi as the third model is loading, I see the amount of loaded memory sort of bounce between the GPUs until it settles. Hopefully the video comes through.

Is the filesystem buffer cache a ZFS thing? I can ditch ZFS if it will help.

https://github.com/user-attachments/assets/99de1f48-f6e4-4c9d-8087-6ea947751ccf

@jessegross commented on GitHub (Dec 2, 2025):

Yes, the buffer cache usage is likely related to ZFS; people have reported similar issues on ZFS before.
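
(If the pool is OpenZFS, its ARC is a cache kept separately from the regular page cache; a hedged sketch for checking its current size, assuming the usual `/proc/spl/kstat/zfs/arcstats` kstat file is present on the host.)

```python
# Report the OpenZFS ARC size, if the arcstats kstat file exists on this host.
def zfs_arc_size_bytes(path="/proc/spl/kstat/zfs/arcstats"):
    try:
        with open(path) as f:
            for line in f:
                fields = line.split()
                if len(fields) >= 3 and fields[0] == "size":
                    return int(fields[2])  # columns: name, type, data
    except FileNotFoundError:
        pass
    return None

size = zfs_arc_size_bytes()
print("ZFS ARC size:", f"{size / 1024**3:.1f} GiB" if size else "not available")
```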

@tekuusne commented on GitHub (Feb 22, 2026):

After a server upgrade, I can report that on 0.16.3 (and perhaps even earlier versions) the VRAM problem isn't there anymore. The models seem to load and unload just fine.

The system memory/buffer cache issue, however, remains even without ZFS. Maybe it's best to use Ollama only with models that fit entirely in VRAM, as I don't see a way to change how it checks for available memory.
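
(Along those lines, a sketch of restricting a test run to models whose on-disk size fits in total VRAM, using the same assumed `/api/tags` endpoint and `nvidia-smi` as above; the 20% headroom for KV cache and compute buffers is an arbitrary guess, not a rule.)

```python
# Skip models whose blob size already exceeds ~80% of total VRAM, on the theory
# that weights plus KV cache plus compute buffers will not fit fully on the GPUs.
import json
import subprocess
import urllib.request

def total_vram_bytes():
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.total", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return sum(int(mib) for mib in out.split()) * 1024 * 1024

with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    models = json.load(resp)["models"]

budget = total_vram_bytes() * 0.8
for m in models:
    verdict = "test" if m["size"] <= budget else "skip"
    print(f"{verdict}: {m['name']} ({m['size'] / 1024**3:.1f} GiB)")
```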

Reference: github-starred/ollama#8751