[GH-ISSUE #13303] failed to commit memory for model (ministral-3:14b) #70847

Closed
opened 2026-05-04 23:11:31 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @aole on GitHub (Dec 2, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13303

What is the issue?

I have the following configuration on a Windows 11 PC:

  1. RTX 4060 Ti 16GB
  2. RTX 2080 Super 8 GB
  3. RAM 32 GB

PS C:\Users\bhupe> ollama run ministral-3:14b
pulling manifest
pulling 9026d5ef829c: 100% ▕██████████████████████████████████████████████████████████▏ 9.1 GB
pulling 6db27cd4e277: 100% ▕██████████████████████████████████████████████████████████▏ 695 B
pulling dadd338c55cb: 100% ▕██████████████████████████████████████████████████████████▏ 2.4 KB
pulling e0daf17ff83e: 100% ▕██████████████████████████████████████████████████████████▏ 21 B
pulling 213955d84df2: 100% ▕██████████████████████████████████████████████████████████▏ 515 B
verifying sha256 digest
writing manifest
success
Error: 500 Internal Server Error: failed to commit memory for model
PS C:\Users\bhupe> ollama --version
ollama version is 0.13.1
PS C:\Users\bhupe>
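
The log below shows the scheduler repeatedly trying to reserve a ~9.2 GiB compute-graph buffer on a single GPU, which exceeds the 2080 Super's 8 GB and nearly fills the 4060 Ti. As a hedged sketch (not a confirmed fix), the environment variables already visible in the server-config line of the log could be used to steer the scheduler; each would need to be set in the environment of the Ollama server process before retrying:

```powershell
# Possible workarounds (assumptions, not a confirmed fix). Set one of
# these for the Ollama server process, restart it, then retry the run.

# 1. Restrict Ollama to the 16 GB RTX 4060 Ti only, so the layout is not
#    split across the smaller 8 GB card:
$env:CUDA_VISIBLE_DEVICES = "0"

# 2. Or ask the scheduler to spread layers evenly across both GPUs:
$env:OLLAMA_SCHED_SPREAD = "1"

# 3. Or shrink the KV cache / compute graph by lowering the context
#    length from its default of 4096:
$env:OLLAMA_CONTEXT_LENGTH = "2048"

ollama run ministral-3:14b
```

Whether any of these avoids the oversized single-device graph allocation is untested here; they only change how the existing scheduler places layers, not the graph-size estimate itself.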

Relevant log output

time=2025-12-02T17:06:28.427-05:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\bhupe\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES:]"
time=2025-12-02T17:06:28.438-05:00 level=INFO source=images.go:522 msg="total blobs: 22"
time=2025-12-02T17:06:28.441-05:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-12-02T17:06:28.451-05:00 level=INFO source=routes.go:1597 msg="Listening on 127.0.0.1:11434 (version 0.13.1)"
time=2025-12-02T17:06:28.451-05:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2025-12-02T17:06:28.476-05:00 level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2025-12-02T17:06:28.492-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\bhupe\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10224"
time=2025-12-02T17:06:28.994-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\bhupe\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10232"
time=2025-12-02T17:06:29.436-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\bhupe\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10239"
time=2025-12-02T17:06:29.641-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\bhupe\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 3217"
time=2025-12-02T17:06:29.642-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\bhupe\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 3218"
time=2025-12-02T17:06:29.642-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\bhupe\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 3219"
time=2025-12-02T17:06:29.642-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\bhupe\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 3220"
time=2025-12-02T17:06:30.399-05:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-f8657c39-1806-f26f-e294-a51dcd5da96b filter_id="" library=CUDA compute=8.9 name=CUDA0 description="NVIDIA GeForce RTX 4060 Ti" libdirs=ollama,cuda_v13 driver=13.0 pci_id=0000:01:00.0 type=discrete total="16.0 GiB" available="15.7 GiB"
time=2025-12-02T17:06:30.400-05:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-1a96ab85-adf6-988f-3fed-dc0004723a16 filter_id="" library=CUDA compute=7.5 name=CUDA1 description="NVIDIA GeForce RTX 2080 SUPER" libdirs=ollama,cuda_v13 driver=13.0 pci_id=0000:02:00.0 type=discrete total="8.0 GiB" available="6.9 GiB"
[GIN] 2025/12/02 - 17:06:30 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/12/02 - 17:06:30 | 200 |      7.4301ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/12/02 - 17:06:39 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/12/02 - 17:06:39 | 200 |     76.7538ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/12/02 - 17:06:39 | 200 |     74.9959ms |       127.0.0.1 | POST     "/api/show"
time=2025-12-02T17:06:39.898-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\bhupe\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 3251"
time=2025-12-02T17:06:40.330-05:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2025-12-02T17:06:40.330-05:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=8 efficiency=0 threads=16
time=2025-12-02T17:06:40.441-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\bhupe\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\bhupe\\.ollama\\models\\blobs\\sha256-9026d5ef829c7a9259de75070282233aa1d96e27b29553a89b35ef34485403f5 --port 3257"
time=2025-12-02T17:06:40.450-05:00 level=INFO source=sched.go:443 msg="system memory" total="31.9 GiB" free="21.9 GiB" free_swap="30.4 GiB"
time=2025-12-02T17:06:40.450-05:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-f8657c39-1806-f26f-e294-a51dcd5da96b library=CUDA available="15.3 GiB" free="15.7 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-12-02T17:06:40.450-05:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-1a96ab85-adf6-988f-3fed-dc0004723a16 library=CUDA available="6.5 GiB" free="6.9 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-12-02T17:06:40.450-05:00 level=INFO source=server.go:702 msg="loading model" "model layers"=41 requested=-1
time=2025-12-02T17:06:40.493-05:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-12-02T17:06:40.496-05:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:3257"
time=2025-12-02T17:06:40.503-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:41[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:41(0..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:06:40.543-05:00 level=INFO source=ggml.go:136 msg="" architecture=mistral3 file_type=Q4_K_M name="" description="" num_tensors=585 num_key_values=45
load_backend: loaded CPU backend from C:\Users\bhupe\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4060 Ti, compute capability 8.9, VMM: yes, ID: GPU-f8657c39-1806-f26f-e294-a51dcd5da96b
  Device 1: NVIDIA GeForce RTX 2080 SUPER, compute capability 7.5, VMM: yes, ID: GPU-1a96ab85-adf6-988f-3fed-dc0004723a16
load_backend: loaded CUDA backend from C:\Users\bhupe\AppData\Local\Programs\Ollama\lib\ollama\cuda_v13\ggml-cuda.dll
time=2025-12-02T17:06:40.743-05:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-12-02T17:06:41.561-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:41[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:23(0..22) ID:GPU-1a96ab85-adf6-988f-3fed-dc0004723a16 Layers:18(23..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:06:42.037-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:41[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:23(0..22) ID:GPU-1a96ab85-adf6-988f-3fed-dc0004723a16 Layers:18(23..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9199.70 MiB on device 1: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA1 buffer of size 9646586240
time=2025-12-02T17:06:43.282-05:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.10
time=2025-12-02T17:06:43.282-05:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.20
time=2025-12-02T17:06:43.282-05:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.30
time=2025-12-02T17:06:43.282-05:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.40
time=2025-12-02T17:06:43.282-05:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.50
time=2025-12-02T17:06:43.282-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:40[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:40(0..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
time=2025-12-02T17:06:44.373-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:17[ ID:GPU-1a96ab85-adf6-988f-3fed-dc0004723a16 Layers:17(23..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 1: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA1 buffer of size 9668469760
time=2025-12-02T17:06:45.355-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:39[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:39(1..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
time=2025-12-02T17:06:46.236-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:38[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:38(2..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
time=2025-12-02T17:06:47.184-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:37[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:37(3..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
time=2025-12-02T17:06:48.090-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:36[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:36(4..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 8.00 MiB on device 0: cudaMalloc failed: out of memory
time=2025-12-02T17:06:48.804-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:35[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:35(5..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 8.00 MiB on device 0: cudaMalloc failed: out of memory
time=2025-12-02T17:06:49.497-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:34[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:34(6..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 8.00 MiB on device 0: cudaMalloc failed: out of memory
time=2025-12-02T17:06:50.233-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:33[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:33(7..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:06:50.976-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:32[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:32(8..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:06:51.619-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:31[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:31(9..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:06:52.359-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:30[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:30(10..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:06:53.110-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:29[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:29(11..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:06:53.866-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:28[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:28(12..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:06:54.617-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:27[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:27(13..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:06:55.355-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:26[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:26(14..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:06:56.109-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:25[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:25(15..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:06:56.841-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:24[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:24(16..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:06:57.594-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:23[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:23(17..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:06:58.348-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:22[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:22(18..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:06:59.090-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:21[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:21(19..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:06:59.849-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:20[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:20(20..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:07:00.593-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:19[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:19(21..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:07:01.341-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:18[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:18(22..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:07:02.100-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:17[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:17(23..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:07:02.844-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:17[ ID:GPU-1a96ab85-adf6-988f-3fed-dc0004723a16 Layers:17(23..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 1: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA1 buffer of size 9668469760
time=2025-12-02T17:07:03.965-05:00 level=WARN source=server.go:839 msg="failed to commit memory for model" memory.InputWeights=377487360 memory.CPU.Weights="[194068480 194068480 194068480 194068480 194068480 172441600 172441600 194068480 171089920 171089920 194068480 171089920 171089920 194068480 171089920 171089920 194068480 171089920 171089920 194068480 171089920 171089920 194068480 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1396150272]" memory.CUDA1.ID=GPU-1a96ab85-adf6-988f-3fed-dc0004723a16 memory.CUDA1.Weights="[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 171089920 171089920 194068480 171089920 171089920 194068480 171089920 171089920 194068480 171089920 171089920 194068480 192716800 192716800 194068480 192716800 192716800 0]" memory.CUDA1.Graph=9668469760
time=2025-12-02T17:07:03.965-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:false KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:07:03.965-05:00 level=INFO source=device.go:240 msg="model weights" device=CUDA1 size="2.9 GiB"
time=2025-12-02T17:07:03.965-05:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="5.6 GiB"
time=2025-12-02T17:07:03.965-05:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA1 size="9.0 GiB"
time=2025-12-02T17:07:03.965-05:00 level=INFO source=device.go:272 msg="total memory" size="17.5 GiB"
time=2025-12-02T17:07:03.965-05:00 level=INFO source=sched.go:470 msg="Load failed" model=C:\Users\bhupe\.ollama\models\blobs\sha256-9026d5ef829c7a9259de75070282233aa1d96e27b29553a89b35ef34485403f5 error="failed to commit memory for model"
time=2025-12-02T17:07:04.098-05:00 level=ERROR source=server.go:265 msg="llama runner terminated" error="exit status 1"
[GIN] 2025/12/02 - 17:07:04 | 500 |   24.3437775s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/12/02 - 17:08:14 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/12/02 - 17:08:14 | 200 |     63.4564ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/12/02 - 17:08:14 | 200 |      3.6238ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/12/02 - 17:08:15 | 200 |    938.1251ms |       127.0.0.1 | DELETE   "/api/delete"
[GIN] 2025/12/02 - 17:08:22 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/12/02 - 17:08:22 | 404 |      3.6157ms |       127.0.0.1 | POST     "/api/show"
time=2025-12-02T17:08:22.811-05:00 level=INFO source=download.go:177 msg="downloading 9026d5ef829c in 16 567 MB part(s)"
time=2025-12-02T17:13:14.091-05:00 level=INFO source=download.go:177 msg="downloading dadd338c55cb in 1 2.4 KB part(s)"
time=2025-12-02T17:13:15.347-05:00 level=INFO source=download.go:177 msg="downloading e0daf17ff83e in 1 21 B part(s)"
time=2025-12-02T17:13:16.561-05:00 level=INFO source=download.go:177 msg="downloading 213955d84df2 in 1 515 B part(s)"
[GIN] 2025/12/02 - 17:13:45 | 200 |         5m23s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2025/12/02 - 17:13:45 | 200 |     81.6246ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/12/02 - 17:13:45 | 200 |     83.0003ms |       127.0.0.1 | POST     "/api/show"
time=2025-12-02T17:13:45.806-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\bhupe\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 5414"
time=2025-12-02T17:13:46.282-05:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2025-12-02T17:13:46.283-05:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=8 efficiency=0 threads=16
time=2025-12-02T17:13:46.411-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\bhupe\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\bhupe\\.ollama\\models\\blobs\\sha256-9026d5ef829c7a9259de75070282233aa1d96e27b29553a89b35ef34485403f5 --port 5421"
time=2025-12-02T17:13:46.419-05:00 level=INFO source=sched.go:443 msg="system memory" total="31.9 GiB" free="22.0 GiB" free_swap="30.5 GiB"
time=2025-12-02T17:13:46.419-05:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-f8657c39-1806-f26f-e294-a51dcd5da96b library=CUDA available="15.3 GiB" free="15.7 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-12-02T17:13:46.419-05:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-1a96ab85-adf6-988f-3fed-dc0004723a16 library=CUDA available="6.6 GiB" free="7.0 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-12-02T17:13:46.419-05:00 level=INFO source=server.go:702 msg="loading model" "model layers"=41 requested=-1
time=2025-12-02T17:13:46.466-05:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-12-02T17:13:46.469-05:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:5421"
time=2025-12-02T17:13:46.473-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:41[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:41(0..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:13:46.515-05:00 level=INFO source=ggml.go:136 msg="" architecture=mistral3 file_type=Q4_K_M name="" description="" num_tensors=585 num_key_values=45
load_backend: loaded CPU backend from C:\Users\bhupe\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4060 Ti, compute capability 8.9, VMM: yes, ID: GPU-f8657c39-1806-f26f-e294-a51dcd5da96b
  Device 1: NVIDIA GeForce RTX 2080 SUPER, compute capability 7.5, VMM: yes, ID: GPU-1a96ab85-adf6-988f-3fed-dc0004723a16
load_backend: loaded CUDA backend from C:\Users\bhupe\AppData\Local\Programs\Ollama\lib\ollama\cuda_v13\ggml-cuda.dll
time=2025-12-02T17:13:46.749-05:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-12-02T17:13:47.606-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:41[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:23(0..22) ID:GPU-1a96ab85-adf6-988f-3fed-dc0004723a16 Layers:18(23..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:13:48.119-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:41[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:23(0..22) ID:GPU-1a96ab85-adf6-988f-3fed-dc0004723a16 Layers:18(23..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9199.70 MiB on device 1: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA1 buffer of size 9646586240
time=2025-12-02T17:13:49.322-05:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.10
time=2025-12-02T17:13:49.322-05:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.20
time=2025-12-02T17:13:49.323-05:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.30
time=2025-12-02T17:13:49.323-05:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.40
time=2025-12-02T17:13:49.323-05:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.50
time=2025-12-02T17:13:49.323-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:40[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:40(0..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
time=2025-12-02T17:13:50.457-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:17[ ID:GPU-1a96ab85-adf6-988f-3fed-dc0004723a16 Layers:17(23..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 1: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA1 buffer of size 9668469760
time=2025-12-02T17:13:51.370-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:39[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:39(1..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
time=2025-12-02T17:13:52.267-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:38[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:38(2..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
time=2025-12-02T17:13:53.174-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:37[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:37(3..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
time=2025-12-02T17:13:54.091-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:36[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:36(4..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 8.00 MiB on device 0: cudaMalloc failed: out of memory
time=2025-12-02T17:13:54.797-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:35[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:35(5..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 8.00 MiB on device 0: cudaMalloc failed: out of memory
time=2025-12-02T17:13:55.518-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:34[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:34(6..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 8.00 MiB on device 0: cudaMalloc failed: out of memory
time=2025-12-02T17:13:56.276-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:33[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:33(7..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:13:57.009-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:32[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:32(8..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:13:57.648-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:31[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:31(9..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:13:58.393-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:30[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:30(10..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:13:59.149-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:29[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:29(11..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:13:59.917-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:28[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:28(12..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:14:00.720-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:27[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:27(13..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:14:01.500-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:26[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:26(14..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:14:02.284-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:25[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:25(15..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:14:03.101-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:24[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:24(16..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:14:03.897-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:23[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:23(17..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:14:04.678-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:22[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:22(18..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:14:05.552-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:21[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:21(19..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:14:06.446-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:20[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:20(20..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:14:07.296-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:19[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:19(21..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:14:08.061-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:18[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:18(22..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:14:08.936-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:17[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:17(23..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:14:09.826-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:17[ ID:GPU-1a96ab85-adf6-988f-3fed-dc0004723a16 Layers:17(23..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 1: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA1 buffer of size 9668469760
time=2025-12-02T17:14:11.130-05:00 level=WARN source=server.go:839 msg="failed to commit memory for model" memory.InputWeights=377487360 memory.CPU.Weights="[194068480 194068480 194068480 194068480 194068480 172441600 172441600 194068480 171089920 171089920 194068480 171089920 171089920 194068480 171089920 171089920 194068480 171089920 171089920 194068480 171089920 171089920 194068480 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1396150272]" memory.CUDA1.ID=GPU-1a96ab85-adf6-988f-3fed-dc0004723a16 memory.CUDA1.Weights="[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 171089920 171089920 194068480 171089920 171089920 194068480 171089920 171089920 194068480 171089920 171089920 194068480 192716800 192716800 194068480 192716800 192716800 0]" memory.CUDA1.Graph=9668469760
time=2025-12-02T17:14:11.136-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:false KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-02T17:14:11.136-05:00 level=INFO source=device.go:240 msg="model weights" device=CUDA1 size="2.9 GiB"
time=2025-12-02T17:14:11.136-05:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="5.6 GiB"
time=2025-12-02T17:14:11.136-05:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA1 size="9.0 GiB"
time=2025-12-02T17:14:11.136-05:00 level=INFO source=device.go:272 msg="total memory" size="17.5 GiB"
time=2025-12-02T17:14:11.136-05:00 level=INFO source=sched.go:470 msg="Load failed" model=C:\Users\bhupe\.ollama\models\blobs\sha256-9026d5ef829c7a9259de75070282233aa1d96e27b29553a89b35ef34485403f5 error="failed to commit memory for model"
time=2025-12-02T17:14:11.289-05:00 level=ERROR source=server.go:265 msg="llama runner terminated" error="exit status 1"
[GIN] 2025/12/02 - 17:14:11 | 500 |   25.6453802s |       127.0.0.1 | POST     "/api/generate"

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.13.1
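
Not part of the original report, but a possible workaround sketch based on the log above: the scheduler keeps assigning the ~9.0 GiB compute graph to the 8 GB RTX 2080 SUPER (CUDA1), which can never satisfy the allocation. Restricting Ollama to the 16 GB card, or enabling flash attention to shrink the graph, may avoid the failure. Both environment variables appear in the server-config line of this log, and the GPU UUID is copied from the log; this is untested on the reporter's machine, and the variables must be set in the environment of the Ollama server process (then the server restarted), not just the client shell.

```shell
# Hypothetical workaround: hide the 8 GB RTX 2080 SUPER from Ollama by
# pinning CUDA to the 16 GB RTX 4060 Ti, using the UUID from the log,
# so the ~9 GiB compute graph is never scheduled on the smaller card.
export CUDA_VISIBLE_DEVICES="GPU-f8657c39-1806-f26f-e294-a51dcd5da96b"

# Alternatively (or additionally), flash attention typically reduces
# compute-graph memory; the log shows OLLAMA_FLASH_ATTENTION:false.
export OLLAMA_FLASH_ATTENTION=1

# Restart the Ollama server so it picks up the new environment, then:
ollama run ministral-3:14b
```

On Windows, the equivalent is setting these as user/system environment variables (or `$env:CUDA_VISIBLE_DEVICES = "..."` in PowerShell) before restarting the Ollama service.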

source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.50 time=2025-12-02T17:13:49.323-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:40[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:40(0..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760 time=2025-12-02T17:13:50.457-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:17[ ID:GPU-1a96ab85-adf6-988f-3fed-dc0004723a16 Layers:17(23..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 1: cudaMalloc failed: out of memory ggml_gallocr_reserve_n: failed to allocate CUDA1 buffer of size 9668469760 time=2025-12-02T17:13:51.370-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:39[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:39(1..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760 time=2025-12-02T17:13:52.267-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:38[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:38(2..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" 
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760 time=2025-12-02T17:13:53.174-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:37[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:37(3..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760 time=2025-12-02T17:13:54.091-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:36[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:36(4..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" ggml_backend_cuda_buffer_type_alloc_buffer: allocating 8.00 MiB on device 0: cudaMalloc failed: out of memory time=2025-12-02T17:13:54.797-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:35[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:35(5..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" ggml_backend_cuda_buffer_type_alloc_buffer: allocating 8.00 MiB on device 0: cudaMalloc failed: out of memory time=2025-12-02T17:13:55.518-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:34[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:34(6..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" ggml_backend_cuda_buffer_type_alloc_buffer: allocating 
8.00 MiB on device 0: cudaMalloc failed: out of memory time=2025-12-02T17:13:56.276-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:33[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:33(7..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-02T17:13:57.009-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:32[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:32(8..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-02T17:13:57.648-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:31[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:31(9..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-02T17:13:58.393-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:30[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:30(10..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-02T17:13:59.149-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:29[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:29(11..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-02T17:13:59.917-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:28[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b 
Layers:28(12..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-02T17:14:00.720-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:27[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:27(13..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-02T17:14:01.500-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:26[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:26(14..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-02T17:14:02.284-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:25[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:25(15..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-02T17:14:03.101-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:24[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:24(16..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-02T17:14:03.897-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:23[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:23(17..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-02T17:14:04.678-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 
GPULayers:22[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:22(18..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-02T17:14:05.552-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:21[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:21(19..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-02T17:14:06.446-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:20[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:20(20..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-02T17:14:07.296-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:19[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:19(21..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-02T17:14:08.061-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:18[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:18(22..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-02T17:14:08.936-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:17[ID:GPU-f8657c39-1806-f26f-e294-a51dcd5da96b Layers:17(23..39) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-02T17:14:09.826-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false 
KvSize:4096 KvCacheType: NumThreads:8 GPULayers:17[ ID:GPU-1a96ab85-adf6-988f-3fed-dc0004723a16 Layers:17(23..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 1: cudaMalloc failed: out of memory ggml_gallocr_reserve_n: failed to allocate CUDA1 buffer of size 9668469760 time=2025-12-02T17:14:11.130-05:00 level=WARN source=server.go:839 msg="failed to commit memory for model" memory.InputWeights=377487360 memory.CPU.Weights="[194068480 194068480 194068480 194068480 194068480 172441600 172441600 194068480 171089920 171089920 194068480 171089920 171089920 194068480 171089920 171089920 194068480 171089920 171089920 194068480 171089920 171089920 194068480 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1396150272]" memory.CUDA1.ID=GPU-1a96ab85-adf6-988f-3fed-dc0004723a16 memory.CUDA1.Weights="[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 171089920 171089920 194068480 171089920 171089920 194068480 171089920 171089920 194068480 171089920 171089920 194068480 192716800 192716800 194068480 192716800 192716800 0]" memory.CUDA1.Graph=9668469760 time=2025-12-02T17:14:11.136-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:false KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-02T17:14:11.136-05:00 level=INFO source=device.go:240 msg="model weights" device=CUDA1 size="2.9 GiB" time=2025-12-02T17:14:11.136-05:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="5.6 GiB" time=2025-12-02T17:14:11.136-05:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA1 size="9.0 GiB" time=2025-12-02T17:14:11.136-05:00 level=INFO source=device.go:272 msg="total memory" size="17.5 GiB" time=2025-12-02T17:14:11.136-05:00 level=INFO source=sched.go:470 msg="Load failed" 
model=C:\Users\bhupe\.ollama\models\blobs\sha256-9026d5ef829c7a9259de75070282233aa1d96e27b29553a89b35ef34485403f5 error="failed to commit memory for model" time=2025-12-02T17:14:11.289-05:00 level=ERROR source=server.go:265 msg="llama runner terminated" error="exit status 1" [GIN] 2025/12/02 - 17:14:11 | 500 | 25.6453802s | 127.0.0.1 | POST "/api/generate" ``` ### OS Windows ### GPU Nvidia ### CPU Intel ### Ollama version 0.13.1
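The log shows the scheduler failing because it tries to place the entire ~9 GiB compute graph on a single GPU, which neither card can hold alongside the weights. A possible interim workaround (untested on this exact setup, and the layer count below is a guess, not a verified number) is to cap GPU offload yourself with the documented `num_gpu` parameter via a custom Modelfile:

```
# Hypothetical workaround: limit offloaded layers so the scheduler
# never attempts the full-graph allocation that fails above.
FROM ministral-3:14b
PARAMETER num_gpu 20
```

Then build and run the variant with `ollama create ministral-3-limited -f Modelfile` followed by `ollama run ministral-3-limited`. Lowering `num_gpu` trades speed for headroom, so some experimentation with the value would be needed.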
GiteaMirror added the bug label 2026-05-04 23:11:31 -05:00
@mgielissen commented on GitHub (Dec 3, 2025):

Also happens on Linux. macOS works.

<!-- gh-comment-id:3605478883 -->
@zeittresor commented on GitHub (Dec 4, 2025):

Hi there, I don't know what the reason is, but my similar setup works with it.

I have the following configuration on a Win 10 (ESU) PC:

RTX 4060 Ti 16GB
TESLA K80 24GB
RAM 20 GB (DDR3)
i5 K3770 (3rd gen)

Might it be due to the latest version of Ollama?

<!-- gh-comment-id:3611494166 -->
@aole commented on GitHub (Dec 9, 2025):

v0.13.2 resolves the issue for me.

<!-- gh-comment-id:3630125826 -->
Reference: github-starred/ollama#70847