[GH-ISSUE #13283] Memory layout cannot be allocated #8778

Open
opened 2026-04-12 21:32:35 -05:00 by GiteaMirror · 4 comments

Originally created by @zhaoyuxin2 on GitHub (Dec 1, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13283

Originally assigned to: @jessegross on GitHub.

What is the issue?

We hope to deploy the qwen3-vl:235b model locally, but we are encountering the error "Memory layout cannot be allocated". Could you please advise on how to solve this problem? (When we deployed qwen2.5-vl:72b, it worked normally and the computer's memory was sufficient.)

Relevant log output


OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.13.0

GiteaMirror added the bug label 2026-04-12 21:32:35 -05:00

@rick-github commented on GitHub (Dec 1, 2025):

[Server log](https://docs.ollama.com/troubleshooting) will help in debugging.

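On Windows the server log typically lives under %LOCALAPPDATA%\Ollama; a minimal sketch for finding it, assuming the default install location (see the troubleshooting docs linked above for other platforms):

:: From cmd, open the log directory:
explorer %LOCALAPPDATA%\Ollama

# Or from PowerShell, tail the most recent entries:
Get-Content "$env:LOCALAPPDATA\Ollama\server.log" -Tail 100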
Author
Owner

@zhaoyuxin2 commented on GitHub (Dec 1, 2025):

Thanks a lot, server log:
time=2025-12-01T15:08:08.335+08:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\Users\RichAI\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --port 50061"
time=2025-12-01T15:08:08.517+08:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2025-12-01T15:08:08.517+08:00 level=INFO source=cpu_windows.go:164 msg="efficiency cores detected" maxEfficiencyClass=1
time=2025-12-01T15:08:08.517+08:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=24 efficiency=16 threads=32
time=2025-12-01T15:08:08.576+08:00 level=INFO source=server.go:209 msg="enabling flash attention"
time=2025-12-01T15:08:08.577+08:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\Users\RichAI\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --model E:\ollama\models\blobs\sha256-02e588929c87e95d29571faea0693185503bea6f06ac3ea516092b037e149c8a --port 50066"
time=2025-12-01T15:08:08.579+08:00 level=INFO source=sched.go:443 msg="system memory" total="95.7 GiB" free="78.6 GiB" free_swap="83.6 GiB"
time=2025-12-01T15:08:08.579+08:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-83f25cda-96d7-f277-7aa9-9028e8ac822b library=CUDA available="22.7 GiB" free="23.2 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-12-01T15:08:08.580+08:00 level=INFO source=server.go:702 msg="loading model" "model layers"=95 requested=-1
time=2025-12-01T15:08:08.607+08:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-12-01T15:08:08.611+08:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:50066"
time=2025-12-01T15:08:08.613+08:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:8 GPULayers:95[ID:GPU-83f25cda-96d7-f277-7aa9-9028e8ac822b Layers:95(0..94)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-01T15:08:08.636+08:00 level=INFO source=ggml.go:136 msg="" architecture=qwen3vlmoe file_type=Q4_K_M name="" description="" num_tensors=1590 num_key_values=43
load_backend: loaded CPU backend from C:\Users\RichAI\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 5090 Laptop GPU, compute capability 12.0, VMM: yes, ID: GPU-83f25cda-96d7-f277-7aa9-9028e8ac822b
load_backend: loaded CUDA backend from C:\Users\RichAI\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-12-01T15:08:08.733+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-12-01T15:08:09.672+08:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:8 GPULayers:12[ID:GPU-83f25cda-96d7-f277-7aa9-9028e8ac822b Layers:12(82..93)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-01T15:08:09.964+08:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:8 GPULayers:12[ID:GPU-83f25cda-96d7-f277-7aa9-9028e8ac822b Layers:12(82..93)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cpu_buffer_type_alloc_buffer: failed to allocate buffer of size 123954089920
alloc_tensor_range: failed to allocate CPU buffer of size 123954089920
time=2025-12-01T15:08:09.991+08:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:8 GPULayers:14[ID:GPU-83f25cda-96d7-f277-7aa9-9028e8ac822b Layers:14(80..93)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 21086.53 MiB on device 0: cudaMalloc failed: out of memory
alloc_tensor_range: failed to allocate CUDA0 buffer of size 22110828544
time=2025-12-01T15:08:10.653+08:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:8 GPULayers:13[ID:GPU-83f25cda-96d7-f277-7aa9-9028e8ac822b Layers:13(81..93)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cpu_buffer_type_alloc_buffer: failed to allocate buffer of size 122552896448
alloc_tensor_range: failed to allocate CPU buffer of size 122552896448
time=2025-12-01T15:08:11.057+08:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:8 GPULayers:12[ID:GPU-83f25cda-96d7-f277-7aa9-9028e8ac822b Layers:12(82..93)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cpu_buffer_type_alloc_buffer: failed to allocate buffer of size 123954089920
alloc_tensor_range: failed to allocate CPU buffer of size 123954089920
time=2025-12-01T15:08:11.092+08:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.10
time=2025-12-01T15:08:11.092+08:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.20
time=2025-12-01T15:08:11.092+08:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.30
time=2025-12-01T15:08:11.092+08:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:8 GPULayers:10[ID:GPU-83f25cda-96d7-f277-7aa9-9028e8ac822b Layers:10(84..93)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cpu_buffer_type_alloc_buffer: failed to allocate buffer of size 127172253632
alloc_tensor_range: failed to allocate CPU buffer of size 127172253632
time=2025-12-01T15:08:11.131+08:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.40
time=2025-12-01T15:08:11.131+08:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:8 GPULayers:8[ID:GPU-83f25cda-96d7-f277-7aa9-9028e8ac822b Layers:8(86..93)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cpu_buffer_type_alloc_buffer: failed to allocate buffer of size 130389876672
alloc_tensor_range: failed to allocate CPU buffer of size 130389876672
time=2025-12-01T15:08:11.170+08:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.50
time=2025-12-01T15:08:11.170+08:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:8 GPULayers:7[ID:GPU-83f25cda-96d7-f277-7aa9-9028e8ac822b Layers:7(87..93)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cpu_buffer_type_alloc_buffer: failed to allocate buffer of size 131999228864
alloc_tensor_range: failed to allocate CPU buffer of size 131999228864
time=2025-12-01T15:08:11.207+08:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.60
time=2025-12-01T15:08:11.208+08:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:8 GPULayers:5[ID:GPU-83f25cda-96d7-f277-7aa9-9028e8ac822b Layers:5(89..93)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cpu_buffer_type_alloc_buffer: failed to allocate buffer of size 135216851904
alloc_tensor_range: failed to allocate CPU buffer of size 135216851904
time=2025-12-01T15:08:11.244+08:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.70
time=2025-12-01T15:08:11.244+08:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:8 GPULayers:4[ID:GPU-83f25cda-96d7-f277-7aa9-9028e8ac822b Layers:4(90..93)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
[GIN] 2025/12/01 - 15:08:11 | 200 | 0s | 127.0.0.1 | GET "/api/version"
[GIN] 2025/12/01 - 15:08:11 | 200 | 1.5657ms | 127.0.0.1 | GET "/api/tags"
ggml_backend_cpu_buffer_type_alloc_buffer: failed to allocate buffer of size 136826204096
alloc_tensor_range: failed to allocate CPU buffer of size 136826204096
time=2025-12-01T15:08:11.393+08:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.80
time=2025-12-01T15:08:11.394+08:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:8 GPULayers:2[ID:GPU-83f25cda-96d7-f277-7aa9-9028e8ac822b Layers:2(92..93)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cpu_buffer_type_alloc_buffer: failed to allocate buffer of size 140043827136
alloc_tensor_range: failed to allocate CPU buffer of size 140043827136
time=2025-12-01T15:08:11.429+08:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=0.90
time=2025-12-01T15:08:11.430+08:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:8 GPULayers:1[ID:GPU-83f25cda-96d7-f277-7aa9-9028e8ac822b Layers:1(93..93)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cpu_buffer_type_alloc_buffer: failed to allocate buffer of size 141653179328
alloc_tensor_range: failed to allocate CPU buffer of size 141653179328
time=2025-12-01T15:08:11.469+08:00 level=INFO source=server.go:824 msg="model layout did not fit, applying backoff" backoff=1.00
time=2025-12-01T15:08:11.469+08:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:8 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cpu_buffer_type_alloc_buffer: failed to allocate buffer of size 143261990848
alloc_tensor_range: failed to allocate CPU buffer of size 143261990848
time=2025-12-01T15:08:11.513+08:00 level=WARN source=server.go:818 msg="memory layout cannot be allocated" memory.InputWeights=350060544 memory.CPU.Weights="[1609352192 1609352192 1609352192 1609352192 1609352192 1609352192 1609352192 1609352192 1609352192 1609352192 1609352192 1401734144 1401734144 1609352192 1401734144 1401193472 1608811520 1401734144 1401193472 1608811520 1401734144 1401193472 1608811520 1401734144 1401193472 1608811520 1401734144 1401193472 1608811520 1401734144 1401193472 1608811520 1401734144 1401193472 1608811520 1401734144 1401193472 1608811520 1401734144 1401193472 1608811520 1401734144 1401193472 1608811520 1401734144 1401193472 1608811520 1401734144 1401193472 1608811520 1401734144 1401193472 1608811520 1401734144 1401193472 1608811520 1401734144 1401193472 1608811520 1401734144 1401193472 1608811520 1401734144 1401193472 1608811520 1401734144 1401193472 1608811520 1401734144 1401193472 1608811520 1401734144 1401193472 1608811520 1401734144 1401193472 1608811520 1401734144 1401193472 1608811520 1401734144 1401193472 1608811520 1609352192 1608811520 1608811520 1609352192 1608811520 1608811520 1609352192 1608811520 1608811520 1609352192 1608811520 1627146176]"
time=2025-12-01T15:08:11.513+08:00 level=INFO source=runner.go:1271 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:false KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-01T15:08:11.514+08:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="133.4 GiB"
time=2025-12-01T15:08:11.514+08:00 level=INFO source=device.go:272 msg="total memory" size="133.4 GiB"
time=2025-12-01T15:08:11.514+08:00 level=INFO source=sched.go:470 msg="Load failed" model=E:\ollama\models\blobs\sha256-02e588929c87e95d29571faea0693185503bea6f06ac3ea516092b037e149c8a error="memory layout cannot be allocated"
[GIN] 2025/12/01 - 15:08:11 | 200 | 65.1894ms | 127.0.0.1 | POST "/api/show"
time=2025-12-01T15:08:11.584+08:00 level=ERROR source=server.go:265 msg="llama runner terminated" error="exit status 1"


@molbal commented on GitHub (Dec 3, 2025):

@zhaoyuxin2 qwen3-vl:235b is a much larger model than qwen2.5-vl:72b.

Based on the logs you have ~23.2 GiB of free VRAM and ~78.6 GiB of free system RAM, but the model weights alone are about 133.4 GiB, so they cannot fit.

You need to do one of the following (see the size check below):

  1. Use a smaller model
  2. Use a more heavily quantized version of the model
  3. Install more memory in your system
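A rough sanity check of the numbers involved (a sketch: `ollama show` is the standard CLI command, and the bits-per-weight figure for Q4_K_M is approximate):

# Inspect the architecture, parameter count, and quantization before loading:
ollama show qwen3-vl:235b

# Back-of-envelope weight size for a Q4_K_M quantization (~4.85 bits/weight):
#   235e9 params * 4.85 bits / 8 bits-per-byte ≈ 142 GB ≈ 133 GiB
# which matches the `model weights ... size="133.4 GiB"` line in the log above.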

@KuSh commented on GitHub (Feb 15, 2026):

It seems to be a problem with the autodiscovery of Ollama models:

I tried asking a simple question, first with hollama using its default settings:

{"model":"qwen3:4b","options":{},"messages":[{"role":"user","content":"How many e in iconoclaste ?"}]}

vs. Zed without any specific settings:

{
  "keep_alive": -1,
  "messages": [
    {
      "content": "How many e in iconoclaste ?",
      "role": "user"
    },
    {
      "content": "Generate a concise 3-7 word title for this conversation, omitting punctuation.\nGo straight to the title, without any preamble and prefix like `Here's a concise suggestion:...` or `Title:`.\nIf the conversation is about a specific subject, include it in the title.\nBe descriptive. DO NOT speak in the first person.\n",
      "role": "user"
    }
  ],
  "model": "qwen3:4b",
  "options": {
    "num_ctx": 262144,
    "num_predict": null,
    "stop": [],
    "temperature": 1.0,
    "top_p": null
  },
  "stream": true,
  "think": false,
  "tools": []
}

num_ctx is forced to 262144, despite Zed's documentation saying it is capped to 4096:

> [Ollama Context Length](https://zed.dev/docs/ai/llm-providers?highlight=ollama#ollama-context)
>
> Zed API requests to Ollama include the context length as the num_ctx parameter. By default, Zed uses a context length of 4096 tokens for all Ollama models.
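For comparison, a client can pin the context length per request through the documented options field of the chat API; a minimal sketch (default endpoint, model from this thread):

curl http://localhost:11434/api/chat -d '{
  "model": "qwen3:4b",
  "messages": [{"role": "user", "content": "How many e in iconoclaste ?"}],
  "options": {"num_ctx": 4096}
}'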

If no num_ctx is configured, Ollama uses the correct value by default (4096 here for qwen3:4b):

request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:12 GPULayers:37[ID:GPU-20bd1237-f43f-9d1d-dedc-875016428156 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"

vs when zed use a 262144 num_ctx:

request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:262144 KvCacheType: NumThreads:12 GPULayers:37[ID:GPU-20bd1237-f43f-9d1d-dedc-875016428156 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"

262144 is the value found in the /api/show endpoint, but perhaps it should be capped to something that fits in available memory? Or allow setting a global max_tokens option?
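In the meantime, a server-side default can help, though it is a default rather than a hard cap: a request that sets num_ctx explicitly, as Zed does, still overrides it. A minimal sketch using two documented mechanisms (the qwen3-8k model name is just illustrative):

# Default context length for requests that omit num_ctx:
OLLAMA_CONTEXT_LENGTH=8192 ollama serve

# Or bake a default into a derived model via a Modelfile:
cat > Modelfile <<'EOF'
FROM qwen3:4b
PARAMETER num_ctx 8192
EOF
ollama create qwen3-8k -f Modelfile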

Reference: github-starred/ollama#8778