[GH-ISSUE #14557] qwen3.5:9b Fails to Load on ollama 0.17.5 (not enough memory) #35203

Open
opened 2026-04-22 19:34:24 -05:00 by GiteaMirror · 21 comments

Originally created by @chr0n1x on GitHub (Mar 2, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14557

What is the issue?

None of the qwen3.5 tags I pull from ollama.com work.

I'm getting weird "OOM" errors on model load. I'm running ollama 0.17.5 with 24 GiB of VRAM and still getting errors on `qwen3.5:9b` (the exact failure is in the log output below).

ollama itself starts up with:

time=2026-03-02T20:39:05.345Z level=INFO source=routes.go:1720 msg="Listening on [::]:11434 (version 0.17.5)"
time=2026-03-02T20:39:05.346Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-03-02T20:39:05.347Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 43495"
time=2026-03-02T20:39:05.582Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 40623"
time=2026-03-02T20:39:05.667Z level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2026-03-02T20:39:05.667Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 37705"
time=2026-03-02T20:39:05.878Z level=INFO source=types.go:42 msg="inference compute" id=GPU-5068c5ff-0f1d-ec77-edbc-85cca4831d5e filter_id="" library=CUDA compute=8.6 name=CUDA0 description="NVIDIA GeForce RTX 3090" libdirs=ollama,cuda_v12 driver=12.8 pci_id=0000:01:00.0 type=discrete total="24.0 GiB" available="23.6 GiB"
time=2026-03-02T20:39:05.878Z level=INFO source=routes.go:1770 msg="vram-based default context" total_vram="24.0 GiB" default_num_ctx=32768

`ollama ls` reports nothing running at all, and this GPU is not used by anything else.

Relevant log output

time=2026-03-02T20:31:42.464Z level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="5.6 GiB"
time=2026-03-02T20:31:42.464Z level=INFO source=device.go:245 msg="model weights" device=CPU size="563.7 MiB"
time=2026-03-02T20:31:42.464Z level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="2.2 GiB"
time=2026-03-02T20:31:42.464Z level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="955.7 MiB"
time=2026-03-02T20:31:42.464Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="31.7 MiB"
time=2026-03-02T20:31:42.464Z level=INFO source=device.go:272 msg="total memory" size="9.3 GiB"
time=2026-03-02T20:31:42.464Z level=INFO source=sched.go:518 msg="Load failed" model=/root/.ollama/models/blobs/sha256-dec52a44569a2a25341c4e4d3fee25846eed4f6f0b936278e3a3c900bb99d37c error="model requires more system memory (595.3 MiB) than is available (19.7 MiB)"

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.17.5

GiteaMirror added the bug label 2026-04-22 19:34:24 -05:00

@dhiltgen commented on GitHub (Mar 2, 2026):

The log message "model requires more system memory (595.3 MiB) than is available (19.7 MiB)" implies you're very low on system memory, and that's what blocked the load. Is that accurate for your system, or do you think there's a bug and we misidentified the available memory? What does `free -h` show?

If you're operating with very little available system memory, it's possible that, as we're trying to load the model, pages are getting paged out while we're setting things up, leading to corruption.
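A minimal sketch of what to check (assuming Linux; the cgroup paths assume cgroup v2 and only matter if ollama runs inside a container or pod):

```shell
# Host view: the "available" column already discounts reclaimable page cache.
free -h

# Container view: ollama's memory picture comes from the cgroup, not the host.
cat /sys/fs/cgroup/memory.max       # memory limit ("max" = unlimited)
cat /sys/fs/cgroup/memory.current   # current usage, page cache included
```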


@chr0n1x commented on GitHub (Mar 2, 2026):

@dhiltgen oh interesting, I rebooted the node and it works now. That's confusing to me, though. My GPU has 24 GiB free; why does system memory matter? AFAIK running a model used to work even when system memory usage was about the same 😕


@chr0n1x commented on GitHub (Mar 2, 2026):

OK, I spoke too soon. I just tried to run `qwen3.5:27b-q4_K_M`.

NVIDIA GPU with 24 GiB of VRAM (nothing else running on it).

My system has 34.04 GiB of 90.09 GiB in use (i.e. more than 56 GiB of RAM free).

Still getting this weird error: `500: model requires more system memory (2.7 GiB) than is available (1.8 GiB)`

Meanwhile, I'm able to run `hf.co/unsloth/Qwen3-30B-A3B-Thinking-2507-GGUF:Q4_K_XL` on the same system.


@chr0n1x commented on GitHub (Mar 2, 2026):

@dhiltgen Don't know if this helps, but here are the server logs for the run I just tried:

time=2026-03-02T22:32:09.573Z level=INFO source=runner.go:1302 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:32768 KvCacheType: NumThreads:6 GPULayers:65[ID:GPU-5068c5ff-0f1d-ec77-edbc-85cca4831d5e Layers:65(0..64)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-03-02T22:32:09.631Z level=INFO source=ggml.go:136 msg="" architecture=qwen35 file_type=Q4_K_M name="" description="" num_tensors=1307 num_key_values=53
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-sse42.so
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes, ID: GPU-5068c5ff-0f1d-ec77-edbc-85cca4831d5e
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
time=2026-03-02T22:32:09.736Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2026-03-02T22:32:10.824Z level=INFO source=runner.go:1302 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:32768 KvCacheType: NumThreads:6 GPULayers:65[ID:GPU-5068c5ff-0f1d-ec77-edbc-85cca4831d5e Layers:65(0..64)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 903.62 MiB on device 0: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n_impl: failed to allocate CUDA0 buffer of size 947511296
time=2026-03-02T22:32:12.067Z level=INFO source=server.go:879 msg="model layout did not fit, applying backoff" backoff=0.10
time=2026-03-02T22:32:12.067Z level=WARN source=server.go:1044 msg="model request too large for system" requested="2.7 GiB" available="1.8 GiB" total="30.0 GiB" free="1.8 GiB" swap="0 B"
time=2026-03-02T22:32:12.067Z level=INFO source=runner.go:1302 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:Disabled KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-03-02T22:32:12.067Z level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="15.5 GiB"
time=2026-03-02T22:32:12.067Z level=INFO source=device.go:245 msg="model weights" device=CPU size="710.2 MiB"
time=2026-03-02T22:32:12.067Z level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="5.7 GiB"
time=2026-03-02T22:32:12.067Z level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="1013.6 MiB"
time=2026-03-02T22:32:12.067Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="168.0 MiB"
time=2026-03-02T22:32:12.067Z level=INFO source=device.go:272 msg="total memory" size="23.0 GiB"
time=2026-03-02T22:32:12.067Z level=INFO source=sched.go:518 msg="Load failed" model=/root/.ollama/models/blobs/sha256-7935de6e08f9444536d0edcacf19d2166b34bef8ddb4ac7ce9263ff5cad0693b error="model requires more system memory (2.7 GiB) than is available (1.8 GiB)"
time=2026-03-02T22:32:12.257Z level=ERROR source=server.go:303 msg="llama runner terminated" error="signal: killed"

@codebest-og commented on GitHub (Mar 3, 2026):

On my Ubuntu system (96 GB of RAM, 32 GB of VRAM) I often run out of memory, same type of issue.

`sync; echo 3 | sudo tee /proc/sys/vm/drop_caches`

This lets me load models again without a reboot. It clears the system page cache, which allows ollama to see the true free memory. Might help, might not. Give it a try.


@chr0n1x commented on GitHub (Mar 3, 2026):

@codebest-og I'm on talosOS in a k8s cluster, with the entire ollama dir mounted into a PV. I blew away the entire PV and it still happens, so I'm getting this on what's effectively a fresh instance 😕


@airhand commented on GitHub (Mar 3, 2026):

Same problem here.


@ghmer commented on GitHub (Mar 3, 2026):

> @codebest-og I'm on talosOS in a k8s cluster, with the entire ollama dir mounted into a PV. I blew away the entire PV and it still happens, so I'm getting this on what's effectively a fresh instance 😕

You're running inside a k8s cluster? Are you hitting any resource limits, perhaps?


@chr0n1x commented on GitHub (Mar 3, 2026):

@ghmer I don't have any limits on this pod. And if there were limits, the RS would be in a crash loop, wouldn't it? There are no events on my RS/deployment other than startup and successful allocs.

Unless you're talking about something else? I'm overall confused because I've run larger models than this on older versions of ollama on this same node, which is a reserved node.


@codebest-og commented on GitHub (Mar 8, 2026):

> > @codebest-og I'm on talosOS in a k8s cluster, with the entire ollama dir mounted into a PV. I blew away the entire PV and it still happens, so I'm getting this on what's effectively a fresh instance 😕
>
> You're running inside a k8s cluster? Are you hitting any resource limits, perhaps?

Old school: Ubuntu / Docker. My issue is most likely isolated to Ubuntu, where the disk cache uses up all available memory. Ollama then thinks it has no free memory when it actually has tons. I have to clear the disk cache often to get models to load. I should move to a different platform.


@OctoberRust2000 commented on GitHub (Mar 8, 2026):

I have a similar issue. Some recent update of Ollama broke something. Previously I was able to load large models (like llama3.3, 42 GB), but not anymore.

OS
Win 10

GPU
2 * RTX 3090 (48 GB in total)

CPU
AMD

Ollama version
0.17.7


@OctoberRust2000 commented on GitHub (Mar 8, 2026):

Downgraded Ollama to version 0.16.3 and I can load llama3.3 back again.


@2jfs904judsw20600jikn613d0dookl23jsig commented on GitHub (Mar 8, 2026):

Same here. ollama broke something. I'm trying to load and run `qwen3.5:27b` on a 4090 with 24 GB of VRAM; it was working ~72 hours ago with no issues.

New behavior now: it hangs forever at 100% GPU utilization and can't generate even a single token.

Something was changed and broke it.


@chr0n1x commented on GitHub (Mar 9, 2026):

FWIW, I bumped to `latest` (so 0.17.7, I think?) and it's loading now, but slow as all hell. For some reason the `Q4_K_M` tag/quant from ollama.com can't fit in 24 GiB of VRAM, so ollama decided to split it 12% CPU / 88% GPU.

I won't close this until a maintainer chimes in with a definitive fix, though 😕


@Student414 commented on GitHub (Mar 11, 2026):

Same here. I am using Ubuntu 24.04 on Hyper-V with dynamic memory enabled. When the system is idle, actual memory usage is about 6 GB, so Hyper-V only allocates about 10 GB to the Ubuntu VM. But when I load a model (recently, usually the Qwen3.5 series), I get this error.
When I allocate memory manually with `head -c 10G /dev/zero | tail | sleep 60`, the model then loads without errors.
So I want to ask: why not disable the memory-checking logic when loading a model?


@markasoftware-tc commented on GitHub (Mar 30, 2026):

@chr0n1x You mention "pod"; are you running this in a k8s container (or any container)? If so, this is likely fixed by https://github.com/ollama/ollama/pull/13782

The root cause is that ollama presently counts page-cached memory as "used" when running inside a memory cgroup.


@chr0n1x commented on GitHub (Mar 30, 2026):

@markasoftware-tc yes.

That being said, later versions of ollama have been OK, so I'm unsure whether the issue I observed is the same one you're describing. I've moved to 0.18.x, I think.


@markasoftware-tc commented on GitHub (Mar 30, 2026):

> @markasoftware-tc yes.
>
> That being said, later versions of ollama have been OK, so I'm unsure whether the issue I observed is the same one you're describing. I've moved to 0.18.x, I think.

If you want to test it more conclusively, you can intentionally consume page cache by doing something like `head -c 10000000000 < /dev/zero > some_file` (this consumes roughly 10 GB of page cache; you can watch the buff/cache column in `free -h` grow from before to after). Maybe try an even larger number: if your pod truly has no memory limit, make it at least as large as your system memory. Then the issue is more likely to reproduce.
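Spelled out end to end, a repro along those lines might look like the sketch below (the scratch path, fill size, and model tag are placeholders, not taken from this thread):

```shell
# Hypothetical repro: fill the page cache, then retry the load that was failing.
free -h                                   # note the buff/cache column

# Write ~20 GB of zeros to a scratch file; the kernel keeps those pages cached.
head -c 20000000000 < /dev/zero > /tmp/pagecache_filler

free -h                                   # buff/cache should have grown by roughly the file size

# Retry the model that previously failed to load, for example:
ollama run qwen3.5:9b "hello"

# Clean up, and optionally drop the cache afterwards.
rm /tmp/pagecache_filler
sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
```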


@karmeleon commented on GitHub (Apr 3, 2026):

Also getting this for a CPU model:

> model requires more system memory (25.0 GiB) than is available (16.9 GiB)
$ free -h
               total        used        free      shared  buff/cache   available
Mem:            47Gi       4.5Gi       1.3Gi       1.2Gi        43Gi        42Gi
Swap:          8.0Gi       1.1Gi       6.9Gi
$ docker exec -it ollama cat /proc/meminfo | egrep "^(MemAvailable|MemFree|Buffers|Cached):"
MemFree:         1312044 kB
MemAvailable:   44598992 kB
Buffers:           66916 kB
Cached:         44140980 kB

I'm not even sure where it's getting 16.9 GiB from; that doesn't line up with anything in the system.


@markasoftware-tc commented on GitHub (Apr 3, 2026):

@karmeleon `/proc/meminfo` is not container-aware (it reflects the host, not the cgroup). What you want to do is enter the container and read `/sys/fs/cgroup/memory.max` (the memory limit) and `/sys/fs/cgroup/memory.current` (memory used by the container, including page cache), then subtract the two; that's how ollama currently computes the "available" memory in a container.

What my PR linked above does is read `/sys/fs/cgroup/memory.stat` to figure out what portion of `/sys/fs/cgroup/memory.current` is actually just page cache, and exclude that from the used-memory calculation.
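To make that arithmetic concrete, here is a rough sketch of the calculation for cgroup v2, run inside the container. It only illustrates the description above (using the `file` field of `memory.stat` as the page-cache figure); it is not the actual code from the PR:

```shell
# Recompute "available" memory both ways (cgroup v2 paths).
limit=$(cat /sys/fs/cgroup/memory.max)        # memory limit; the literal string "max" means unlimited
current=$(cat /sys/fs/cgroup/memory.current)  # current usage, page cache included
file_cache=$(awk '$1 == "file" {print $2}' /sys/fs/cgroup/memory.stat)  # page-cache portion of current

echo "limit=$limit current=$current file_cache=$file_cache"

if [ "$limit" != "max" ]; then
  echo "naive available (limit - current):                $(( limit - current )) bytes"
  echo "cache-aware available (limit - current + cache):  $(( limit - current + file_cache )) bytes"
fi
```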


@OctoberRust2000 commented on GitHub (Apr 13, 2026):

The problem still exists in version 0.20.3
