[GH-ISSUE #15474] Ollama incorrectly detects available memory when running in containers #35651

Open
opened 2026-04-22 20:18:57 -05:00 by GiteaMirror · 7 comments

Originally created by @k8ieone on GitHub (Apr 10, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15474

What is the issue?

When Ollama runs in a container with a **memory limit** (this is very important), it incorrectly detects available memory.

When running on bare metal, Ollama uses `MemAvailable` from `/proc/meminfo`. This is the kernel's estimate of how much memory applications have to work with; the used page cache doesn't count against it, because page cache can be freed instantly.
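
For context, here's roughly what that read looks like as a standalone Go sketch (the real logic lives in `discover/cpu_linux.go`; the function name and error handling here are illustrative, not Ollama's actual code):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// memAvailableBytes parses the kernel's MemAvailable estimate
// (reported in kB) out of /proc/meminfo and returns it in bytes.
func memAvailableBytes() (uint64, error) {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		return 0, err
	}
	defer f.Close()

	s := bufio.NewScanner(f)
	for s.Scan() {
		// Line format: "MemAvailable:   12345678 kB"
		fields := strings.Fields(s.Text())
		if len(fields) >= 2 && fields[0] == "MemAvailable:" {
			kb, err := strconv.ParseUint(fields[1], 10, 64)
			if err != nil {
				return 0, err
			}
			return kb * 1024, nil
		}
	}
	return 0, fmt.Errorf("MemAvailable not found in /proc/meminfo")
}

func main() {
	avail, err := memAvailableBytes()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("MemAvailable: %d bytes\n", avail)
}
```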

When running in a container with a memory limit, Ollama instead reads the available memory from cgroups. But cgroups don't expose a metric comparable to `MemAvailable` from `/proc/meminfo`. The closest equivalent is `/sys/fs/cgroup/memory.current`, but that also includes the used page cache. In practice this means that after Ollama loads a model once, Linux caches it in the page cache. Even after the model is unloaded, the page cache stays filled, and since the page cache is counted in `memory.current`, Ollama concludes there isn't enough available memory.

TL;DR: `mem.FreeMemory = /sys/fs/cgroup/memory.max - /sys/fs/cgroup/memory.current` is too naive and causes Ollama to think it has less memory available than it actually does.
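
For illustration, a less naive calculation could subtract the reclaimable page cache (`inactive_file` from `memory.stat`) from `memory.current` before comparing against the limit; this mirrors how cAdvisor computes its "working set" metric. The sketch below assumes a cgroup v2 unified hierarchy mounted at `/sys/fs/cgroup`; paths and names are illustrative, and this is not necessarily what my PR does:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

const cgroupRoot = "/sys/fs/cgroup" // assumes cgroup v2 unified hierarchy

// readUint parses a single-value cgroup file such as memory.current.
func readUint(path string) (uint64, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return 0, err
	}
	return strconv.ParseUint(strings.TrimSpace(string(b)), 10, 64)
}

// inactiveFileBytes reads the reclaimable page-cache figure from memory.stat.
func inactiveFileBytes() (uint64, error) {
	f, err := os.Open(cgroupRoot + "/memory.stat")
	if err != nil {
		return 0, err
	}
	defer f.Close()

	s := bufio.NewScanner(f)
	for s.Scan() {
		fields := strings.Fields(s.Text())
		if len(fields) == 2 && fields[0] == "inactive_file" {
			return strconv.ParseUint(fields[1], 10, 64)
		}
	}
	return 0, fmt.Errorf("inactive_file not found in memory.stat")
}

func main() {
	// memory.max contains the literal string "max" when no limit is set;
	// ParseUint then fails, and a real implementation would fall back to
	// /proc/meminfo as on bare metal.
	limit, err := readUint(cgroupRoot + "/memory.max")
	if err != nil {
		fmt.Fprintln(os.Stderr, "no cgroup v2 memory limit:", err)
		os.Exit(1)
	}
	current, err := readUint(cgroupRoot + "/memory.current")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	inactive, _ := inactiveFileBytes() // treat as 0 if missing

	// Naive:  free = max - current              (page cache counts as used)
	// Better: free = max - (current - inactive) (reclaimable cache is free)
	used := current
	if inactive < current {
		used = current - inactive
	}
	fmt.Printf("free ≈ %d bytes\n", limit-used)
}
```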

Relevant lines: https://github.com/ollama/ollama/blob/main/discover/cpu_linux.go#L69-L79

I'll have an LLM-generated PR ready to fix this soon. It's a fairly minimal change and it looks good to me, but I'm not a Go programmer.

Relevant log output


OS

Docker, Linux

GPU

No response

CPU

Intel, Other

Ollama version

0.20.5

GiteaMirror added the bug label 2026-04-22 20:18:57 -05:00

@markasoftware-tc commented on GitHub (Apr 10, 2026):

See #13782


@k8ieone commented on GitHub (Apr 10, 2026):

Oh yesss, thank youu!


@markasoftware-tc commented on GitHub (Apr 10, 2026):

I'm glad more people are finding this issue, though. If there's anything you can do to get a maintainer to care about this issue, please do! Every week or so we have a customer complain about this.


@k8ieone commented on GitHub (Apr 10, 2026):

I'm afraid I can't help your PR any more than you can :/

I built an image from my fork and I run that for now.

But I'm glad I'm not the only one who ran into this!


@PureBlissAK commented on GitHub (Apr 18, 2026):

🤖 Automated Triage & Analysis Report

Issue: #15474
Analyzed: 2026-04-18T18:20:54.825088

Analysis

  • Type: unknown
  • Severity: medium
  • Components: unknown

Implementation Plan

  • Effort: medium
  • Steps:

This issue has been triaged and marked for implementation.


@markasoftware-tc commented on GitHub (Apr 20, 2026):

@PureBlissAK your bot is leaving a comment on all ollama issues, presumably as part of an openclaw or other agentic thing that's trying to fix ollama issues. This is extremely rude and disruptive, and you should delete all these comments.


@k8ieone commented on GitHub (Apr 21, 2026):

It doesn't even say anything useful...

