[PR #10292] [MERGED] discover: Support cgroups cores and memory limitations #44449

Closed
opened 2026-04-24 23:55:52 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/10292
Author: @SiLeader
Created: 4/15/2025
Status: Merged
Merged: 11/18/2025
Merged by: @dhiltgen

Base: main ← Head: cgroups-resource-limits


📝 Commits (4)

  • 24d85e3 Add supports for cgroups cores and memory limitations
  • 0fad1ba Merge branch 'main' into cgroups-resource-limits
  • 2a73390 fix compile error and add logs
  • 15a14d7 remove cpu info log

📊 Changes

1 file changed (+71 additions, -1 deletions)


📝 discover/cpu_linux.go (+71 -1)

📄 Description

This pull request can improve inference performance when running on CPUs in a container, such as a Kubernetes Pod.
Kubernetes and Docker limit CPU and memory through cgroups, but Ollama does not read CPU and memory information from them. As a result, Ollama starts too many threads in a constrained environment and performance degrades.

This pull request addresses the problem by reading /sys/fs/cgroup/cpu.max and /sys/fs/cgroup/memory.(max|current) when these files exist.
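As a rough illustration of the approach, the cgroup v2 cpu.max file contains either "<quota> <period>" or "max <period>", and the effective core limit is the quota divided by the period, rounded up. The sketch below shows one way to derive a thread cap from it; parseCPUMax is a hypothetical helper for illustration, not the PR's actual function name or implementation.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// parseCPUMax parses cgroup v2 cpu.max content ("<quota> <period>" or
// "max <period>") and returns the effective core limit, or 0 if no
// limit is set or the content is malformed.
func parseCPUMax(content string) int {
	fields := strings.Fields(strings.TrimSpace(content))
	if len(fields) != 2 || fields[0] == "max" {
		return 0 // "max" means unlimited
	}
	quota, err1 := strconv.Atoi(fields[0])
	period, err2 := strconv.Atoi(fields[1])
	if err1 != nil || err2 != nil || period <= 0 {
		return 0
	}
	// Round up: a quota of 150000 with a 100000 period allows 1.5
	// cores of CPU time, so cap the thread count at 2.
	return (quota + period - 1) / period
}

func main() {
	data, err := os.ReadFile("/sys/fs/cgroup/cpu.max")
	if err != nil {
		// File absent (cgroup v1 or no container limit): fall back
		// to the host CPU count as before.
		fmt.Println("no cgroup v2 CPU limit; using host CPU count")
		return
	}
	fmt.Println("effective core limit:", parseCPUMax(string(data)))
}
```

The same pattern applies to memory.max, whose content is a single byte count or the literal "max".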


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-24 23:55:52 -05:00

Reference: github-starred/ollama#44449