[PR #4215] [MERGED] llm: add minimum based on layer size #11417

Closed
opened 2026-04-12 23:29:45 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/4215
Author: @mxyng
Created: 5/7/2024
Status: Merged
Merged: 5/7/2024
Merged by: @mxyng

Base: main ← Head: mxyng/mem


📝 Commits (1)

  • 4736391 llm: add minimum based on layer size

📊 Changes

3 files changed (+7 additions, -7 deletions)


📝 gpu/gpu.go (+2 -2)
📝 gpu/gpu_darwin.go (+1 -1)
📝 llm/memory.go (+4 -4)

📄 Description

Adjust minimum memory requirements based on the model being loaded, and reduce the static minimum.
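The idea of the change can be sketched as follows: rather than requiring a single large static minimum before attempting any GPU offload, derive the minimum from the largest layer of the model being loaded, added to a smaller static overhead. This is a hedged illustration only; the constant, the function name `minimumMemory`, and the parameter `maxLayerBytes` are hypothetical and do not reflect the actual code in `llm/memory.go` or `gpu/gpu.go`.

```go
package main

import "fmt"

// staticMinimum is an illustrative reduced static overhead (not the
// actual value used by ollama).
const staticMinimum uint64 = 457 * 1024 * 1024

// minimumMemory is a hypothetical helper: the smallest allocation worth
// attempting for a model whose largest layer occupies maxLayerBytes.
// If even one layer cannot fit alongside the static overhead, offloading
// zero layers is the only option, so the minimum scales with layer size.
func minimumMemory(maxLayerBytes uint64) uint64 {
	return staticMinimum + maxLayerBytes
}

func main() {
	// e.g. a model whose largest layer is 512 MiB
	fmt.Println(minimumMemory(512 * 1024 * 1024))
}
```

The design choice this models: a fixed minimum is either too strict for small models or too lax for large ones, whereas a per-model minimum adapts to the layer granularity at which memory is actually allocated.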


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-12 23:29:45 -05:00

Reference: github-starred/ollama#11417