[PR #9822] [MERGED] ml/backend/ggml: allocate memory with malloc when loading model #44311

Closed
opened 2026-04-24 23:48:42 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/9822
Author: @jmorganca
Created: 3/17/2025
Status: Merged
Merged: 3/17/2025
Merged by: @jmorganca

Base: `main` ← Head: `jmorganca/malloc`


📝 Commits (1)

  • 45aa2db ml/backend/ggml: allocate memory with malloc when loading model

📊 Changes

1 file changed (+9 additions, -7 deletions)

View changed files

📝 ml/backend/ggml/ggml.go (+9 -7)

📄 Description

On some platforms the `bts` buffer would not be freed, leading to large memory allocations on model load.


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-24 23:48:42 -05:00

Reference: github-starred/ollama#44311