[PR #8838] [CLOSED] fix: memory leak in clip_model_load #12792

Closed
opened 2026-04-13 00:09:49 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/8838
Author: @yushihang
Created: 2/5/2025
Status: Closed

Base: main ← Head: release-meta-on-error


📝 Commits (1)

  • 6ae0781 fix: memory leak in clip_model_load

📊 Changes

1 file changed (+2 additions, -0 deletions)

View changed files

📝 llama/llama.cpp/examples/llava/clip.cpp (+2 -0)

📄 Description

Add ggml_free(meta) to the error-handling paths to prevent a memory leak when file operations fail. The meta context is allocated during model loading and must be freed on every exit path, not only on success.


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-13 00:09:49 -05:00

Reference: github-starred/ollama#12792