[PR #12590] [MERGED] logs: fix bogus "0 MiB free" log line #12619

Closed · opened 2025-11-12 16:41:03 -06:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/12590
Author: @dhiltgen
Created: 10/12/2025
Status: Merged
Merged: 10/14/2025
Merged by: @dhiltgen

Base: main ← Head: loading


📝 Commits (1)

  • 8d1ba5b logs: fix bogus "0 MiB free" log line

📊 Changes

2 files changed (+28 additions, -7 deletions)

📝 llama/llama.cpp/src/llama.cpp (+3 -1)
📝 llama/patches/0024-ggml-Enable-resetting-backend-devices.patch (+25 -6)

📄 Description

In some recent issues, I noticed that this new "0 MiB free" log line is causing confusion:

```
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce GT 1030) (0000:06:00.0) - 0 MiB free
```

On the llama runner, after the recent GGML bump, a new log line incorrectly reports "0 MiB free" because our patch removes the memory information from the device props. This change adjusts the llama.cpp code to fetch the actual free memory of the active device instead.
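For context, here is a minimal sketch of the kind of change this implies, assuming the public ggml-backend API (`ggml_backend_dev_memory`, `ggml_backend_dev_name`, `ggml_backend_dev_description`) and an illustrative `model->devices` loop; it is not the exact diff from the PR:

```cpp
// Hypothetical sketch: query the active device directly at log time
// instead of printing the memory_free field from the cached device props
// (which Ollama's patch intentionally leaves empty to avoid initializing
// every backend device up front).
for (ggml_backend_dev_t dev : model->devices) {
    size_t free_mem  = 0;
    size_t total_mem = 0;
    // Fetch the current free/total memory for this device.
    ggml_backend_dev_memory(dev, &free_mem, &total_mem);

    LLAMA_LOG_INFO("%s: using device %s (%s) - %zu MiB free\n",
                   __func__,
                   ggml_backend_dev_name(dev),
                   ggml_backend_dev_description(dev),
                   free_mem / 1024 / 1024);
}
```

Querying the device when the line is logged sidesteps the patched-out props, so the reported number reflects what is actually free on the active device.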


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2025-11-12 16:41:03 -06:00

Reference: github-starred/ollama-ollama#12619