[PR #12683] [MERGED] win: more verbose load failures #13917

Closed
opened 2026-04-13 00:40:14 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/12683
Author: @dhiltgen
Created: 10/17/2025
Status: Merged
Merged: 10/18/2025
Merged by: @dhiltgen

Base: main ← Head: win_lib_error_log


📝 Commits (1)

  • 0942d53 win: more verbose load failures

📊 Changes

2 files changed (+44 additions, -0 deletions)


llama/patches/0031-report-LoadLibrary-failures.patch (+32 -0)
📝 ml/backend/ggml/ggml/src/ggml-backend-reg.cpp (+12 -0)

📄 Description

When loading the dynamic libraries, if something goes wrong, report some details. Unfortunately this won't explain which dependencies are missing, but this breadcrumb in the logs should help us diagnose GPU discovery failures.

For the following example, I manually deleted one of the CUDA dependency libraries and set $env:OLLAMA_DEBUG="2"; during bootstrap, the following is logged:

time=2025-10-17T14:44:58.970-07:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\danie\code\ollama\dist\windows-amd64\lib\ollama\cuda_v12
dl_load_library unable to load library C:\Users\danie\code\ollama\dist\windows-amd64\lib\ollama\cuda_v12\ggml-cuda.dll: The specified module could not be found.

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-13 00:40:14 -05:00

Reference: github-starred/ollama#13917