[PR #12090] [MERGED] Use runners for GPU discovery #76000

Closed
opened 2026-05-05 08:27:20 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/12090
Author: @dhiltgen
Created: 8/26/2025
Status: Merged
Merged: 10/1/2025
Merged by: @dhiltgen

Base: main ← Head: engine_based_discovery


📝 Commits (1)

  • 0619a81 Use runners for GPU discovery

📊 Changes

57 files changed (+3274 additions, -3805 deletions)

View changed files

📝 CMakeLists.txt (+3 -4)
📝 Dockerfile (+5 -5)
➖ discover/amd_common.go (+0 -83)
➖ discover/amd_hip_windows.go (+0 -147)
➖ discover/amd_linux.go (+0 -549)
➖ discover/amd_windows.go (+0 -226)
➖ discover/cpu_common.go (+0 -24)
📝 discover/cpu_linux.go (+19 -45)
📝 discover/cpu_linux_test.go (+1 -4)
📝 discover/cpu_windows.go (+23 -45)
📝 discover/cpu_windows_test.go (+0 -0)
➖ discover/cuda_common.go (+0 -64)
📝 discover/gpu.go (+93 -675)
📝 discover/gpu_darwin.go (+11 -56)
➖ discover/gpu_info.h (+0 -72)
➖ discover/gpu_info_cudart.c (+0 -181)
➖ discover/gpu_info_cudart.h (+0 -145)
➖ discover/gpu_info_nvcuda.c (+0 -251)
➖ discover/gpu_info_nvcuda.h (+0 -79)
➖ discover/gpu_info_nvml.c (+0 -104)

...and 37 more files

📄 Description

This revamps how we discover GPUs in the system by leveraging the Ollama runner. It should eliminate inconsistency between our GPU discovery and the runner's capabilities at runtime, particularly in cases where we try to filter out unsupported GPUs: the runner now does that implicitly, based on the actual device list. Because free VRAM reporting can be unreliable in some cases, which can lead to scheduling mistakes, this also includes a patch to leverage more reliable VRAM reporting libraries when available.
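
As a rough illustration of the new shape (a minimal sketch, not the code in this PR), the server can delegate device enumeration to a runner subprocess and trust whatever list comes back. The `--list-devices` flag, the `DeviceInfo` fields, and the runner path below are all hypothetical:

```go
// Hypothetical sketch: the server asks a runner subprocess for its device
// list instead of probing GPU libraries directly. All names are illustrative.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DeviceInfo mirrors the kind of per-device data a runner could report.
type DeviceInfo struct {
	ID        string `json:"id"`
	Library   string `json:"library"`    // e.g. "cuda", "rocm", "metal"
	TotalVRAM uint64 `json:"total_vram"` // bytes
	FreeVRAM  uint64 `json:"free_vram"`  // bytes; may be unreliable on some GPUs
}

// discoverViaRunner spawns the runner binary with a hypothetical
// --list-devices flag and decodes the JSON device list it prints.
// A device the runner cannot initialize is simply absent from the
// output, so no separate support-matrix filtering is needed here.
func discoverViaRunner(runnerPath string) ([]DeviceInfo, error) {
	out, err := exec.Command(runnerPath, "--list-devices").Output()
	if err != nil {
		return nil, fmt.Errorf("runner discovery failed: %w", err)
	}
	var devices []DeviceInfo
	if err := json.Unmarshal(out, &devices); err != nil {
		return nil, fmt.Errorf("parsing device list: %w", err)
	}
	return devices, nil
}

func main() {
	devices, err := discoverViaRunner("./ollama-runner")
	if err != nil {
		fmt.Println("falling back to CPU-only:", err)
		return
	}
	for _, d := range devices {
		fmt.Printf("%s (%s): %d MiB free of %d MiB\n",
			d.ID, d.Library, d.FreeVRAM>>20, d.TotalVRAM>>20)
	}
}
```

Because the list comes from the runner itself, a GPU the runner cannot use never appears in it, which is the consistency property the description above is after.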

Automatic workarounds have been removed, since only one GPU relied on them; the workaround for that GPU is now documented. That GPU will soon fall off the support matrix with the next ROCm bump.

Additional cleanup of the scheduler and discovery packages can be done in the future, once we have switched on the new memory management code and removed support for the llama runner.

Marking as draft while I do additional testing; discovered iGPUs will likely need additional handling.


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

Reference: github-starred/ollama#76000