[PR #12894] [MERGED] discovery: only retry AMD GPUs #45242

opened 2026-04-25 00:56:35 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/12894
Author: @dhiltgen
Created: 10/31/2025
Status: Merged
Merged: 11/4/2025
Merged by: @dhiltgen

Base: main ← Head: tune_bootstrap


📝 Commits (2)

📝 98f1fa6 discovery: only retry AMD GPUs
📝 0b3c4ba review comments

📊 Changes

9 files changed (+96 additions, -137 deletions)

View changed files

📝 discover/runner.go (+31 -87)
📝 discover/types.go (+1 -1)
📝 llama/patches/0026-GPU-discovery-enhancements.patch (+24 -29)
📝 llama/patches/0030-Add-memory-detection-using-DXGI-PDH.patch (+6 -6)
📝 llama/patches/0031-interleave-multi-rope.patch (+2 -2)
📝 ml/backend/ggml/ggml.go (+0 -4)
📝 ml/backend/ggml/ggml/include/ggml-backend.h (+0 -2)
📝 ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp (+0 -3)
📝 ml/device.go (+32 -3)

📄 Description

Follow-up from #12775.

CUDA and Vulkan don't crash on unsupported devices, so retrying discovery isn't necessary for them; only AMD GPUs are retried. This also refactors the code to move the Library-specific logic into the ml package.


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-25 00:56:35 -05:00

Reference: github-starred/ollama#45242