[PR #12775] [MERGED] Fix vulkan PCI ID and ID handling #12683

Closed
opened 2025-11-12 16:43:28 -06:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/12775
Author: @dhiltgen
Created: 10/24/2025
Status: Merged
Merged: 10/28/2025
Merged by: @dhiltgen

Base: `main` ← Head: `vulkan_indexes`


📝 Commits (2)

- [`5c5fd1e`](https://github.com/ollama/ollama/commit/5c5fd1ebe84c9f1fae78426b3d58f2fe4c742c87) Fix vulkan PCI ID and ID handling
- [`4420984`](https://github.com/ollama/ollama/commit/4420984c038687c904614e1f96af906b7fda43f9) review comments

📊 Changes

15 files changed (+418 additions, -447 deletions)

View changed files

📝 discover/runner.go (+20 -7)
📝 discover/types.go (+3 -0)
📝 llama/patches/0026-GPU-discovery-enhancements.patch (+352 -45)
📝 llama/patches/0027-NVML-fallback-for-unified-memory-GPUs.patch (+1 -1)
➖ llama/patches/0027-vulkan-get-GPU-ID-ollama-v0.11.5.patch (+0 -95)
📝 llama/patches/0028-CUDA-Changing-the-CUDA-scheduling-strategy-to-spin-1.patch (+1 -1)
➖ llama/patches/0028-vulkan-pci-and-memory.patch (+0 -254)
📝 llama/patches/0029-report-LoadLibrary-failures.patch (+0 -0)
📝 ml/backend/ggml/ggml.go (+3 -1)
📝 ml/backend/ggml/ggml/include/ggml-backend.h (+0 -3)
📝 ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu (+3 -12)
📝 ml/backend/ggml/ggml/src/ggml-impl.h (+1 -1)
📝 ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp (+17 -17)
📝 ml/backend/ggml/ggml/src/mem_hip.cpp (+11 -8)
📝 ml/device.go (+6 -2)

📄 Description

Intel GPUs may not report PCI IDs, which was leading to incorrect overlap detection. Switch to using the existing PCI IDs. AMD GPUs claim not to report PCI IDs but actually do, so try anyway, since this is required for ADLX to find the GPUs on Windows.

~~Draft while I test on more systems...~~
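The overlap detection the description refers to can be illustrated with a minimal sketch: when the same physical GPU is visible through multiple backends (e.g. CUDA and Vulkan), a non-empty PCI ID lets discovery collapse them into one device, while a device that reports no PCI ID cannot be matched and must be kept. The `device` type and `dedupe` function below are hypothetical stand-ins, not the actual structures in ollama's `discover` package.

```go
package main

import "fmt"

// device is a simplified stand-in for a discovered GPU (hypothetical type,
// not ollama's actual struct).
type device struct {
	Library string // e.g. "CUDA", "ROCm", "Vulkan"
	PCIID   string // may be empty when the driver does not report it
	Name    string
}

// dedupe keeps one entry per physical GPU. Devices sharing a non-empty PCI ID
// are treated as the same GPU seen through different backends; devices with an
// empty PCI ID cannot be matched and are always kept.
func dedupe(devs []device) []device {
	seen := map[string]bool{}
	var out []device
	for _, d := range devs {
		if d.PCIID == "" || !seen[d.PCIID] {
			out = append(out, d)
		}
		if d.PCIID != "" {
			seen[d.PCIID] = true
		}
	}
	return out
}

func main() {
	devs := []device{
		{"CUDA", "0000:01:00.0", "RTX 4090"},
		{"Vulkan", "0000:01:00.0", "RTX 4090"}, // same card via Vulkan: dropped
		{"Vulkan", "", "Intel Arc"},            // no PCI ID reported: kept
	}
	for _, d := range dedupe(devs) {
		fmt.Println(d.Library, d.Name)
	}
}
```

This also shows why a missing PCI ID causes the bug the PR fixes: with an empty ID there is no key to match on, so a backend that fails to report one can never be recognized as overlapping another backend's view of the same card.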


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2025-11-12 16:43:28 -06:00

Reference: github-starred/ollama-ollama#12683