[PR #11333] [MERGED] ggml: Report ordinal IDs for AMD GPUs on Windows #24053

Closed
opened 2026-04-19 17:21:13 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/11333
Author: @jessegross
Created: 7/8/2025
Status: Merged
Merged: 7/9/2025
Merged by: @jessegross

Base: main ← Head: jessegross/uuid


📝 Commits (1)

  • f278cc5 ggml: Report ordinal IDs for AMD GPUs on Windows

📊 Changes

6 files changed (+45 additions, -33 deletions)

View changed files

📝 llama/patches/0017-ggml-Export-GPU-UUIDs.patch (+22 -16)
📝 ml/backend.go (+5 -5)
📝 ml/backend/ggml/ggml.go (+2 -2)
📝 ml/backend/ggml/ggml/include/ggml-backend.h (+1 -1)
📝 ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu (+14 -8)
📝 ml/backend/ggml/ggml/src/ggml-metal/ggml-metal.m (+1 -1)

📄 Description

We don't get valid UUIDs for AMD GPUs on Windows, so the best option is to use the ordinal IDs. This brings us in line with what we currently do on the Ollama server. The only exception is AMD GPUs on Linux, where the server falls back to ordinal IDs; the GGML implementation has no such fallback, but that case doesn't appear to occur for any of the GPUs that we support.

It's also possible that there are collisions between ordinal IDs from different libraries. However, the only places where we use them are AMD on Windows and Metal on macOS, which can never occur on the same system.
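The fallback described above can be sketched in Go. This is a hypothetical illustration, not the actual Ollama code: the function name `deviceID`, its signature, and the `GPU-` prefix are assumptions made for the example. It shows the idea of preferring a hardware UUID and falling back to the ordinal index when no valid UUID is reported (as for AMD GPUs on Windows).

```go
package main

import "fmt"

// deviceID is a hypothetical sketch of the fallback this PR describes:
// prefer the backend-reported UUID, but when the backend cannot produce
// a valid one (e.g. AMD GPUs on Windows), report the device's ordinal
// index so every GPU still gets a stable identifier.
func deviceID(uuid string, ordinal int) string {
	if uuid != "" {
		return "GPU-" + uuid
	}
	// No valid UUID: fall back to the ordinal. Collisions between
	// ordinals from different libraries are possible in principle,
	// but per the PR the ordinal path is only taken for AMD on
	// Windows and Metal on macOS, which never share a system.
	return fmt.Sprintf("%d", ordinal)
}

func main() {
	fmt.Println(deviceID("deadbeef", 0)) // UUID available
	fmt.Println(deviceID("", 1))         // fallback to ordinal
}
```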


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-19 17:21:13 -05:00
Reference: github-starred/ollama#24053