[PR #7466] [MERGED] Workaround buggy P2P ROCm copy on windows #17702

Closed
opened 2026-04-16 06:11:27 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/7466
Author: @dhiltgen
Created: 11/1/2024
Status: Merged
Merged: 11/7/2024
Merged by: @dhiltgen

Base: main ← Head: win_rocm_p2p_workaround


📝 Commits (1)

  • 50369c3 Workaround buggy P2P ROCm copy on windows

📊 Changes

1 file changed (+6 additions, -0 deletions)


📝 llama/make/Makefile.rocm (+6 -0)

📄 Description

This enables the workaround code only on Windows, which should help Windows users with multiple AMD GPUs.

While testing #7378 I've only been able to reproduce the gibberish behavior on one system, and only on Windows. ROCm on Windows shouldn't allow configurations with less system memory than VRAM, so we believe enabling this flag is safe there. (On Linux, enabling this flag breaks users who have less RAM than VRAM when they try to load a model.)

Fixes #7461
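
The six added lines live in llama/make/Makefile.rocm. Below is a minimal sketch of what a Windows-only compile flag looks like in such a Makefile, assuming the workaround is ggml's GGML_CUDA_NO_PEER_COPY define (which forces GPU-to-GPU tensor copies to be staged through host memory rather than using direct peer-to-peer transfers); the GPU_COMPILER_CUFLAGS variable and the $(OS) check are illustrative, not the exact diff:

```make
# Sketch only: enable the P2P copy workaround on Windows builds.
# GGML_CUDA_NO_PEER_COPY makes ggml stage GPU-to-GPU tensor copies
# through host memory, avoiding the buggy direct P2P path in ROCm.
ifeq ($(OS),Windows_NT)
	GPU_COMPILER_CUFLAGS += -DGGML_CUDA_NO_PEER_COPY=1
endif
```

Gating the define behind ifeq ($(OS),Windows_NT) keeps Linux builds unchanged, which matters because, per the description above, staging copies through host memory breaks Linux systems that have less RAM than VRAM.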


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-16 06:11:27 -05:00
Reference: github-starred/ollama#17702