[PR #2195] [MERGED] Ignore AMD integrated GPUs #21357

opened 2026-04-19 15:35:47 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/2195
Author: @dhiltgen
Created: 1/26/2024
Status: Merged
Merged: 1/26/2024
Merged by: @dhiltgen

Base: main ← Head: rocm_real_gpus


📝 Commits (1)

  • 9d7b5d6 Ignore AMD integrated GPUs

📊 Changes

3 files changed (+35 additions, -3 deletions)

Changed files:

📝 gpu/gpu.go (+25 -1)
📝 gpu/gpu_info.h (+1 -0)
📝 gpu/gpu_info_rocm.c (+9 -2)

📄 Description

Fixes #2054

Integrated GPUs (APUs) from AMD may be reported by ROCm, but we can't run on them with our current llama.cpp configuration. These iGPUs report 512M of memory, so I've coded the check to ignore any ROCm-reported GPU with less than 1G of memory. If we detect only an integrated GPU, we fall back to CPU mode. If we detect multiple ROCm GPUs, meaning one or more discrete GPUs alongside an integrated one, we now set ROCR_VISIBLE_DEVICES so the iGPU is ignored. If the user has explicitly set ROCR_VISIBLE_DEVICES, we respect their setting.
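The selection logic described above can be sketched in Go. This is a minimal illustration, not the PR's actual code: the `rocmGPU` type, the `visibleROCmDevices` function, and the 1 GiB constant are assumptions standing in for the real structures in `gpu/gpu.go`.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// rocmGPU is a hypothetical stand-in for the info the ROCm probe returns.
type rocmGPU struct {
	ID          int
	TotalMemory uint64 // bytes
}

// iGPUs report ~512M, so anything under 1 GiB is treated as integrated.
const igpuThreshold = 1 << 30

// visibleROCmDevices mirrors the PR's described behavior: respect a user-set
// ROCR_VISIBLE_DEVICES, drop GPUs under the threshold, and signal CPU
// fallback when no discrete GPU remains.
func visibleROCmDevices(gpus []rocmGPU) (devices string, cpuFallback bool) {
	if v, ok := os.LookupEnv("ROCR_VISIBLE_DEVICES"); ok {
		return v, false // user override wins
	}
	var ids []string
	for _, g := range gpus {
		if g.TotalMemory >= igpuThreshold {
			ids = append(ids, strconv.Itoa(g.ID))
		}
	}
	if len(ids) == 0 {
		return "", true // only iGPUs detected: fall back to CPU
	}
	return strings.Join(ids, ","), false
}

func main() {
	// One 512M iGPU plus one 16G discrete GPU: only device 1 stays visible.
	mixed := []rocmGPU{{0, 512 << 20}, {1, 16 << 30}}
	dev, cpu := visibleROCmDevices(mixed)
	fmt.Println(dev, cpu)
}
```

With the mixed configuration above, the sketch keeps only the discrete device ("1") and does not fall back to CPU; with an iGPU-only list it returns an empty device set and the CPU-fallback flag.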


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-19 15:35:47 -05:00

Reference: github-starred/ollama#21357