[PR #814] [CLOSED] ROCm support #41588

Closed
opened 2026-04-24 21:26:39 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/814
Author: @65a
Created: 10/17/2023
Status: Closed

Base: main ← Head: main


📝 Commits (1)

  • 33a0f7c Use build tags to generate accelerated binaries for CUDA and ROCm on Linux.

📊 Changes

10 files changed (+264 additions, -59 deletions)


📝 Dockerfile (+4 -2)
📝 Dockerfile.build (+2 -2)
📝 README.md (+33 -2)
➕ llm/accelerator_cuda.go (+67 -0)
➕ llm/accelerator_none.go (+21 -0)
➕ llm/accelerator_rocm.go (+85 -0)
📝 llm/llama.cpp/generate_linux.go (+0 -7)
➕ llm/llama.cpp/generate_linux_cuda.go (+24 -0)
➕ llm/llama.cpp/generate_linux_rocm.go (+25 -0)
📝 llm/llama.go (+3 -46)

📄 Description

#667 was closed during a bad rebase attempt. This is about the minimum change I could come up with that uses build tags to switch between ROCm and CUDA, plus docs on how to build it. The existing Dockerfiles are updated so they don't break.

Please let me know, @jmorganca @mxyng @BruceMacD, if you'd prefer a different approach, or if you don't want this at all. Closes #738. I will post test results for GGML and GGUF files.


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-24 21:26:39 -05:00

Reference: github-starred/ollama#41588