[PR #9304] [MERGED] Update ROCm (6.3 linux, 6.2 windows) and CUDA v12.8 #44160

Closed
opened 2026-04-24 23:41:22 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/9304
Author: @dhiltgen
Created: 2/23/2025
Status: Merged
Merged: 2/25/2025
Merged by: @dhiltgen

Base: main ← Head: bump_cuda_rocm


📝 Commits (2)

  • 86ee220 Bump cuda and rocm versions
  • acfd343 Fix windows build script

📊 Changes

6 files changed (+105 additions, -65 deletions)

View changed files

📝 .github/workflows/release.yaml (+4 -4)
📝 .github/workflows/test.yaml (+1 -1)
📝 Dockerfile (+17 -13)
📝 scripts/build_docker.sh (+1 -1)
📝 scripts/build_linux.sh (+29 -3)
📝 scripts/build_windows.ps1 (+53 -43)

📄 Description

Includes some local build script fixes as well.

Marking draft until I finish testing and can verify whether ROCm memory metrics are now reliable on Windows, which would allow us to toggle this check (discover/amd_windows.go#L110-L111) to support scheduling multiple models.
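
The toggle described above can be sketched as follows. This is a hypothetical illustration only, not the actual ollama code: the `gpuInfo` type and `usableMemory` function are invented names, and the real check lives in `discover/amd_windows.go`. The idea is that when a driver's free-memory metric is known to be unreliable, the scheduler falls back to a conservative value rather than trusting it for concurrent model placement.

```go
package main

import "fmt"

// gpuInfo is a hypothetical stand-in for the GPU discovery record.
type gpuInfo struct {
	totalMemory uint64 // bytes reported by the driver
	freeMemory  uint64 // bytes the driver claims are free
}

// usableMemory returns the amount of memory the scheduler may plan against.
// When metricsReliable is false, it ignores the (untrustworthy) free-memory
// reading and reports total memory instead, which downstream logic treats as
// "one model owns the whole GPU" rather than packing multiple models in.
func usableMemory(g gpuInfo, metricsReliable bool) uint64 {
	if metricsReliable {
		return g.freeMemory
	}
	// Conservative fallback: the free metric cannot be trusted.
	return g.totalMemory
}

func main() {
	g := gpuInfo{totalMemory: 16 << 30, freeMemory: 12 << 30}
	fmt.Println(usableMemory(g, true))  // trusts the driver's free reading
	fmt.Println(usableMemory(g, false)) // falls back to total memory
}
```

Flipping the reliability flag is then a one-line change once testing confirms the driver reports accurate numbers, which is what the draft status was gating on.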


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-24 23:41:22 -05:00

Reference: github-starred/ollama#44160