[GH-ISSUE #9812] AMD RX9070/9070XT support #68478

Closed
opened 2026-05-04 14:06:52 -05:00 by GiteaMirror · 27 comments

Originally created by @9suns on GitHub (Mar 17, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9812

It seems that the RX 9070 and 9070 XT have been supported since ROCm 6.3.1 (https://github.com/ROCm/ROCm/issues/4485), with LLVM targets gfx1200 and gfx1201 (https://github.com/ROCm/ROCm/pull/4162); please help to update the ROCm support.
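
For anyone checking which LLVM target their card reports on Linux, a quick sketch (assumes the rocminfo utility from ROCm is installed):

  rocminfo | grep -oE 'gfx[0-9a-f]+' | sort -u
  # an RX 9070 should report gfx1200; an RX 9070 XT reports gfx1201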

GiteaMirror added the feature request label 2026-05-04 14:06:52 -05:00

@GDsouza commented on GitHub (Mar 18, 2025):

Relevant PR in llama.cpp - https://github.com/ggml-org/llama.cpp/pull/12372


@hnedelciuc commented on GitHub (Mar 24, 2025):

This is a much-awaited feature. Thanks.


@codeliger commented on GitHub (Mar 27, 2025):

https://github.com/ggml-org/llama.cpp/pull/12372 was merged


@prurigro commented on GitHub (Mar 27, 2025):

Just tested the latest on the main branch and it works perfectly. I think this issue can be closed :)


@codeliger commented on GitHub (Mar 27, 2025):

I assume you changed the commit hash of the vendored llama.cpp and rebuilt it manually?

I'm trying to do that now, but I can't seem to get my card detected.
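
For reference, the vendored llama.cpp commit is pinned in Makefile.sync; a quick way to see the current pin (assuming you're in the ollama checkout):

  grep FETCH_HEAD Makefile.sync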


@prurigro commented on GitHub (Mar 27, 2025):

I didn't need to change the llama.cpp hash with the main branch, but I did need to include gfx1201 in -DAMDGPU_TARGETS. Here are my full build steps (which also include gfx1200 for the 9070):

  cmake -B build -G Ninja \
    -DCMAKE_CUDA_ARCHITECTURES="" \
    -DAMDGPU_TARGETS="gfx900;gfx940;gfx941;gfx942;gfx1010;gfx1012;gfx1030;gfx1100;gfx1101;gfx1102;gfx1200;gfx1201;gfx906:xnack-;gfx908:xnack-;gfx90a:xnack+;gfx90a:xnack-" \
    -DCMAKE_INSTALL_PREFIX=/usr
  cmake --build build
  go build .
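
A quick way to confirm the card is detected after building (a sketch; it greps the startup log for the amdgpu line shown later in this thread, though the exact log format may vary by version):

  ./ollama serve 2>&1 | grep -i amdgpu
  # expect something like: msg="amdgpu is supported" gpu_type=gfx1201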

@codeliger commented on GitHub (Mar 27, 2025):

Thanks @prurigro, your tip helped me with debugging.

Here is what worked for me on Arch Linux with my 9070 XT:

Update the Linux kernel to 6.14.*

The Linux kernel ships the amdgpu graphics driver, which only gets updated when you upgrade the kernel. You may be able to use the non-free, closed-source drivers without updating the kernel, but I haven't tried it. According to search results, you need kernel 6.14 for an amdgpu driver with proper 9070/9070 XT support.
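
A quick way to check which kernel driver is currently bound to the card (needs the pciutils package):

  lspci -k | grep -EA3 'VGA|Display'
  # the "Kernel driver in use:" line should read amdgpu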

  • Edit /etc/pacman.conf and uncomment this section:

[core-testing]
Include = /etc/pacman.d/mirrorlist

  • Save, then refresh the package databases:

sudo pacman -Sy

  • Update the Linux kernel to 6.14.* so that the amdgpu driver has the latest 9070 XT support:

sudo pacman -S linux linux-headers

  • Ensure your boot manager detects the new kernel and adds it to the boot menu.
  • Reboot
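
After rebooting, confirm you're actually running the new kernel:

  uname -r
  # should print a 6.14.x version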

Clone Ollama

git clone git@github.com:ollama/ollama.git
cd ollama

Point the vendored llama.cpp at a commit that supports your card

Update Makefile.sync to a commit with support (for the 9070 XT):

FETCH_HEAD=5dec47dcd411fdf815a3708fd6194e2b13d19006

make -f Makefile.sync checkout
make -f Makefile.sync sync

Note: When running the sync command, some of the patches will fail. They seem to be CUDA-related, so I ignored the failures and compiled without them.

Compile and install Ollama's ROCm-specific libraries

  • cmake --preset "ROCm 6" -B build

NOTE: Use /usr/local because the Linux Ollama installer installs there.

cmake -B build \
-DCMAKE_CUDA_ARCHITECTURES="" \
-DAMDGPU_TARGETS="gfx1201" \
-DCMAKE_INSTALL_PREFIX=/usr/local

NOTE: I am manually overriding -DAMDGPU_TARGETS to build only for my GPU (gfx1201), which makes the build much faster.

NOTE: You do not need to specify -DAMDGPU_TARGETS if you used the preset in the first cmake step; the preset includes all ROCm 6 GPUs.

Now run cmake --build build, then:

cd build
make install

Note: Because you used the /usr/local install prefix, this installs the libraries to the correct location (/usr/local/lib/ollama/*), since Ollama on Linux looks for its libs at ../lib/ollama relative to the executable at /usr/local/bin/ollama.
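
To sanity-check the install locations afterwards:

  ls /usr/local/bin/ollama /usr/local/lib/ollama/
  # the ROCm runner libraries should be listed under /usr/local/lib/ollama/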

Build and install the latest version of Ollama

cd ..
go build .
sudo systemctl stop ollama
sudo cp ./ollama /usr/local/bin/ollama
sudo systemctl start ollama
sudo systemctl status ollama
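
To watch for the GPU-detection line in the service logs (assuming the default systemd unit name):

  journalctl -u ollama -f | grep -i amdgpu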

Result:

...
msg="amdgpu is supported" gpu=GPU-********* gpu_type=gfx1201
...

@prurigro commented on GitHub (Mar 27, 2025):

If you're also using Arch, here's a modified official PKGBUILD that will work on the 9070 and 9070 XT (note that I'm also using Linux 6.14):

# Maintainer: Alexander F. Rødseth <xyproto@archlinux.org>
# Maintainer: Sven-Hendrik Haase <svenstaro@archlinux.org>
# Contributor: Steven Allen <steven@stebalien.com>
# Contributor: Matt Harrison <matt@harrison.us.com>
# Contributor: Kainoa Kanter <kainoa@t1c.dev>

pkgbase=ollama
pkgname=(ollama ollama-rocm ollama-docs)
pkgver=0.6.2
pkgrel=1
pkgdesc='Create, run and share large language models (LLMs)'
arch=(x86_64)
url='https://github.com/ollama/ollama'
license=(MIT)
options=('!lto')
makedepends=(cmake ninja git go hipblas clblast)
source=(git+https://github.com/ollama/ollama
        ollama-ld.conf
        ollama.service
        sysusers.conf
        tmpfiles.d)
b2sums=('SKIP'
        '121a7854b5a7ffb60226aaf22eed1f56311ab7d0a5630579525211d5c096040edbcfd2608169a4b6d83e8b4e4855dbb22f8ebf3d52de78a34ea3d4631b7eff36'
        '031e0809a7f564de87017401c83956d43ac29bd0e988b250585af728b952a27d139b3cad0ab1e43750e2cd3b617287d3b81efc4a70ddd61709127f68bd15eabd'
        '68622ac2e20c1d4f9741c57d2567695ec7b5204ab43356d164483cd3bc9da79fad72489bb33c8a17c2e5cb3b142353ed5f466ce857b0f46965426d16fb388632'
        'e8f2b19e2474f30a4f984b45787950012668bf0acb5ad1ebb25cd9776925ab4a6aa927f8131ed53e35b1c71b32c504c700fe5b5145ecd25c7a8284373bb951ed')

build() {
  export CGO_CPPFLAGS="${CPPFLAGS}"
  export CGO_CFLAGS="${CFLAGS}"
  export CGO_CXXFLAGS="${CXXFLAGS}"
  export CGO_LDFLAGS="${LDFLAGS}"
  export GOPATH="${srcdir}"
  export GOFLAGS="-buildmode=pie -mod=readonly -modcacherw '-ldflags=-linkmode=external -compressdwarf=false -X=github.com/ollama/ollama/version.Version=$pkgver -X=github.com/ollama/ollama/server.mode=release'"

  cd ollama

  # Remove the runtime dependencies from installation so CMake doesn't install
  # lots of system dependencies into the target path.
  sed -i 's/PRE_INCLUDE_REGEXES.*/PRE_INCLUDE_REGEXES = ""/' CMakeLists.txt

  # Sync GPU targets from CMakePresets.json
  cmake -B build -G Ninja \
    -DCMAKE_CUDA_ARCHITECTURES="" \
    -DAMDGPU_TARGETS="gfx900;gfx940;gfx941;gfx942;gfx1010;gfx1012;gfx1030;gfx1100;gfx1101;gfx1102;gfx1200;gfx1201;gfx906:xnack-;gfx908:xnack-;gfx90a:xnack+;gfx90a:xnack-" \
    -DCMAKE_INSTALL_PREFIX=/usr
  cmake --build build
  go build .
}

check() {
  ollama/ollama --version > /dev/null
  cd ollama
  go test .
}

package_ollama() {
  DESTDIR="$pkgdir" cmake --install ollama/build --component CPU

  install -Dm755 $pkgname/ollama "$pkgdir/usr/bin/ollama"
  install -dm755 "$pkgdir/var/lib/ollama"
  install -Dm644 ollama.service "$pkgdir/usr/lib/systemd/system/ollama.service"
  install -Dm644 sysusers.conf "$pkgdir/usr/lib/sysusers.d/ollama.conf"
  install -Dm644 tmpfiles.d "$pkgdir/usr/lib/tmpfiles.d/ollama.conf"
  install -Dm644 ollama/LICENSE "$pkgdir/usr/share/licenses/$pkgname/LICENSE"

  ln -s /var/lib/ollama "$pkgdir/usr/share/ollama"
}

package_ollama-rocm() {
  pkgdesc='Create, run and share large language models (LLMs) with ROCm'
  depends+=(ollama hipblas)

  DESTDIR="$pkgdir" cmake --install ollama/build --component HIP
  rm -rf "$pkgdir"/usr/lib/ollama/rocm/rocblas/library
}

package_ollama-docs() {
  pkgdesc='Documentation for Ollama'

  install -d "$pkgdir/usr/share/doc"
  cp -r ollama/docs "$pkgdir/usr/share/doc/ollama"
  install -Dm644 ollama/LICENSE "$pkgdir/usr/share/licenses/$pkgname/LICENSE"
}

NOTE: I'm using the current pkgver because I assume the next ollama release will include the required updates, and this way it'll upgrade seamlessly.
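
To build and install from this PKGBUILD, the standard Arch workflow applies; note that the non-git entries in source= (ollama-ld.conf, ollama.service, sysusers.conf, tmpfiles.d) must sit next to the PKGBUILD, so grab them from the official package source first:

  makepkg -si
  # builds ollama, ollama-rocm and ollama-docs, then installs them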

EDIT: Thanks @hnedelciuc for pointing out that I swapped the "9" and "7" for the two card models; I edited the above to fix them.


@hnedelciuc commented on GitHub (Mar 28, 2025):

> If you're also using arch, here's a modified official PKGBUILD that will work on the 7090 and 7090xt (note that I'm also using linux 6.14):
>
> [full PKGBUILD quoted; identical to the one in the previous comment]
>
> NOTE: I'm using the current pkgver because I assume the next ollama release will include the required updates, and this way it'll upgrade seamlessly.

AMD doesn't have a 7090 XT or a 7090 GPU. I assume you meant the 9070 and 9070 XT? AMD changed the naming scheme for its latest Radeon GPUs, which everyone finds a bit confusing. The previous-generation 7900 series featured the 7900 XTX, 7900 XT, 7900 GRE, and 7900M (mobile), but those probably already worked well with Ollama, so I don't think you're referring to them.


@Khameleon21 commented on GitHub (Mar 29, 2025):

> If you're also using arch, here's a modified official PKGBUILD that will work on the 9070 and 9070xt (note that I'm also using linux 6.14):
>
> [full PKGBUILD quoted; identical to the corrected version above]
>
> NOTE: I'm using the current pkgver because I assume the next ollama release will include the required updates, and this way it'll upgrade seamlessly.
>
> EDIT: Thanks @hnedelciuc for pointing out that I swapped the "9" and "7" for the two card models; I edited the above to fix them.

I get the error "==> ERROR: ollama-ld.conf was not found in the build directory and is not a URL."
If I comment out that reference it builds, but then things don't work properly.


@prurigro commented on GitHub (Mar 29, 2025):

Hmm, it's probably because there've been more commits since then. I used ead27aa9fe85b4a1e1c434080d5e005e3cd68a16.

If it makes life easier, here are the packages pre-built:

http://96.126.108.7:90/ollama-0.6.2-1-x86_64.pkg.tar
http://96.126.108.7:90/ollama-rocm-0.6.2-1-x86_64.pkg.tar
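
If you want the PKGBUILD to build that exact commit rather than whatever main points to at build time, a commit fragment on the git source is the usual approach (a sketch; the hash is the one above):

  # pin the git source to the known-good commit
  source=(git+https://github.com/ollama/ollama#commit=ead27aa9fe85b4a1e1c434080d5e005e3cd68a16
          ollama-ld.conf
          ollama.service
          sysusers.conf
          tmpfiles.d)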


@Khameleon21 commented on GitHub (Mar 29, 2025):

> Hmm, it's probably because there've been more commits since then. I used ead27aa9fe85b4a1e1c434080d5e005e3cd68a16.
>
> If it makes life easier, here are the packages pre-built:
>
> http://96.126.108.7:90/ollama-0.6.2-1-x86_64.pkg.tar
> http://96.126.108.7:90/ollama-rocm-0.6.2-1-x86_64.pkg.tar

Thanks a lot!


@prurigro commented on GitHub (Apr 4, 2025):

The latest Arch release uses a version that supports the 9070/9070 XT, but the package isn't built with support enabled. I've opened an issue here: https://gitlab.archlinux.org/archlinux/packaging/packages/ollama-rocm/-/issues/4 -- in the meantime, on the chance that it's still helpful, here are precompiled updates (the tweak I mention in the issue also works if you build it yourself):

http://96.126.108.7:90/ollama-0.6.4-1-x86_64.pkg.tar
http://96.126.108.7:90/ollama-rocm-0.6.4-1-x86_64.pkg.tar

This issue can definitely be closed now. EDIT: It's possible Windows still needs support, so don't listen to me :)


@leacar21 commented on GitHub (Apr 6, 2025):

When is support for the 9070 and 9070 XT expected to be available for Windows?


@scotbud123 commented on GitHub (Apr 8, 2025):

> When is support for the 9070 and 9070 XT expected to be available for Windows?

Yeah, just installed Msty on my new build and was sad to see that Ollama has no Windows support for my 9070 XT.


@hnedelciuc commented on GitHub (Apr 9, 2025):

> > When is support for the 9070 and 9070 XT expected to be available for Windows?
>
> Yeah, just installed Msty on my new build and was sad to see that Ollama has no Windows support for my 9070 XT.

It seems most users in the Ollama community use Linux for AI work, hence they just assume "we can definitely close the issue". Windows users are just being swept under the rug, it seems.


@prurigro commented on GitHub (Apr 10, 2025):

@hnedelciuc Yeah, my bad, I assumed support on Linux meant support across the board. That said, is it possible that Msty has the same issue as the Arch package, where support was possible but the build script wasn't updated to include gfx1200 and gfx1201?
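
One way to check which targets a given build actually shipped kernels for is to list the bundled rocBLAS library files (path taken from the PKGBUILD above; other distributions and apps may place them elsewhere):

  ls /usr/lib/ollama/rocm/rocblas/library 2>/dev/null | grep -oE 'gfx[0-9a-f]+' | sort -u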


@SnowLeopard71 commented on GitHub (Apr 11, 2025):

> When is support for the 9070 and 9070 XT expected to be available for Windows?

My guess is shortly after AMD finally releases a HIP SDK that includes ROCm 6.3.1.
The latest HIP SDK only includes ROCm 6.2, which does not have any files for gfx1200 and gfx1201.


@leacar21 commented on GitHub (Apr 11, 2025):

Thanks @SnowLeopard71. I see that a new version of HIP has just been released: ROCm HIP 6.4.0 (https://github.com/ROCm/hip/releases/tag/rocm-6.4.0). At least one of the commits shows this regarding gfx1200 and gfx1201:

[Image: commit screenshot showing gfx1200 and gfx1201 (https://github.com/user-attachments/assets/fb7916d9-3ef5-46bf-a4b9-71c1a2867541)]


@SnowLeopard71 commented on GitHub (Apr 11, 2025):

> Thanks @SnowLeopard71. I see that a new version of HIP has just been released: ROCm HIP 6.4.0. At least one of the commits shows this regarding gfx1200 and gfx1201:

I was referring to https://www.amd.com/en/developer/resources/rocm-hub/hip-sdk.html
which I found from the main ROCm page https://rocm.docs.amd.com/en/latest/what-is-rocm.html

I also tried unsuccessfully to get the 9070 recognized under WSL2, but the Adrenalin driver doesn't yet allow these cards to work; per the release notes, only the 7000 series is supported.


@9suns commented on GitHub (Apr 13, 2025):

I tested on my workstation: my gfx1200 is detected and works with the latest version, 0.6.5, and ROCm 6.4.0.

Note that I also tested version 0.6.4, and it did not work.

Thanks to the contributors, but I think I will keep this issue open, pending the AMD HIP/ROCm update for Windows.

Thanks.


@BloodyIron commented on GitHub (Jun 26, 2025):

Time to close this thread then?


@romain-hebert commented on GitHub (Jun 26, 2025):

> Time to close this thread then?

Still no Windows support; waiting on HIP SDK binaries.


@sudomateo commented on GitHub (Jun 26, 2025):

I'm of the opinion that we should close #9633 but keep this issue open until a new HIP SDK is released for Windows. This is the issue people are going to find when searching.

Looks like a new HIP SDK is coming soon according to https://github.com/ROCm/ROCm/issues/4934#issuecomment-2981015134.

> In the meantime, you can try TheRock (https://github.com/ROCm/TheRock), which allows users to build ROCm/HIP on native Windows for the 9000 series. The project is under active development and currently supports a subset of the complete ROCm component list. For more information, see https://github.com/ROCm/TheRock/blob/main/docs/development/windows_support.md.


@dhiltgen commented on GitHub (Jul 5, 2025):

Linux support has been merged for a while. Windows support is still waiting on the ROCm 6.4 release for Windows. PR #10676 can be unblocked once that's available. We're tracking Windows support in #10430.


@jclsn commented on GitHub (Oct 18, 2025):

@dhiltgen It doesn't work for me on Linux. What do I need to do? The GPU is not recognized at all. I tried adding Environment="HSA_OVERRIDE_GFX_VERSION=12.0.1"
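
For reference, a systemd drop-in is the usual way to set such a variable for the service (a sketch, assuming the unit is named ollama):

  sudo systemctl edit ollama
  # in the editor that opens, add:
  #   [Service]
  #   Environment="HSA_OVERRIDE_GFX_VERSION=12.0.1"
  sudo systemctl restart ollama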

Okt 18 22:37:39 precision5810 systemd[1]: Started Ollama Service.
Okt 18 22:37:39 precision5810 ollama[7864]: time=2025-10-18T22:37:39.277+02:00 level=INFO source=routes.go:1511 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION:gfx1201 HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/var/lib/ollama OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Okt 18 22:37:39 precision5810 ollama[7864]: time=2025-10-18T22:37:39.278+02:00 level=INFO source=images.go:522 msg="total blobs: 15"
Okt 18 22:37:39 precision5810 ollama[7864]: time=2025-10-18T22:37:39.278+02:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
Okt 18 22:37:39 precision5810 ollama[7864]: time=2025-10-18T22:37:39.278+02:00 level=INFO source=routes.go:1564 msg="Listening on [::]:11434 (version 0.12.6)"
Okt 18 22:37:39 precision5810 ollama[7864]: time=2025-10-18T22:37:39.278+02:00 level=INFO source=runner.go:80 msg="discovering available GPUs..."
Okt 18 22:37:39 precision5810 ollama[7864]: time=2025-10-18T22:37:39.307+02:00 level=INFO source=types.go:129 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="62.7 GiB" available="57.0 GiB"
Okt 18 22:37:39 precision5810 ollama[7864]: time=2025-10-18T22:37:39.307+02:00 level=INFO source=routes.go:1605 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"

@jclsn commented on GitHub (Oct 18, 2025):

Seems like I had to install everything ROCm-related and then reinstall. It works now.

Reference: github-starred/ollama#68478