[GH-ISSUE #7565] Linux ollama 0.4.0, 0.4.2, 0.4.5, 0.5.0 custom compile for AMD ROCm fails missing ggml_rocm in go compile #51329

Closed
opened 2026-04-28 19:29:25 -05:00 by GiteaMirror · 15 comments
Owner

Originally created by @ganakee on GitHub (Nov 8, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7565

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Report date: 2024-11-07

During a custom compile of ollama 0.4.0 on Linux (Pop!_OS 22.04) for AMD ROCm GPUs (AMD 6650 GPU), the initial compile works.

However, the subsequent go build step fails after about two minutes with exit status 1, reporting that the linker is unable to find ggml_rocm.

ROCm 6.0

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.4.0

GiteaMirror added the buildbug labels 2026-04-28 19:29:27 -05:00

@dhiltgen commented on GitHub (Nov 8, 2024):

Most likely solved by #7499

@stephensrmmartin commented on GitHub (Nov 9, 2024):

@dhiltgen I just tried your PR, and it did not resolve the issue from the OP.

Edit: I am on Arch. I tried this both manually and with the Arch repo's latest PKGBUILD for 0.4.0 and 0.4.1. In either case, it fails because ggml_rocm is not found during the build phase.

The lib does exist, so perhaps the build is not passing the correct library search path when linking?
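One way to test that hypothesis (a sketch only, not the project's actual build wiring; the OLLAMA_SRC variable and the search below are assumptions) is to locate the built library and construct the -L flag the linker would need:

```shell
# Hypothetical probe: find libggml_rocm.so under the source tree and
# build the -L flag the linker would need. OLLAMA_SRC defaults to the
# current directory; the variable and path layout are assumptions.
LIB="$(find "${OLLAMA_SRC:-.}" -name 'libggml_rocm.so' 2>/dev/null | head -n1)"
FLAGS=""
if [ -n "$LIB" ]; then
  FLAGS="-L$(dirname "$LIB")"
fi
echo "CGO_LDFLAGS would be: ${FLAGS:-<library not found>}"
# If found, one could then try:
#   CGO_LDFLAGS="$FLAGS" go build -tags rocm .
```

If the library turns up, prefixing the build with CGO_LDFLAGS pointing at that directory would show whether the failure is purely a search-path problem.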

@ganakee commented on GitHub (Nov 10, 2024):

The compile still fails even with the newly released 0.4.1.

I just tried it on 2024-11-10. It was not clear to me whether this release includes @dhiltgen's updates.

  1. I downloaded 0.4.1.
  2. I changed to the download directory and ran (for my AMD machine): ROCM_PATH=/opt/rocm CLBlast_DIR=/usr/lib/cmake/CLBlast AMDGPU_TARGETS="gfx1030" /usr/local/go/bin/go generate -tags rocm ./...
  3. The compile completes.
  4. Within the ollama directory (created above), I run (for my AMD machine): /usr/local/go/bin/go build -tags rocm .
  5. The go compile results in an error:
# github.com/ollama/ollama
/home/me/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.8.linux-amd64/pkg/tool/linux_amd64/link: running g++ failed: exit status 1
/usr/bin/ld: cannot find -lggml_rocm: No such file or directory
collect2: error: ld returned 1 exit status

Thus, the compile still fails.

I am not sure if it helps, but I poked around the initial compile directories (see result from Step 2 above).
I found:

~/ollama0.41/ollama/llama/build/linux-amd64$ ls
ggml-aarch64.rocm.o  ggml-cuda           ggml.rocm.o
ggml-alloc.rocm.o    ggml-cuda.rocm.o    runners
ggml-backend.rocm.o  ggml-quants.rocm.o  sgemm.rocm.o
@dhiltgen commented on GitHub (Nov 13, 2024):

@stephensrmmartin can you try again with the latest update to my branch for the PR?

If it still doesn't find ggml_rocm can you share the build logs leading up to that failure so I can see what the go build command looked like? Also share where it was located.

@stephensrmmartin commented on GitHub (Nov 14, 2024):

@dhiltgen Did a very simple barebones build:

go generate ./...
go build ./

Building takes place in /home/<single_word_username>/Builds/ollama (where ollama is checked out).

Using your latest branch, it successfully built. I did not merge it into dev/main, but built straight from your branch.

@stephensrmmartin commented on GitHub (Nov 14, 2024):

@dhiltgen

I assume the Arch packaging will need to change so that libggml_rocm.so is located in LIBDIR/ollama/runners, right?
Or is it always relative to the binary? Must it always look in ../lib/ollama/runners ?

Never mind, the Arch maintainer already made the necessary changes! Thanks, everyone.

@kaleocheng commented on GitHub (Nov 14, 2024):

I successfully built the changes in nixpkgs with https://github.com/NixOS/nixpkgs/pull/354969, but I noticed that the location of libggml_rocm.so has changed from dist/linux-amd64/lib/ to llama/build/linux-amd64/. This change causes the installPhase to fail because the current logic is:

  postInstall = lib.optionalString stdenv.hostPlatform.isLinux ''
    # Copy libggml_*.so and runners into the lib directory
    # Reference: https://github.com/ollama/ollama/blob/v0.4.1/llama/make/gpu.make#L90
    mkdir -p $out/lib
    cp -r dist/*/lib/* $out/lib/
  '';

Could you confirm this change in the directory structure?
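A quick way to check which layout a given checkout produced (a sketch; the two directories are the ones named in this comment, not taken from the build scripts):

```shell
# Probe both locations reported in this thread for the shared library.
FOUND=""
for d in llama/build/linux-amd64 dist/linux-amd64/lib; do
  if [ -e "$d/libggml_rocm.so" ]; then
    FOUND="$d"
    break
  fi
done
echo "libggml_rocm.so directory: ${FOUND:-not found}"
```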

@stephensrmmartin commented on GitHub (Nov 14, 2024):

Additionally, it still does not actually run once installed. This may be an Arch packaging issue, but Arch has it installed to

/usr/bin/ollama
/usr/lib/ollama/
---- libggml_rocm.so
---- runners/{cpu,cpu_avx,cpu_avx2,rocm}/ollama_llama_server

It throws an error about libggml_rocm.so not being found in the /tmp directory it creates.

Edit: When I manually add that .so into the /tmp/systemd-...ollama/tmp/ollama..../runners/rocm directory, it "runs", but immediately crashes when generating. I believe this is related to #7590 and, more likely, this: https://gitlab.archlinux.org/archlinux/packaging/packages/rocblas/-/issues/2

@rainbyte commented on GitHub (Nov 16, 2024):

I just installed the latest updates:

  • rocblas 6.2.2-2
  • ollama-rocm 0.4.2-1

And ran ollama with this command:

ollama serve

Then I got this error as before:

/tmp/ollama3240664242/runners/rocm/ollama_llama_server: error while loading shared libraries: libggml_rocm.so: cannot open shared object file: No such file or directory
time=2024-11-16T03:30:42.832-03:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: exit status 127"

To make it work I had to do this:

cp /usr/lib/ollama/libggml_rocm.so /tmp/ollama1482905245/runners/rocm/

Or this:

LD_LIBRARY_PATH=/usr/lib/ollama/ ollama serve

I'm using an RX 7900 XTX GPU.
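For the packaged service, the LD_LIBRARY_PATH workaround can be made persistent with a systemd drop-in. This is a hedged sketch: the service name ollama.service and the /usr/lib/ollama path are assumptions based on the Arch layout above, and DROPIN_DIR defaults to a local directory for a dry run (the real location is /etc/systemd/system/ollama.service.d, which needs root).

```shell
# Write a systemd drop-in exporting LD_LIBRARY_PATH for the ollama
# service. DROPIN_DIR defaults to a local dry-run directory; the real
# location would be /etc/systemd/system/ollama.service.d (needs root).
DROPIN_DIR="${DROPIN_DIR:-./ollama.service.d}"
mkdir -p "$DROPIN_DIR"
cat > "$DROPIN_DIR/rocm-libs.conf" <<'EOF'
[Service]
Environment=LD_LIBRARY_PATH=/usr/lib/ollama
EOF
echo "wrote $DROPIN_DIR/rocm-libs.conf"
# Then, as root: systemctl daemon-reload && systemctl restart ollama
```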

@ganakee commented on GitHub (Nov 17, 2024):

Report Date: 2024-11-17

The error still occurs.

I downloaded the 0.4.2 version from 2024-11-14.

I first run: ROCM_PATH=/opt/rocm CLBlast_DIR=/usr/lib/cmake/CLBlast AMDGPU_TARGETS="gfx1030" /usr/local/go/bin/go generate -tags rocm ./...

I then run the following, where the error occurs:

~/ollama-0.4.2/ollama$ /usr/local/go/bin/go build -tags rocm .
go: downloading golang.org/x/text v0.20.0
go: downloading golang.org/x/image v0.22.0
# github.com/ollama/ollama
/home/me/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.8.linux-amd64/pkg/tool/linux_amd64/link: running g++ failed: exit status 1
/usr/bin/ld: cannot find -lggml_rocm: No such file or directory
collect2: error: ld returned 1 exit status
@dhiltgen commented on GitHub (Nov 18, 2024):

I updated the PR over the weekend to try to improve its ability to find the libraries and get the paths set properly. If folks are still seeing problems at build or runtime with the latest commits on the PR, please let me know by commenting on the PR.

@ganakee commented on GitHub (Nov 26, 2024):

2024-11-26

I just ran the steps above with the newest 0.4.5. The ggml_rocm error still occurs.

/usr/local/go/bin/go build -tags rocm .
# github.com/ollama/ollama
/home/me/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.8.linux-amd64/pkg/tool/linux_amd64/link: running g++ failed: exit status 1
/usr/bin/ld: cannot find -lggml_rocm: No such file or directory
collect2: error: ld returned 1 exit status

I tried the PR but cannot get anything to download. I get an error saying the directory is not a repo.

@ganakee commented on GitHub (Nov 27, 2024):

Thanks @dhiltgen.

Quick Answer

I was able to build using the PR from about 2024-11-27.
I was also able to run ollama --version afterwards.
I ran some test queries using prior models.

Tips for Using the PR for Others Unfamiliar with gh CLI

I struggled with using the PR code. For others who may be unfamiliar with the gh command line and PRs:

  1. From a command line, run gh auth login (assuming gh is installed with sudo apt install gh).
  2. This prompts for authentication. I selected the web authentication option and followed the command-line instructions (including copying the token and logging into GitHub via the web browser).
  3. I created a target directory on my local system called ollama-pr and switched to this directory.
  4. I could not get the code to clone or download. I was NOT able to use the CODE <> shortcut of gh pr checkout 7499 on the PR page.
  5. Instead I used gh repo clone dhiltgen/ollama. I found this by looking at the PR page and clicking through the dhiltgen:make_targets link right under the page title, and then the CODE <> shortcut.
  6. Now the code is downloaded, but I still needed to build.
  7. For me (AMD 6650 GPU), I switched to the ollama subfolder.
  8. I made an edit to the ollama/version/version.go file to show this is a PR version. This step is optional but helpful.
  9. I switched back up to the ollama directory (cd ..) and ran ROCM_PATH=/opt/rocm CLBlast_DIR=/usr/lib/cmake/CLBlast AMDGPU_TARGETS="gfx1030" /usr/local/go/bin/go generate -tags rocm ./...
  10. After several minutes, when the compile completes, I ran /usr/local/go/bin/go build -tags rocm . (note the closing period).
  11. After a few minutes (this is where the build formerly failed, before @dhiltgen's helpful PR updates), and from within the ollama directory, I verified with ./ollama --version.
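The build steps above can be condensed into a small script. This is a sketch of this reporter's sequence, not an official build procedure: the Go path, ROCm path, and gfx1030 target are specific to that machine. It echoes each command and only executes when DO_RUN is set.

```shell
# Dry-run wrapper around the build sequence above. With DO_RUN unset it
# only prints the commands; set DO_RUN=1 to actually execute them.
GO="${GO:-/usr/local/go/bin/go}"
run() {
  echo "+ $*"
  if [ -n "$DO_RUN" ]; then "$@"; fi
}
run env ROCM_PATH=/opt/rocm CLBlast_DIR=/usr/lib/cmake/CLBlast \
    AMDGPU_TARGETS=gfx1030 "$GO" generate -tags rocm ./...
run "$GO" build -tags rocm .
run ./ollama --version
```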
@ganakee commented on GitHub (Dec 7, 2024):

On 2024-12-07, I tried 0.5.0 and the default build fails with the gguf error.

@dhiltgen commented on GitHub (Apr 9, 2025):

The new CMake-based build should cover this now.

Reference: github-starred/ollama#51329