[GH-ISSUE #9970] RX6600 detected but not used (Linux) #32290

Closed
opened 2026-04-22 13:25:00 -05:00 by GiteaMirror · 1 comment

Originally created by @LevitatingBusinessMan on GitHub (Mar 25, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9970

### What is the issue?

I run `HSA_OVERRIDE_GFX_VERSION=10.3.0 ollama serve` and the GPU is seemingly detected, but it still ends up loading a CPU backend.

I am using openSUSE and the official upstream ROCm packages.
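
For reference, the server was started roughly like this; the `OLLAMA_DEBUG=1` part is an assumption added here to account for the DEBUG-level lines in the log below:

```shell
# The RX 6600 is gfx1032, which ROCm does not officially support, so the
# override forces the gfx1030 (10.3.0) code path.
# OLLAMA_DEBUG=1 (assumed) enables the DEBUG-level log lines quoted below.
OLLAMA_DEBUG=1 HSA_OVERRIDE_GFX_VERSION=10.3.0 ollama serve
```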

### Relevant log output

I attached the full log, but here is the relevant part:

```shell
time=2025-03-25T04:17:20.113+01:00 level=DEBUG source=server.go:343 msg="adding gpu dependency paths" paths=[/opt/rocm/lib]
time=2025-03-25T04:17:20.113+01:00 level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/bin/ollama runner --model /home/rein/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc --ctx-size 8192 --batch-size 512 --n-gpu-layers 29 --verbose --threads 12 --parallel 4 --port 40801"
time=2025-03-25T04:17:20.113+01:00 level=DEBUG source=server.go:423 msg=subprocess environment="[HSA_OVERRIDE_GFX_VERSION=10.3.0 PATH=/home/rein/go/bin:/home/rein/.local/bin:/home/rein/scripts:/home/rein/.cargo/bin:/home/rein/bin:/home/rein/.config/emacs/bin:/home/rein/.gem/ruby/3.0.0/bin:/usr/local/bin:/bin:/usr/bin LD_LIBRARY_PATH=/opt/rocm/lib:/usr/lib/ollama ROCR_VISIBLE_DEVICES=0]"
time=2025-03-25T04:17:20.113+01:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-25T04:17:20.113+01:00 level=INFO source=server.go:585 msg="waiting for llama runner to start responding"
time=2025-03-25T04:17:20.113+01:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error"
time=2025-03-25T04:17:20.120+01:00 level=INFO source=runner.go:931 msg="starting go runner"
time=2025-03-25T04:17:20.120+01:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=/opt/rocm/lib
time=2025-03-25T04:17:20.120+01:00 level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=/usr/lib/ollama
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-alderlake.so score: 0
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-haswell.so score: 55
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-icelake.so score: 0
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-sandybridge.so score: 20
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-skylakex.so score: 0
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.
```

I am guessing this could have something to do with it:
`time=2025-03-25T04:17:20.120+01:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=/opt/rocm/lib`

[log.txt](https://github.com/user-attachments/files/19441190/out.txt)
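
Based on the log, the runner only scans `/usr/lib/ollama` for ggml backends and explicitly skips `/opt/rocm/lib`, so a quick sanity check (a sketch; exact library names vary by version) is whether any ROCm/HIP backend library is present there at all:

```shell
# The runner loads ggml backends only from /usr/lib/ollama (see the
# "ggml backend load all from path" line above). List any ROCm/HIP
# backend libraries installed there:
ls /usr/lib/ollama | grep -iE 'rocm|hip'
# No output means only the CPU backends (libggml-cpu-*.so) are installed,
# which matches the behaviour in the log.
```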


### OS

Linux

### GPU

AMD

### CPU

AMD

### Ollama version

0.6.0
GiteaMirror added the bug label 2026-04-22 13:25:00 -05:00

@LevitatingBusinessMan commented on GitHub (Mar 25, 2025):

I forgot to install the ROCm lib tarball.
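
For anyone hitting the same symptom: the missing piece is the extra ROCm backend tarball from Ollama's manual Linux install instructions. A sketch of that step, with the download URL and target path taken from the upstream docs (verify them against the current release):

```shell
# Download the AMD ROCm backend tarball and extract it to /usr, which
# places the ROCm ggml backend under /usr/lib/ollama, where the runner
# looks for backends (per the log above).
curl -L https://ollama.com/download/ollama-linux-amd64-rocm.tgz \
  -o ollama-linux-amd64-rocm.tgz
sudo tar -C /usr -xzf ollama-linux-amd64-rocm.tgz
```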

Reference: github-starred/ollama#32290