[GH-ISSUE #11757] building Linux ROCm from development.md no longer detects HIP for ROCm #54303

Closed
opened 2026-04-29 05:38:40 -05:00 by GiteaMirror · 3 comments

Originally created by @codeliger on GitHub (Aug 6, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11757

What is the issue?

HIP/ROCm is not detected when building following the instructions in development.md.

Card: 9070 XT (gfx1201)

on tag 0.11.3:

make -f Makefile.sync clean checkout apply-patches sync
git -C llama/vendor fetch
remote: Enumerating objects: 17, done.
remote: Counting objects: 100% (13/13), done.
remote: Compressing objects: 100% (9/9), done.
remote: Total 17 (delta 5), reused 4 (delta 4), pack-reused 4 (from 3)
Unpacking objects: 100% (17/17), 48.36 KiB | 2.54 MiB/s, done.
From https://github.com/ggerganov/llama.cpp
   0d883154..3db4da56  master     -> origin/master
 * [new tag]           b6103      -> b6103
 * [new tag]           b6102      -> b6102
git -C llama/vendor checkout -f de4c07f93783a1a96456a44dc16b9db538ee1618
Warning: you are leaving 25 commits behind, not connected to
any of your branches:

  e5cc7535 Disable ggml-blas on macos v13 and older
  db945977 cuda: disable graph compat check for OP_ADD
  07987bca MXFP4
  5bccd9d3 BF16 macos version guard
 ... and 21 more.

If you want to keep them by creating a new branch, this may be a good time
to do so with:

 git branch <new-branch-name> e5cc7535

HEAD is now at de4c07f9 clip : cap max image size 1024 for qwen vl model (#13478)
rm -f llama/patches/.*.patched
make: Nothing to be done for 'checkout'.
Applying: ggml-backend: malloc and free using the same compiler
Applying: pretokenizer
Applying: embeddings
Applying: clip-unicode
Applying: solar-pro
Applying: fix deepseek deseret regex
Applying: maintain ordering for rules for grammar
Applying: ensure KV cache is fully defragmented
Applying: sort devices by score
Applying: add phony target ggml-cpu for all cpu variants
Applying: remove amx
Applying: fix string arr kv loading
Applying: ollama debug tensor
Applying: add ollama vocab for grammar support
Applying: add argsort and cuda copy for i32
Applying: graph memory reporting on failure
Applying: ggml: Export GPU UUIDs
Applying: temporary prevent rocm+cuda mixed loading
Applying: metal : add mean kernel (#14267)
Applying: CUDA: add mean operation (#14313)
Applying: Enable CUDA Graphs for gemma3n.
Applying: BF16 macos version guard
Applying: MXFP4
Applying: cuda: disable graph compat check for OP_ADD
Applying: Disable ggml-blas on macos v13 and older
rsync -arvzc -f "merge llama/llama.cpp/.rsync-filter" llama/vendor/ llama/llama.cpp
sending incremental file list
common/
src/
tools/mtmd/

sent 3,027 bytes  received 80 bytes  6,214.00 bytes/sec
total size is 3,378,487  speedup is 1,087.38
sed -e 's|@FETCH_HEAD@|de4c07f93783a1a96456a44dc16b9db538ee1618|' <llama/build-info.cpp.in >llama/build-info.cpp
rsync -arvzc -f "merge ml/backend/ggml/ggml/.rsync-filter" llama/vendor/ggml/ ml/backend/ggml/ggml
sending incremental file list
include/
src/
src/ggml-blas/
src/ggml-cpu/
src/ggml-cuda/
src/ggml-metal/

sent 13,732 bytes  received 140 bytes  27,744.00 bytes/sec
total size is 4,474,392  speedup is 322.55
go generate ./ml/backend/ggml/ggml/src/ggml-metal

New command and output:
cmake -B build
-- ccache found, compilation results will be cached. Disable with GGML_CCACHE=OFF.
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- Including CPU backend
-- x86 detected
-- Adding CPU backend variant ggml-cpu-x64:  
-- x86 detected
-- Adding CPU backend variant ggml-cpu-sse42: -msse4.2 GGML_SSE42
-- x86 detected
-- Adding CPU backend variant ggml-cpu-sandybridge: -msse4.2;-mavx GGML_SSE42;GGML_AVX
-- x86 detected
-- Adding CPU backend variant ggml-cpu-haswell: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2 GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2
-- x86 detected
-- Adding CPU backend variant ggml-cpu-skylakex: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512
-- x86 detected
-- Adding CPU backend variant ggml-cpu-icelake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw;-mavx512vbmi;-mavx512vnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512;GGML_AVX512_VBMI;GGML_AVX512_VNNI
-- x86 detected
-- Adding CPU backend variant ggml-cpu-alderlake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavxvnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX_VNNI
-- Configuring done (0.0s)
-- Generating done (0.0s)
-- Build files have been written to: /home/user/code/ollama/build
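
The configure output above registers only CPU backend variants; no HIP/ROCm backend is ever included, and CMake falls back to a CPU-only build without any error. A hedged way to force the problem into the open, using standard CMake HIP variables rather than anything ollama-specific (paths assume a default /opt/rocm layout):

```sh
# Point CMake's HIP language support at the ROCm clang driver explicitly.
# If the HIP toolchain is incomplete, configuration should now fail with a
# visible error (as in the comments below) instead of quietly skipping HIP.
cmake -B build \
      -DCMAKE_HIP_PLATFORM=amd \
      -DCMAKE_HIP_COMPILER=/opt/rocm/llvm/bin/amdclang++
```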

Relevant log output

yay -Qs rocm
local/hsa-rocr 6.4.1-2
    HSA Runtime API and runtime for ROCm
local/rocm-core 6.4.1-1
    AMD ROCm core package (version files)
local/rocm-device-libs 6.4.1-1
    AMD specific device-side language runtime libraries
local/rocm-llvm 6.4.1-1
    Radeon Open Compute - LLVM toolchain (llvm, clang, lld)
local/rocm-opencl-runtime 6.4.1-1
    OpenCL implementation for AMD
local/rocm-smi-lib 6.4.1-1
    ROCm System Management Interface Library
local/rocminfo 6.4.1-1
    ROCm Application for Reporting System Info




Aug 06 16:28:18 user systemd[1]: Started Ollama Service.
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.847-04:00 level=INFO source=routes.go:1297 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:8164 OLLAMA_DEBUG:DEBUG OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE:q8_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/opt/ollama_models/ OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.870-04:00 level=INFO source=images.go:477 msg="total blobs: 124"
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.871-04:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
Aug 06 16:28:18 user ollama[182539]: [GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
Aug 06 16:28:18 user ollama[182539]: [GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
Aug 06 16:28:18 user ollama[182539]:  - using env:        export GIN_MODE=release
Aug 06 16:28:18 user ollama[182539]:  - using code:        gin.SetMode(gin.ReleaseMode)
Aug 06 16:28:18 user ollama[182539]: [GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
Aug 06 16:28:18 user ollama[182539]: [GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
Aug 06 16:28:18 user ollama[182539]: [GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func3 (5 handlers)
Aug 06 16:28:18 user ollama[182539]: [GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func4 (5 handlers)
Aug 06 16:28:18 user ollama[182539]: [GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
Aug 06 16:28:18 user ollama[182539]: [GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
Aug 06 16:28:18 user ollama[182539]: [GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
Aug 06 16:28:18 user ollama[182539]: [GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
Aug 06 16:28:18 user ollama[182539]: [GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
Aug 06 16:28:18 user ollama[182539]: [GIN-debug] DELETE /api/delete               --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
Aug 06 16:28:18 user ollama[182539]: [GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
Aug 06 16:28:18 user ollama[182539]: [GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
Aug 06 16:28:18 user ollama[182539]: [GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
Aug 06 16:28:18 user ollama[182539]: [GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
Aug 06 16:28:18 user ollama[182539]: [GIN-debug] GET    /api/ps                   --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
Aug 06 16:28:18 user ollama[182539]: [GIN-debug] POST   /api/generate             --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
Aug 06 16:28:18 user ollama[182539]: [GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
Aug 06 16:28:18 user ollama[182539]: [GIN-debug] POST   /api/embed                --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
Aug 06 16:28:18 user ollama[182539]: [GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
Aug 06 16:28:18 user ollama[182539]: [GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
Aug 06 16:28:18 user ollama[182539]: [GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
Aug 06 16:28:18 user ollama[182539]: [GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
Aug 06 16:28:18 user ollama[182539]: [GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
Aug 06 16:28:18 user ollama[182539]: [GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.872-04:00 level=INFO source=routes.go:1350 msg="Listening on [::]:11434 (version 0.0.0)"
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.872-04:00 level=DEBUG source=sched.go:106 msg="starting llm scheduler"
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.872-04:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.873-04:00 level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.873-04:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcuda.so*
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.873-04:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/usr/local/lib/ollama/libcuda.so* /libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.903-04:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[]
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.903-04:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcudart.so*
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.903-04:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/usr/local/lib/ollama/libcudart.so* /libcudart.so* /usr/local/lib/ollama/cuda_v*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]"
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.913-04:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[]
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.913-04:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/download/linux-drivers.html" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.913-04:00 level=DEBUG source=amd_linux.go:101 msg="evaluating amdgpu node /sys/class/kfd/kfd/topology/nodes/0/properties"
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.913-04:00 level=DEBUG source=amd_linux.go:121 msg="detected CPU /sys/class/kfd/kfd/topology/nodes/0/properties"
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.913-04:00 level=DEBUG source=amd_linux.go:101 msg="evaluating amdgpu node /sys/class/kfd/kfd/topology/nodes/1/properties"
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.913-04:00 level=DEBUG source=amd_linux.go:206 msg="mapping amdgpu to drm sysfs nodes" amdgpu=/sys/class/kfd/kfd/topology/nodes/1/properties vendor=4098 device=30032 unique_id=10890148317490592358
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.913-04:00 level=DEBUG source=amd_linux.go:240 msg=matched amdgpu=/sys/class/kfd/kfd/topology/nodes/1/properties drm=/sys/class/drm/card1/device
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.913-04:00 level=DEBUG source=amd_linux.go:318 msg="amdgpu memory" gpu=0 total="15.9 GiB"
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.913-04:00 level=DEBUG source=amd_linux.go:319 msg="amdgpu memory" gpu=0 available="12.3 GiB"
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.913-04:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /usr/local/lib/ollama/rocm"
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.913-04:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /opt/rocm/lib"
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.913-04:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /usr/lib64"
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.918-04:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /usr/share/ollama/lib/rocm"
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.918-04:00 level=WARN source=amd_linux.go:443 msg="amdgpu detected, but no compatible rocm library found.  Either install rocm v6, or follow manual install instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install"
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.918-04:00 level=WARN source=amd_linux.go:348 msg="unable to verify rocm library: no suitable rocm found, falling back to CPU"
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.918-04:00 level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
Aug 06 16:28:18 user ollama[182539]: time=2025-08-06T16:28:18.918-04:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="31.3 GiB" available


Let me know what other logs you need.
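
For reference, the discovery log above shows ollama probing /usr/local/lib/ollama/rocm, /opt/rocm/lib, /usr/lib64, and /usr/share/ollama/lib/rocm for a usable ROCm runtime. A quick hedged sanity check follows; the directories come straight from the log, while the hipblas/rocblas names are an assumption about what the AMD discovery code is looking for:

```sh
# If these globs match nothing, the "no compatible rocm library found"
# warning above is expected: the ROCm HIP math libraries are not installed.
ls /opt/rocm/lib/libhipblas.so* /opt/rocm/lib/librocblas.so* 2>/dev/null
ls /usr/lib64/libhipblas.so* /usr/lib64/librocblas.so* 2>/dev/null
```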

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.11.3

GiteaMirror added the bug label 2026-04-29 05:38:40 -05:00

@spirittechie commented on GitHub (Aug 7, 2025):

Try this: an easy build once the correct kernel modules are in place. Fedora 42 ships a solid ROCm 6.3, and Ubuntu supports ROCm natively. I built llama.cpp and whisper.cpp with this command.
Your gfx1201 has native ROCm support.

cmake -S . -B build -DGGML_HIP=ON -DCMAKE_BUILD_TYPE=Release 
cmake --build build -j$(nproc)
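
One hedged addition to the above: if configuration succeeds but the resulting binary still skips the GPU, the target ISA can be pinned explicitly. AMDGPU_TARGETS is the variable the ggml/llama.cpp HIP build used at the time of writing; newer trees may spell it GPU_TARGETS.

```sh
# Same build as above, restricted to the reporter's gfx1201 card.
cmake -S . -B build -DGGML_HIP=ON -DCMAKE_BUILD_TYPE=Release \
      -DAMDGPU_TARGETS=gfx1201
cmake --build build -j"$(nproc)"
```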

@codeliger commented on GitHub (Aug 7, 2025):

CMake Error at /usr/share/cmake/Modules/CMakeDetermineHIPCompiler.cmake:174 (message):
  Failed to find ROCm root directory.
Call Stack (most recent call first):
  ml/backend/ggml/ggml/src/ggml-hip/CMakeLists.txt:36 (enable_language)
echo $ROCM_PATH
/opt/rocm

It doesn't find my ROCm install for some reason.

/opt/rocm
tree -L 2
.
├── amdgcn
│   └── bitcode
├── bin
│   ├── amdclang -> /opt/rocm/lib/llvm/bin/amdclang
│   ├── amdclang++ -> /opt/rocm/lib/llvm/bin/amdclang++
│   ├── amdclang-cl -> /opt/rocm/lib/llvm/bin/amdclang-cl
│   ├── amdclang-cpp -> /opt/rocm/lib/llvm/bin/amdclang-cpp
│   ├── amdlld -> /opt/rocm/lib/llvm/bin/amdlld
│   ├── amd-smi -> ../libexec/amdsmi_cli/amdsmi_cli.py
│   ├── clinfo
│   ├── rocm_agent_enumerator
│   ├── rocminfo
│   └── rocm-smi -> ../libexec/rocm_smi/rocm_smi.py
├── include
│   ├── amd_comgr
│   ├── amd_smi
│   ├── amdsmi_go_shim.h
│   ├── CL
│   ├── goamdsmi.h
│   ├── hsa
│   ├── hsakmt
│   ├── oam
│   ├── rocm-core
│   ├── rocm_smi
│   └── rocprofiler-register
├── lib
│   ├── cmake
│   ├── libamd_comgr.so -> libamd_comgr.so.3
│   ├── libamd_comgr.so.3 -> libamd_comgr.so.3.0
│   ├── libamd_comgr.so.3.0
│   ├── libamdocl64.so -> libamdocl64.so.2
│   ├── libamdocl64.so.
│   ├── libamdocl64.so.2 -> libamdocl64.so.
│   ├── libamd_smi.so -> libamd_smi.so.0
│   ├── libamd_smi.so.0 -> libamd_smi.so.0.0
│   ├── libamd_smi.so.0.0
│   ├── libcltrace.so
│   ├── libgoamdsmi_shim64.so -> libgoamdsmi_shim64.so.1
│   ├── libgoamdsmi_shim64.so.1 -> libgoamdsmi_shim64.so.1.0
│   ├── libgoamdsmi_shim64.so.1.0
│   ├── libhsakmt.a
│   ├── libhsa-runtime64.so -> libhsa-runtime64.so.1
│   ├── libhsa-runtime64.so.1 -> libhsa-runtime64.so.1.15.0
│   ├── libhsa-runtime64.so.1.15.0
│   ├── liboam.so -> liboam.so.1
│   ├── liboam.so.1 -> liboam.so.1.0
│   ├── liboam.so.1.0
│   ├── librocm-core.so -> librocm-core.so.1
│   ├── librocm-core.so.1 -> librocm-core.so.1.0.60401
│   ├── librocm-core.so.1.0.60401
│   ├── librocm_smi64.so -> librocm_smi64.so.1
│   ├── librocm_smi64.so.1 -> librocm_smi64.so.1.0
│   ├── librocm_smi64.so.1.0
│   ├── librocprofiler-register.so -> librocprofiler-register.so.0
│   ├── librocprofiler-register.so.0 -> librocprofiler-register.so.0.4.0
│   ├── librocprofiler-register.so.0.4.0
│   ├── llvm
│   ├── pkgconfig
│   └── rocmmod
├── libexec
│   ├── amdsmi_cli
│   ├── rocm-core
│   └── rocm_smi
├── llvm -> /opt/rocm/lib/llvm
└── share
    ├── amd_smi
    ├── doc
    ├── modulefiles
    └── rocprofiler-register
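
Note what this tree does not contain: there is no bin/hipcc or bin/hipconfig, and no HIP headers or libraries under include/ or lib/. CMake's HIP language detection finds the ROCm root through the HIP toolchain itself (see the CMakeDetermineHIPCompiler.cmake frame in the error above), so with an OpenCL-only install the "Failed to find ROCm root directory" error is plausible even with $ROCM_PATH set. A hedged check:

```sh
# Both commands should succeed on a complete HIP install; against the tree
# shown above they fail, which is consistent with the CMake error.
ls /opt/rocm/bin/hipcc /opt/rocm/bin/hipconfig
hipconfig --rocmpath   # expected to print /opt/rocm
```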

@codeliger commented on GitHub (Aug 12, 2025):

I figured out the problem. I was using `rocm-opencl-runtime` instead of `rocm-hip-runtime` on Arch Linux.

I still don't know exactly when to use which one. I remember benchmarking them six months ago, and OpenCL seemed faster.
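
For anyone landing here with the same symptoms, a hedged summary (Arch package names, ROCm 6.4 era): ollama's ggml ROCm backend is built with HIP, so the HIP stack is the one that matters; rocm-opencl-runtime only provides the OpenCL ICD for OpenCL applications.

```sh
# Install the HIP runtime stack; on Arch this (via hip-runtime-amd) supplies
# hipcc/hipconfig and the HIP runtime that CMake's detection was missing.
sudo pacman -S rocm-hip-runtime
# For building the math backends, rocm-hip-sdk additionally provides
# hipblas/rocblas (an assumption about current Arch packaging).
hipconfig --rocmpath   # sanity check: should print /opt/rocm
```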

Reference: github-starred/ollama#54303