[GH-ISSUE #2473] Packaging Ollama with ROCm support for Arch Linux #27206

Closed
opened 2026-04-22 04:17:41 -05:00 by GiteaMirror · 18 comments

Originally created by @xyproto on GitHub (Feb 13, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2473

Originally assigned to: @dhiltgen on GitHub.

Hi, Arch Linux maintainer of the ollama and ollama-cuda packages here.

I want to package ollama-rocm with AMD/ROCm support, but I get error messages when building the package and am not sure whether I am enabling ROCm support the right way.

So far, I am building with -tags rocm and have added clblast, rocm-hip-sdk and rocm-opencl-sdk as dependencies.

Here is the current error message:

[ 12%] Building CXX object common/CMakeFiles/build_info.dir/build-info.cpp.o
/opt/rocm/llvm/bin/clang++ -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_MMV_Y=1 -DGGML_USE_CUBLAS -DGGML_USE_HIPBLAS -DK_QUANTS_PER_ITERATION=2 -DUSE_PROF_API=1 -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -D__HIu
cd /build/ollama-rocm/src/ollama/llm/llama.cpp/build/linux/x86_64/rocm_v1/common && /opt/rocm/llvm/bin/clang++ -DGGML_USE_CUBLAS -DGGML_USE_HIPBLAS -D_GNU_SOURCE -D_XOPEN_SOURCE=600  -march=p
make[3]: Leaving directory '/build/ollama-rocm/src/ollama/llm/llama.cpp/build/linux/x86_64/rocm_v1'
[ 12%] Built target build_info
/build/ollama-rocm/src/ollama/llm/llama.cpp/ggml-cuda.cu:620:1: warning: function declared 'noreturn' should not return [-Winvalid-noreturn]
}
^
/build/ollama-rocm/src/ollama/llm/llama.cpp/ggml-cuda.cu:6240:17: warning: enumeration value 'GGML_OP_POOL_COUNT' not handled in switch [-Wswitch]
        switch (op) {
                ^~
/build/ollama-rocm/src/ollama/llm/llama.cpp/ggml-cuda.cu:6252:25: warning: enumeration value 'GGML_OP_POOL_COUNT' not handled in switch [-Wswitch]
                switch (op) {
                        ^~
/build/ollama-rocm/src/ollama/llm/llama.cpp/ggml-cuda.cu:6240:17: warning: enumeration value 'GGML_OP_POOL_COUNT' not handled in switch [-Wswitch]
        switch (op) {
                ^~
/build/ollama-rocm/src/ollama/llm/llama.cpp/ggml-cuda.cu:8908:5: note: in instantiation of function template specialization 'pool2d_nchw_kernel<float, float>' requested here
    pool2d_nchw_kernel<<<block_nums, CUDA_IM2COL_BLOCK_SIZE, 0, main_stream>>>(IH, IW, OH, OW, k1, k0, s1, s0, p1, p0, parallel_elements, src0_dd, dst_dd, op);
    ^
/build/ollama-rocm/src/ollama/llm/llama.cpp/ggml-cuda.cu:6252:25: warning: enumeration value 'GGML_OP_POOL_COUNT' not handled in switch [-Wswitch]
                switch (op) {
                        ^~
error: option 'cf-protection=return' cannot be specified on this target
error: option 'cf-protection=branch' cannot be specified on this target
5 warnings and 2 errors generated when compiling for gfx1010.
make[3]: *** [CMakeFiles/ggml-rocm.dir/build.make:79: CMakeFiles/ggml-rocm.dir/ggml-cuda.cu.o] Error 1
make[3]: Leaving directory '/build/ollama-rocm/src/ollama/llm/llama.cpp/build/linux/x86_64/rocm_v1'
make[2]: *** [CMakeFiles/Makefile2:727: CMakeFiles/ggml-rocm.dir/all] Error 2
make[2]: Leaving directory '/build/ollama-rocm/src/ollama/llm/llama.cpp/build/linux/x86_64/rocm_v1'
make[1]: *** [CMakeFiles/Makefile2:2908: examples/server/CMakeFiles/ext_server.dir/rule] Error 2
make[1]: Leaving directory '/build/ollama-rocm/src/ollama/llm/llama.cpp/build/linux/x86_64/rocm_v1'
make: *** [Makefile:1183: ext_server] Error 2

And here is the PKGBUILD that I am working on:

pkgname=ollama-rocm
pkgdesc='Create, run and share large language models (LLMs) with ROCm'
pkgver=0.1.24
pkgrel=1
arch=(x86_64)
url='https://github.com/jmorganca/ollama'
license=(MIT)
_ollamacommit=69f392c9b7ea7c5cc3d46c29774e37fdef51abd8 # tag: v0.1.24
_llama_cpp_commit=f57fadc009cbff741a1961cb7896c47d73978d2c
makedepends=(clblast cmake git go rocm-hip-sdk rocm-opencl-sdk)
provides=(ollama)
conflicts=(ollama)
source=(git+$url#tag=v$pkgver
        llama.cpp::git+https://github.com/ggerganov/llama.cpp#commit=$_llama_cpp_commit
        ollama.service
        sysusers.conf
        tmpfiles.d)
b2sums=('SKIP'
        'SKIP'
        'a773bbf16cf5ccc2ee505ad77c3f9275346ddf412be283cfeaee7c2e4c41b8637a31aaff8766ed769524ebddc0c03cf924724452639b62208e578d98b9176124'
        '3aabf135c4f18e1ad745ae8800db782b25b15305dfeaaa031b4501408ab7e7d01f66e8ebb5be59fc813cfbff6788d08d2e48dcf24ecc480a40ec9db8dbce9fec'
        'e8f2b19e2474f30a4f984b45787950012668bf0acb5ad1ebb25cd9776925ab4a6aa927f8131ed53e35b1c71b32c504c700fe5b5145ecd25c7a8284373bb951ed')

prepare() {
  cd ${pkgname/-rocm}
  rm -frv llm/llama.cpp

  # Copy git submodule files instead of symlinking because the build process is sensitive to symlinks.
  cp -r "$srcdir/llama.cpp" llm/llama.cpp

  # Turn LTO on and set the build type to Release
  sed -i 's,T_CODE=on,T_CODE=on -D LLAMA_LTO=on -D CMAKE_BUILD_TYPE=Release,g' llm/generate/gen_linux.sh
}

build() {
  cd ${pkgname/-rocm}
  export CGO_CFLAGS="$CFLAGS" CGO_CPPFLAGS="$CPPFLAGS" CGO_CXXFLAGS="$CXXFLAGS" CGO_LDFLAGS="$LDFLAGS"
  go generate ./...
  go build -buildmode=pie -trimpath -mod=readonly -modcacherw -ldflags=-linkmode=external \
    -ldflags=-buildid='' -ldflags="-X=github.com/jmorganca/ollama/version.Version=$pkgver" -tags rocm
}

check() {
  cd ${pkgname/-rocm}
  go test -tags rocm ./api ./format
  ./ollama --version > /dev/null
}

package() {
  install -Dm755 ${pkgname/-rocm}/${pkgname/-rocm} "$pkgdir/usr/bin/${pkgname/-rocm}"
  install -dm755 "$pkgdir/var/lib/ollama"
  install -Dm644 ollama.service "$pkgdir/usr/lib/systemd/system/ollama.service"
  install -Dm644 sysusers.conf "$pkgdir/usr/lib/sysusers.d/ollama.conf"
  install -Dm644 tmpfiles.d "$pkgdir/usr/lib/tmpfiles.d/ollama.conf"
  install -Dm644 ${pkgname/-rocm}/LICENSE "$pkgdir/usr/share/licenses/$pkgname/LICENSE"
}

In addition, suggestions for how to set CMake flags without modifying gen_linux.sh, and for how to build with "CPU only", "CUDA only" or "ROCm only" support, are warmly welcome.

Thanks in advance.


@Erihel commented on GitHub (Feb 13, 2024):

That might be related to llama.cpp and not ollama itself.

It's complaining about -fcf-protection. That's what I see inside /etc/makepkg.conf:

CFLAGS="-march=x86-64 -mtune=generic -O2 -pipe -fno-plt -fexceptions \
        -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security \
        -fstack-clash-protection -fcf-protection"

You can try playing with that; see https://clang.llvm.org/docs/UsersManual.html:

  -fcf-protection=<value> Instrument control-flow architecture protection. Options: return, branch, full, none.
  -fcf-protection         Enable cf-protection in 'full' mode
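
A minimal sketch of how that flag could be overridden per-package instead of editing /etc/makepkg.conf system-wide (my assumption, not something from the thread): append -fcf-protection=none after the distro defaults inside build(), so the last occurrence wins when the generate scripts hand CFLAGS/CXXFLAGS to the ROCm clang via CMake.

```bash
# Hypothetical PKGBUILD build() excerpt: neutralize the makepkg default
# -fcf-protection for this package only. The appended -fcf-protection=none
# comes last, so it should take precedence for both gcc and clang.
build() {
  cd ${pkgname/-rocm}
  CFLAGS+=" -fcf-protection=none"
  CXXFLAGS+=" -fcf-protection=none"
  export CGO_CFLAGS="$CFLAGS" CGO_CPPFLAGS="$CPPFLAGS" CGO_CXXFLAGS="$CXXFLAGS" CGO_LDFLAGS="$LDFLAGS"
  go generate ./...
  # ... rest of the build as in the PKGBUILD above
}
```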

@xyproto commented on GitHub (Feb 13, 2024):

Thanks, using -fcf-protection=none got the compilation a bit further, but now it stops at:

[...]
"/opt/rocm/lib/llvm/bin/llvm-ranlib" libext_server.a
make[3]: Leaving directory '/build/ollama-rocm/src/ollama/llm/llama.cpp/build/linux/x86_64/rocm_v1'
[100%] Built target ext_server
make[2]: Leaving directory '/build/ollama-rocm/src/ollama/llm/llama.cpp/build/linux/x86_64/rocm_v1'
/usr/bin/cmake -E cmake_progress_start /build/ollama-rocm/src/ollama/llm/llama.cpp/build/linux/x86_64/rocm_v1/CMakeFiles 0
make[1]: Leaving directory '/build/ollama-rocm/src/ollama/llm/llama.cpp/build/linux/x86_64/rocm_v1'

+ mkdir -p ../llama.cpp/build/linux/x86_64/rocm_v1/lib/
+ g++ -fPIC -g -shared -o ../llama.cpp/build/linux/x86_64/rocm_v1/lib/libext_server.so -Wl,--whole-archive ../llama.cpp/build/linux/x86_64/rocm_v1/examples/server/libext_server.a -Wl,--no-whole-archive ../llama.cpp/build/linux/x86_64/rocm_v1/common/libcommon.a ../llama.cpp/build/linux/x86_64/rocm_v1/libllama.a '-Wl,-rpath,$ORIGIN' -lpthread -ldl -lm -L/opt/rocm/lib -L/opt/amdgpu/lu
/usr/bin/ld: ../llama.cpp/build/linux/x86_64/rocm_v1/examples/server/libext_server.a: member ../llama.cpp/build/linux/x86_64/rocm_v1/examples/server/libext_server.a(ext_server.cpp.o) in archive is not an object
collect2: error: ld returned 1 exit status
llm/generate/generate_linux.go:3: running "bash": exit status 1

@Erihel commented on GitHub (Feb 13, 2024):

I have the same issue. It fails when trying to link with g++. I checked one of the object files inside that libext_server.a:

 $ file ext_server.cpp.o
ext_server.cpp.o: LLVM IR bitcode

 $ llvm-bcanalyzer  ext_server.cpp.o 
Summary of ext_server.cpp.o:
         Total size: 132322720b/16540340.00B/4135085W
        Stream type: LLVM IR
...

So I replaced g++ with /opt/rocm/llvm/bin/clang++ and it did create the .so library. Not sure if it's valid. The g++ invocation lives in gen_common.sh, so try adding sed -i 's,g++,/opt/rocm/llvm/bin/clang++,g' llm/generate/gen_common.sh to the prepare step.

I can test that later.
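
For reference, a sketch of how the two workarounds discussed so far would sit together in prepare(); this is just the pair of sed lines already mentioned in this thread, nothing extra:

```bash
prepare() {
  cd ${pkgname/-rocm}
  rm -frv llm/llama.cpp

  # Copy git submodule files instead of symlinking because the build process
  # is sensitive to symlinks.
  cp -r "$srcdir/llama.cpp" llm/llama.cpp

  # Turn LTO on and set the build type to Release.
  sed -i 's,T_CODE=on,T_CODE=on -D LLAMA_LTO=on -D CMAKE_BUILD_TYPE=Release,g' llm/generate/gen_linux.sh

  # Link libext_server.so with the ROCm clang++ instead of g++, so the linker
  # understands the LLVM bitcode objects inside libext_server.a.
  sed -i 's,g++,/opt/rocm/llvm/bin/clang++,g' llm/generate/gen_common.sh
}
```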


@Erihel commented on GitHub (Feb 14, 2024):

It does compile with the two changes, but it crashes for me when running a model. I have a 5700 XT and have tried with and without HSA_OVERRIDE_GFX_VERSION=10.3.0; I get two different crashes.
With:

:0:rocdevice.cpp            :2726: 1385395379 us: [pid:12314 tid:0x7a26526006c0] Callback: Queue 0x7a2620300000 aborting with error : HSA_STATUS_ERROR_MEMORY_APERTURE_VIOLATION: The agent attempted to access memory beyond the largest legal address. code: 0x29

Without:

rocBLAS error: Cannot read /opt/rocm/lib/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx1010
 List of available TensileLibrary Files : 
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx1101.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx908.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx90a.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx1102.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx1030.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx941.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx942.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx940.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx906.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx900.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx1100.dat"

@shtrophic commented on GitHub (Feb 14, 2024):

I am on a 6700 XT. With HSA_OVERRIDE_GFX_VERSION=10.3.0, your sed patch of llm/generate/gen_common.sh, and -fcf-protection=none, I can run it on the first try:

ollama serve
~/s/A/ollama-rocm ❯❯❯ ollama serve
time=2024-02-14T11:07:40.202+01:00 level=INFO source=images.go:863 msg="total blobs: 0"
time=2024-02-14T11:07:40.202+01:00 level=INFO source=images.go:870 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST   /api/pull                 --> github.com/jmorganca/ollama/server.PullModelHandler (5 handlers)
[GIN-debug] POST   /api/generate             --> github.com/jmorganca/ollama/server.GenerateHandler (5 handlers)
[GIN-debug] POST   /api/chat                 --> github.com/jmorganca/ollama/server.ChatHandler (5 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/jmorganca/ollama/server.EmbeddingHandler (5 handlers)
[GIN-debug] POST   /api/create               --> github.com/jmorganca/ollama/server.CreateModelHandler (5 handlers)
[GIN-debug] POST   /api/push                 --> github.com/jmorganca/ollama/server.PushModelHandler (5 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/jmorganca/ollama/server.CopyModelHandler (5 handlers)
[GIN-debug] DELETE /api/delete               --> github.com/jmorganca/ollama/server.DeleteModelHandler (5 handlers)
[GIN-debug] POST   /api/show                 --> github.com/jmorganca/ollama/server.ShowModelHandler (5 handlers)
[GIN-debug] POST   /api/blobs/:digest        --> github.com/jmorganca/ollama/server.CreateBlobHandler (5 handlers)
[GIN-debug] HEAD   /api/blobs/:digest        --> github.com/jmorganca/ollama/server.HeadBlobHandler (5 handlers)
[GIN-debug] POST   /v1/chat/completions      --> github.com/jmorganca/ollama/server.ChatHandler (6 handlers)
[GIN-debug] GET    /                         --> github.com/jmorganca/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/jmorganca/ollama/server.ListModelsHandler (5 handlers)
[GIN-debug] GET    /api/version              --> github.com/jmorganca/ollama/server.(*Server).GenerateRoutes.func3 (5 handlers)
[GIN-debug] HEAD   /                         --> github.com/jmorganca/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD   /api/tags                 --> github.com/jmorganca/ollama/server.ListModelsHandler (5 handlers)
[GIN-debug] HEAD   /api/version              --> github.com/jmorganca/ollama/server.(*Server).GenerateRoutes.func3 (5 handlers)
time=2024-02-14T11:07:40.202+01:00 level=INFO source=routes.go:999 msg="Listening on 127.0.0.1:11434 (version 0.1.24)"
time=2024-02-14T11:07:40.202+01:00 level=INFO source=payload_common.go:106 msg="Extracting dynamic libraries..."
time=2024-02-14T11:07:40.278+01:00 level=INFO source=payload_common.go:145 msg="Dynamic LLM libraries [cpu rocm_v1 cpu_avx cpu_avx2]"
time=2024-02-14T11:07:40.278+01:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-02-14T11:07:40.278+01:00 level=INFO source=gpu.go:242 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-02-14T11:07:40.284+01:00 level=INFO source=gpu.go:288 msg="Discovered GPU libraries: []"
time=2024-02-14T11:07:40.284+01:00 level=INFO source=gpu.go:242 msg="Searching for GPU management library librocm_smi64.so"
time=2024-02-14T11:07:40.285+01:00 level=INFO source=gpu.go:288 msg="Discovered GPU libraries: [/opt/rocm/lib/librocm_smi64.so.1.0]"
time=2024-02-14T11:07:40.291+01:00 level=INFO source=gpu.go:109 msg="Radeon GPU detected"
time=2024-02-14T11:07:40.291+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
[GIN] 2024/02/14 - 11:08:12 | 200 |        34.5µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/02/14 - 11:08:12 | 404 |       91.37µs |       127.0.0.1 | POST     "/api/show"
time=2024-02-14T11:08:14.051+01:00 level=INFO source=download.go:136 msg="downloading 66002b78c70a in 20 100 MB part(s)"
time=2024-02-14T11:08:47.296+01:00 level=INFO source=download.go:136 msg="downloading dd90d0f2b7ee in 1 95 B part(s)"
time=2024-02-14T11:08:50.789+01:00 level=INFO source=download.go:136 msg="downloading 93ca9b3d83dc in 1 89 B part(s)"
time=2024-02-14T11:08:53.877+01:00 level=INFO source=download.go:136 msg="downloading 33eb43a1488d in 1 52 B part(s)"
time=2024-02-14T11:08:57.044+01:00 level=INFO source=download.go:136 msg="downloading fd52b10ee3ee in 1 455 B part(s)"
[GIN] 2024/02/14 - 11:09:00 | 200 | 47.916844057s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2024/02/14 - 11:09:00 | 200 |      246.74µs |       127.0.0.1 | POST     "/api/show"
time=2024-02-14T11:09:00.202+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-14T11:09:00.202+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-14T11:09:00.202+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
loading library /tmp/ollama3919935792/rocm_v1/libext_server.so
time=2024-02-14T11:09:00.465+01:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama3919935792/rocm_v1/libext_server.so"
time=2024-02-14T11:09:00.465+01:00 level=INFO source=dyn_ext_server.go:145 msg="Initializing llama server"
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 ROCm devices:
  Device 0: AMD Radeon RX 6700 XT, compute capability 10.3, VMM: no
llama_model_loader: loaded meta data with 19 key-value pairs and 237 tensors from /home/chris/.ollama/models/blobs/sha256:66002b78c70a22ab25e16cc9a1736c6cc6335398c7312e3eb33db202350afe66 (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = pankajmathur
llama_model_loader: - kv   2:                       llama.context_length u32              = 2048
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 3200
llama_model_loader: - kv   4:                          llama.block_count u32              = 26
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 8640
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 100
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  15:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  17:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  18:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   53 tensors
llama_model_loader: - type q4_0:  183 tensors
llama_model_loader: - type q8_0:    1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V2
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 2048
llm_load_print_meta: n_embd           = 3200
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_layer          = 26
llm_load_print_meta: n_rot            = 100
llm_load_print_meta: n_embd_head_k    = 100
llm_load_print_meta: n_embd_head_v    = 100
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 3200
llm_load_print_meta: n_embd_v_gqa     = 3200
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 8640
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 2048
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 3B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 3.43 B
llm_load_print_meta: model size       = 1.84 GiB (4.62 BPW)
llm_load_print_meta: general.name     = pankajmathur
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.18 MiB
llm_load_tensors: offloading 26 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 27/27 layers to GPU
llm_load_tensors:      ROCm0 buffer size =  1832.60 MiB
llm_load_tensors:        CPU buffer size =    54.93 MiB
.............................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      ROCm0 KV buffer size =   650.00 MiB
llama_new_context_with_model: KV self size  =  650.00 MiB, K (f16):  325.00 MiB, V (f16):  325.00 MiB
llama_new_context_with_model:  ROCm_Host input buffer size   =    10.26 MiB
llama_new_context_with_model:      ROCm0 compute buffer size =   165.83 MiB
llama_new_context_with_model:  ROCm_Host compute buffer size =     6.88 MiB
llama_new_context_with_model: graph splits (measure): 3
time=2024-02-14T11:09:03.068+01:00 level=INFO source=dyn_ext_server.go:156 msg="Starting llama main loop"
[GIN] 2024/02/14 - 11:09:06 | 200 |  6.414590625s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2024/02/14 - 11:10:04 | 200 |      20.339µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/02/14 - 11:10:04 | 200 |      322.53µs |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/02/14 - 11:10:08 | 200 |  4.155772363s |       127.0.0.1 | POST     "/api/generate"
ollama run
~/s/A/ollama-rocm ❯❯❯ ollama run orca-mini "please provide a thorough explaination of arch linux"
 Arch Linux is a free and open-source operating system that is based on the Linux kernel. It was
created by MikeMcGee in 1996 and has since developed into one of the most popular Linux
distributions. Here are some key features and benefits of using Arch Linux:

1. Lightweight: Arch Linux is known for being very lightweight and efficient, which makes it a
great choice for servers, workstations, and other resource-intensive applications.

2. Stable: The Arch Linux community aims to provide a stable and reliable operating system that
can be used for any purpose without the risk of frequent updates or crashes.

3. User-friendly: Arch Linux is designed to be user-friendly and easy to install, with a
graphical installation tool and a simple command-line interface.

4. Customizable: The Arch Linux system is highly customizable, allowing users to customize their
desktop environment, configure the kernel settings, and even install third-party packages to add
functionality.

5. Forked from Debian: Arch Linux was forked from Debian in 2013, which means that it shares many
of Debian's codebase and repositories but also has its own unique packages and configurations.

6. Security Focused: Arch Linux is known for its strong security features, including a built-in
firewall, package management that includes security updates, and a system boot process that
requires sudo access to execute any changes.

Overall, Arch Linux is a great choice for users who want a stable, secure, and highly
customizable operating system that can be used for any purpose.

I confirmed GPU loads with nvtop.
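
For anyone reproducing this, two generic ways to check that the model actually lands on the GPU (standard tools, not part of this package; the rocm-smi invocation is just one example of its memory reporting):

```bash
# Interactive GPU process/VRAM monitor, as used above.
nvtop

# Or poll VRAM usage with the ROCm SMI tool while a prompt is running.
watch -n 1 rocm-smi --showmeminfo vram
```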


@kescherCode commented on GitHub (Feb 14, 2024):

@Erihel your GPU is a gfx1010, which is an older ISA than gfx1030. So your HSA override doesn't do much.


@Erihel commented on GitHub (Feb 14, 2024):

I know that. RDNA1/Navi 10 is not officially supported as far as I know (older and newer cards are). Some people have had luck with setting that environment variable to the RDNA2 value, which is why I tried it.
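
For completeness, the override being discussed is a runtime-only environment variable, so it can be tried without rebuilding the package; a quick sketch (standard systemd drop-in syntax, not something this PKGBUILD ships):

```bash
# One-off test against the built binary.
HSA_OVERRIDE_GFX_VERSION=10.3.0 ollama serve

# Or make it persistent for the packaged service with a drop-in:
#   sudo systemctl edit ollama.service
# and add:
#   [Service]
#   Environment=HSA_OVERRIDE_GFX_VERSION=10.3.0
```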


@Th3Rom3 commented on GitHub (Feb 14, 2024):

Just a quick pointer to #738 for better visibility on both ends.


@sigma-957 commented on GitHub (Feb 14, 2024):

Can confirm the results here (https://github.com/ollama/ollama/issues/2473#issuecomment-1943449490) with my 6650 XT.

Edit: using ROCm 6.0.0.


@ms178 commented on GitHub (Feb 15, 2024):

This PKGBUILD is my attempt, but unfortunately I cannot get a model to load successfully.

"Failed to load dynamic library /tmp/ollama2932610396/cpu/libext_server.so Unable to load dynamic library: Unable to load dynamic server library: /tmp/ollama2932610396/cpu/libext_server.so: undefined symbol: hipGetDevice"

One attempt consists of using:

export CC=/opt/rocm/llvm/bin/clang
export CXX=/opt/rocm/llvm/bin/clang++

(and remember to keep flags in /etc/makepkg.conf that are compatible with Clang-17)

But that doesn't seem to work well: source=llm.go:77 msg="GPU not available, falling back to CPU"

In another attempt with GCC, without HIPBLAS and without defining the AMDGPU target, I've seen ROCm getting initialized, but the model still fails to load.

pkgname=ollama
pkgdesc='Create, run and share large language models (LLMs)'
pkgver=0.1.24
pkgrel=4.1
arch=(x86_64)
url='https://github.com/jmorganca/ollama'
license=(MIT)
_ollamacommit=69f392c9b7ea7c5cc3d46c29774e37fdef51abd8 # tag: v0.1.24
# The llama.cpp git submodule commit hash can be found here:
# https://github.com/jmorganca/ollama/tree/v0.1.24/llm
_llama_cpp_commit=f57fadc009cbff741a1961cb7896c47d73978d2c
makedepends=(cmake git go)
source=(git+$url#commit=$_ollamacommit
        llama.cpp::git+https://github.com/ggerganov/llama.cpp#commit=$_llama_cpp_commit
        sysusers.conf
        tmpfiles.d
        ollama.service)
b2sums=('SKIP'
        'SKIP'
        '3aabf135c4f18e1ad745ae8800db782b25b15305dfeaaa031b4501408ab7e7d01f66e8ebb5be59fc813cfbff6788d08d2e48dcf24ecc480a40ec9db8dbce9fec'
        'e8f2b19e2474f30a4f984b45787950012668bf0acb5ad1ebb25cd9776925ab4a6aa927f8131ed53e35b1c71b32c504c700fe5b5145ecd25c7a8284373bb951ed'
        'a773bbf16cf5ccc2ee505ad77c3f9275346ddf412be283cfeaee7c2e4c41b8637a31aaff8766ed769524ebddc0c03cf924724452639b62208e578d98b9176124')

prepare() {
  cd $pkgname

  rm -frv llm/llama.cpp

  # Copy git submodule files instead of symlinking because the build process is sensitive to symlinks
  cp -r "$srcdir/llama.cpp" llm/llama.cpp

  # Turn LTO on and set the build type to Release
  sed -i 's,T_CODE=on,T_CODE=on -D LLAMA_LTO=on -DLLAMA_HIPBLAS=1 -D AMDGPU_TARGETS=gfx1030 -D CMAKE_BUILD_TYPE=Release,g' llm/generate/gen_linux.sh
  sed -i 's,g++,/opt/rocm/llvm/bin/clang++,g' llm/generate/gen_common.sh
}

build() {
  cd $pkgname
  export CGO_CFLAGS="$CFLAGS" CGO_CPPFLAGS="$CPPFLAGS" CGO_CXXFLAGS="$CXXFLAGS" CGO_LDFLAGS="$LDFLAGS"
  export OLLAMA_CUSTOM_CPU_DEFS="-DLLAMA_AVX=on -DLLAMA_AVX2=on -DAMDGPU_TARGETS=gfx1030 -DLLAMA_F16C=on -DLLAMA_FMA=on -DLLAMA_LTO=on -DLLAMA_HIPBLAS=1 -DCMAKE_BUILD_TYPE=Release"
  go generate ./...
  go build -buildmode=pie -trimpath -mod=readonly -modcacherw -ldflags=-linkmode=external \
    -ldflags=-buildid='' -ldflags="-X=github.com/jmorganca/ollama/version.Version=$pkgver"
}


check() {
  cd $pkgname
  go test ./api ./format
  ./ollama --version >/dev/null
}

package() {
  install -Dm755 $pkgname/$pkgname "$pkgdir/usr/bin/$pkgname"
  install -dm755 "$pkgdir/var/lib/ollama"
  install -Dm644 ollama.service "$pkgdir/usr/lib/systemd/system/ollama.service"
  install -Dm644 sysusers.conf "$pkgdir/usr/lib/sysusers.d/ollama.conf"
  install -Dm644 tmpfiles.d "$pkgdir/usr/lib/tmpfiles.d/ollama.conf"
  install -Dm644 $pkgname/LICENSE "$pkgdir/usr/share/licenses/$pkgname/LICENSE"
}

@sigma-957 commented on GitHub (Feb 16, 2024):

This is what worked for me:
export AMDGPU_TARGET=gfx1030 HSA_OVERRIDE_GFX_VERSION=10.3.0 ROCM_PATH=/opt/rocm CLBlast_DIR=/usr/lib/cmake/CLBlast

PKGBUILD:

pkgname=ollama-rocm
pkgdesc='Create, run and share large language models (LLMs) with ROCm'
pkgver=0.1.24
pkgrel=1
arch=(x86_64)
url='https://github.com/jmorganca/ollama'
license=(MIT)
_ollamacommit=69f392c9b7ea7c5cc3d46c29774e37fdef51abd8 # tag: v0.1.24
_llama_cpp_commit=f57fadc009cbff741a1961cb7896c47d73978d2c
makedepends=(clblast cmake git go rocm-hip-sdk rocm-opencl-sdk)
provides=(ollama)
conflicts=(ollama)
source=(git+$url#tag=v$pkgver
        llama.cpp::git+https://github.com/ggerganov/llama.cpp#commit=$_llama_cpp_commit
        ollama.service
        sysusers.conf
        tmpfiles.d)
b2sums=('SKIP'
        'SKIP'
        'a773bbf16cf5ccc2ee505ad77c3f9275346ddf412be283cfeaee7c2e4c41b8637a31aaff8766ed769524ebddc0c03cf924724452639b62208e578d98b9176124'
        '3aabf135c4f18e1ad745ae8800db782b25b15305dfeaaa031b4501408ab7e7d01f66e8ebb5be59fc813cfbff6788d08d2e48dcf24ecc480a40ec9db8dbce9fec'
        'e8f2b19e2474f30a4f984b45787950012668bf0acb5ad1ebb25cd9776925ab4a6aa927f8131ed53e35b1c71b32c504c700fe5b5145ecd25c7a8284373bb951ed')

prepare() {
  cd ${pkgname/-rocm}
  rm -frv llm/llama.cpp

  # Copy git submodule files instead of symlinking because the build process is sensitive to symlinks.
  cp -r "$srcdir/llama.cpp" llm/llama.cpp

  # Turn LTO on and set the build type to Release
  sed -i 's,T_CODE=on,T_CODE=on -D LLAMA_LTO=on -D CMAKE_BUILD_TYPE=Release,g' llm/generate/gen_linux.sh
  sed -i 's,g++,/opt/rocm/llvm/bin/clang++,g' llm/generate/gen_common.sh
}

build() {
  cd ${pkgname/-rocm}
  export CGO_CFLAGS="$CFLAGS" CGO_CPPFLAGS="$CPPFLAGS" CGO_CXXFLAGS="$CXXFLAGS" CGO_LDFLAGS="$LDFLAGS"
  go generate ./...
  go build -buildmode=pie -trimpath -mod=readonly -modcacherw -ldflags=-linkmode=external \
    -ldflags=-buildid='' -ldflags="-X=github.com/jmorganca/ollama/version.Version=$pkgver" -tags rocm
}

check() {
  cd ${pkgname/-rocm}
  go test -tags rocm ./api ./format
  ./ollama --version > /dev/null
}

package() {
  install -Dm755 ${pkgname/-rocm}/${pkgname/-rocm} "$pkgdir/usr/bin/${pkgname/-rocm}"
  install -dm755 "$pkgdir/var/lib/ollama"
  install -Dm644 ollama.service "$pkgdir/usr/lib/systemd/system/ollama.service"
  install -Dm644 sysusers.conf "$pkgdir/usr/lib/sysusers.d/ollama.conf"
  install -Dm644 tmpfiles.d "$pkgdir/usr/lib/tmpfiles.d/ollama.conf"
  install -Dm644 ${pkgname/-rocm}/LICENSE "$pkgdir/usr/share/licenses/$pkgname/LICENSE"
}
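
A possible refinement (my assumption, not something tested in this thread): fold those build-time exports into build() so makepkg does not depend on the caller's shell environment. HSA_OVERRIDE_GFX_VERSION is a runtime override and should not be needed at build time.

```bash
build() {
  cd ${pkgname/-rocm}
  # Build-time hints for the ROCm/CLBlast tooling, taken from the export line above.
  export ROCM_PATH=/opt/rocm
  export CLBlast_DIR=/usr/lib/cmake/CLBlast
  export AMDGPU_TARGET=gfx1030
  export CGO_CFLAGS="$CFLAGS" CGO_CPPFLAGS="$CPPFLAGS" CGO_CXXFLAGS="$CXXFLAGS" CGO_LDFLAGS="$LDFLAGS"
  go generate ./...
  go build -buildmode=pie -trimpath -mod=readonly -modcacherw -ldflags=-linkmode=external \
    -ldflags=-buildid='' -ldflags="-X=github.com/jmorganca/ollama/version.Version=$pkgver" -tags rocm
}
```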
<!-- gh-comment-id:1948524969 --> @sigma-957 commented on GitHub (Feb 16, 2024): This is what worked for me: `export AMDGPU_TARGET=gfx1030 HSA_OVERRIDE_GFX_VERSION=10.3.0 ROCM_PATH=/opt/rocm CLBlast_DIR=/usr/lib/cmake/CLBlast` PKGBUILD: ``` pkgname=ollama-rocm pkgdesc='Create, run and share large language models (LLMs) with ROCm' pkgver=0.1.24 pkgrel=1 arch=(x86_64) url='https://github.com/jmorganca/ollama' license=(MIT) _ollamacommit=69f392c9b7ea7c5cc3d46c29774e37fdef51abd8 # tag: v0.1.24 _llama_cpp_commit=f57fadc009cbff741a1961cb7896c47d73978d2c makedepends=(clblast cmake git go rocm-hip-sdk rocm-opencl-sdk) provides=(ollama) conflicts=(ollama) source=(git+$url#tag=v$pkgver llama.cpp::git+https://github.com/ggerganov/llama.cpp#commit=$_llama_cpp_commit ollama.service sysusers.conf tmpfiles.d) b2sums=('SKIP' 'SKIP' 'a773bbf16cf5ccc2ee505ad77c3f9275346ddf412be283cfeaee7c2e4c41b8637a31aaff8766ed769524ebddc0c03cf924724452639b62208e578d98b9176124' '3aabf135c4f18e1ad745ae8800db782b25b15305dfeaaa031b4501408ab7e7d01f66e8ebb5be59fc813cfbff6788d08d2e48dcf24ecc480a40ec9db8dbce9fec' 'e8f2b19e2474f30a4f984b45787950012668bf0acb5ad1ebb25cd9776925ab4a6aa927f8131ed53e35b1c71b32c504c700fe5b5145ecd25c7a8284373bb951ed') prepare() { cd ${pkgname/-rocm} rm -frv llm/llama.cpp # Copy git submodule files instead of symlinking because the build process is sensitive to symlinks. cp -r "$srcdir/llama.cpp" llm/llama.cpp # Turn LTO on and set the build type to Release sed -i 's,T_CODE=on,T_CODE=on -D LLAMA_LTO=on -D CMAKE_BUILD_TYPE=Release,g' llm/generate/gen_linux.sh sed -i 's,g++,/opt/rocm/llvm/bin/clang++,g' llm/generate/gen_common.sh } build() { cd ${pkgname/-rocm} export CGO_CFLAGS="$CFLAGS" CGO_CPPFLAGS="$CPPFLAGS" CGO_CXXFLAGS="$CXXFLAGS" CGO_LDFLAGS="$LDFLAGS" go generate ./... go build -buildmode=pie -trimpath -mod=readonly -modcacherw -ldflags=-linkmode=external \ -ldflags=-buildid='' -ldflags="-X=github.com/jmorganca/ollama/version.Version=$pkgver" -tags rocm } check() { cd ${pkgname/-rocm} go test -tags rocm ./api ./format ./ollama --version > /dev/null } package() { install -Dm755 ${pkgname/-rocm}/${pkgname/-rocm} "$pkgdir/usr/bin/${pkgname/-rocm}" install -dm755 "$pkgdir/var/lib/ollama" install -Dm644 ollama.service "$pkgdir/usr/lib/systemd/system/ollama.service" install -Dm644 sysusers.conf "$pkgdir/usr/lib/sysusers.d/ollama.conf" install -Dm644 tmpfiles.d "$pkgdir/usr/lib/tmpfiles.d/ollama.conf" install -Dm644 ${pkgname/-rocm}/LICENSE "$pkgdir/usr/share/licenses/$pkgname/LICENSE" } ```

@Sematre commented on GitHub (Feb 18, 2024):

Quoting @sigma-957's suggestion above (export AMDGPU_TARGET=gfx1030 HSA_OVERRIDE_GFX_VERSION=10.3.0 ROCM_PATH=/opt/rocm CLBlast_DIR=/usr/lib/cmake/CLBlast, together with the PKGBUILD posted there):

I can confirm that I was able to build this package with this script. I had to remove the -fcf-protection flag from /etc/makepkg.conf to make it compile.
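
An alternative to dropping -fcf-protection from /etc/makepkg.conf system-wide is to filter it out of the flags inside build() before they are exported to cgo and CMake. This is only a sketch, assuming the bare -fcf-protection form that Arch's default makepkg.conf uses, and note that it also removes CET hardening from the host-side objects:

build() {
  cd ${pkgname/-rocm}

  # The ROCm device compiler rejects cf-protection options when targeting
  # gfx10xx, so strip the flag makepkg injects (assumed to be -fcf-protection).
  CFLAGS="${CFLAGS/-fcf-protection/}"
  CXXFLAGS="${CXXFLAGS/-fcf-protection/}"

  export CGO_CFLAGS="$CFLAGS" CGO_CPPFLAGS="$CPPFLAGS" CGO_CXXFLAGS="$CXXFLAGS" CGO_LDFLAGS="$LDFLAGS"
  # ...the rest of build() as in the PKGBUILD above
}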

But when trying to load a model with ollama it crashed on my RX 6600 XT:

ollama[122645]: rocBLAS error: Cannot read /opt/rocm/lib/rocblas/library/TensileLibrary.dat: Illegal seek for GPU arch : gfx1032
ollama[122645]:  List of available TensileLibrary Files :
ollama[122645]: "/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx1030.dat"
ollama[122645]: "/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx1100.dat"
ollama[122645]: "/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx1101.dat"
ollama[122645]: "/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx1102.dat"
ollama[122645]: "/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx900.dat"
ollama[122645]: "/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx906.dat"
ollama[122645]: "/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx908.dat"
ollama[122645]: "/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx90a.dat"
ollama[122645]: "/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx940.dat"
ollama[122645]: "/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx941.dat"
ollama[122645]: "/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx942.dat"
ollama[122645]: loading library /tmp/ollama3077773922/rocm_v1/libext_server.so
systemd[1]: ollama.service: Main process exited, code=dumped, status=6/ABRT
systemd[1]: ollama.service: Failed with result 'core-dump'.

I was able to make it work by adding an environment variable to the service file: sudo systemctl edit ollama.service

[Service]
Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
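
The override value has to match the card family, so it is worth checking which gfx target the card actually reports before picking it. A quick check (assuming rocminfo from the ROCm packages is installed):

# List the gfx ISA(s) the ROCm runtime reports for the installed GPU(s)
rocminfo | grep -o 'gfx[0-9a-f]*' | sort -u

RDNA2 cards that report gfx1031, gfx1032 or gfx1034 are commonly run against the gfx1030 kernels via HSA_OVERRIDE_GFX_VERSION=10.3.0, which is what the drop-in above does.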

Now it's running:

ollama[123372]: ggml_init_cublas: found 1 ROCm devices:
ollama[123372]:   Device 0: AMD Radeon RX 6600 XT, compute capability 10.3, VMM: no

@sigma-957 commented on GitHub (Feb 19, 2024):

Ah yes, I forgot I also had to add that env var.


@dhiltgen commented on GitHub (Mar 12, 2024):

The latest release 0.1.29 makes some significant updates to our ROCm handling. Hopefully this will make it a little easier for downstream packaging.


@hsgg commented on GitHub (Apr 29, 2024):

@xyproto For what it's worth, the AUR package at https://aur.archlinux.org/ollama-rocm-git.git worked perfectly, on gpu_type=gfx1102, with ollama version 0.1.33.g7e432cdf. Happy to do some additional testing, if you want me to.


@xyproto commented on GitHub (Apr 29, 2024):

I just packaged ollama-rocm for Arch Linux; please test whether it works.

If there are issues with the packaging, it can be reported here: https://gitlab.archlinux.org/archlinux/packaging/packages/ollama-rocm/-/issues
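
Assuming the package is available under that name, a quick smoke test could look like this (llama2 is just an example model, and grepping the journal is only a rough check):

# Install the ROCm build, enable the bundled service, and run a model
sudo pacman -S ollama-rocm
sudo systemctl enable --now ollama.service
ollama run llama2 "hello"

# Confirm the ROCm backend shows up in the service log
journalctl -u ollama.service | grep -i rocm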


@dhiltgen commented on GitHub (Jun 1, 2024):

It sounds like we can close this now based on the new rocm packaging model we use.


@last-partizan commented on GitHub (Jun 3, 2024):

@xyproto, thank you for your work. It works great!

I resized my root partition just to try it, and it went from a few tokens per second on the CPU to generating a wall of text in a second.
