[GH-ISSUE #2120] How to install libnvidia-ml.so? #1211

Closed
opened 2026-04-12 10:59:11 -05:00 by GiteaMirror · 4 comments

Originally created by @silverwind63 on GitHub (Jan 21, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2120

Hi guys! I have been using ollama with ollama-webui this month. However, it outputs:

WARNING:

You should always run with libnvidia-ml.so that is installed with your
NVIDIA Display Driver. By default it's installed in /usr/lib and /usr/lib64.
libnvidia-ml.so in GDK package is a stub library that is attached only for
build purposes (e.g. machine that you build your application doesn't have
to have Display Driver installed).

And whenever I run a model (the system is able to load it, and the speed is about 5 tokens/s), it always runs into a CUDA memory error.

My system:

- RAM: 16 GB
- GPU: RTX 3060 Ti (8 GB)
- OS: Arch Linux
- Kernel: 6.7.0-arch3-1
- NVIDIA GPU driver: nvidia-dkms 545.29.06-1

I have also installed the following NVIDIA-related packages:

lib32-nvidia-utils 545.29.06-1
libnvidia-container 1.14.3-1
libnvidia-container-tools 1.14.3-1
libva-nvidia-driver-git 0.0.11.r1.gea6d862-1
nvidia-container-toolkit 1.14.3-9
nvidia-docker-compose 0.1.6-1
nvidia-utils 545.29.06-1
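
For context on the warning: the stub libnvidia-ml.so ships with the CUDA toolkit (under /opt/cuda on Arch) and exists only for linking at build time, while the real NVML library is installed by the driver under /usr/lib. A quick way to see both candidates side by side (a diagnostic sketch; the paths are taken from the log output later in this thread):

```
# Driver-provided NVML: the real implementation, versioned after the driver.
ls -l /usr/lib/libnvidia-ml.so*     # e.g. libnvidia-ml.so.545.29.06

# Toolkit-provided stub: for link time only, must not be loaded at runtime.
ls -l /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so
```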

@Nowai commented on GitHub (Jan 21, 2024):

I have the same issue, with basically the same Arch setup.
What is your output for "Discovered GPU libraries"?

INFO Discovered GPU libraries: [/opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so /usr/lib/libnvidia-ml.so.545.29.06 /usr/lib32/libnvidia-ml.so.545.29.06 /usr/lib64/libnvidia-ml.so.545.29.06]

It appears that it tries to load the wrong file, even though the correct one is available.

llama.cpp runs perfectly with GPU support.
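
For anyone trying to answer that: the line shows up in the server log at startup. Assuming ollama runs as the packaged systemd service, one way to pull it out (otherwise, check the terminal running `ollama serve`):

```
# Grep the ollama service journal for the GPU-library discovery line.
journalctl -u ollama --no-pager | grep -i "discovered gpu"
```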

EDIT:

With help from Socialnetwooky on Discord I was able to fix it. It seems that the ollama package is currently broken. The steps to fix it are as follows (a shell transcription follows the list):

1. Make a new tmp folder.
2. Copy the PKGBUILD below into it.
3. Copy the remaining files from https://gitlab.archlinux.org/archlinux/packaging/packages/ollama-cuda
4. Execute `makepkg -sri`
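
A minimal shell transcription of those steps (a sketch: the repository URL comes from the comment above, the rest is assumed):

```
# Build the patched ollama-cuda package in a scratch directory.
mkdir /tmp/ollama-cuda && cd /tmp/ollama-cuda

# Fetch PKGBUILD plus the remaining files (sysusers.conf, tmpfiles.d, ollama.service).
git clone https://gitlab.archlinux.org/archlinux/packaging/packages/ollama-cuda.git .

# Replace the stock PKGBUILD with the one below, then build and install:
makepkg -sri   # -s: sync deps, -r: remove build deps afterwards, -i: install
```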

PKGBUILD:

pkgname=ollama-cuda
pkgdesc='Create, run and share large language models (LLMs) with CUDA'
pkgver=0.1.20
pkgrel=3
arch=(x86_64)
url='https://github.com/jmorganca/ollama'
license=(MIT)
_ollamacommit=ab6be852c77064d7abeffb0b03c096aab90e95fe # tag: v0.1.20
# The llama.cpp git submodule commit hash can be found here:
# https://github.com/jmorganca/ollama/tree/v0.1.20/llm
_llama_cpp_commit=328b83de23b33240e28f4e74900d1d06726f5eb1
makedepends=(cmake cuda git go)
provides=(ollama)
conflicts=(ollama)
source=(git+$url#commit=$_ollamacommit
        llama.cpp::git+https://github.com/ggerganov/llama.cpp#commit=$_llama_cpp_commit
        sysusers.conf
        tmpfiles.d
        ollama.service)
b2sums=('SKIP'
        'SKIP'
	'SKIP'
	'SKIP'
	'SKIP')
#        '65d39053cd1dd09562473c2e58f66a447ce0225b32607685f60350596b3288d6568c1cb897393b20236260e632427de1a952e72fe358407020f6cc7820fd4f60'
#        '6f0b6886108e8d5f385bf7f9bebc60218797c53d4b88a69cc98564ad02c558cb86633b4f713ddd919d618146c04ac0a9215aa6c32ae192701af9d7850264dd56' 
#	'be72a39e823d6631095ce407c92af6aee8650302eeaaa55a970a43592daaa141369d3a5a5eb6a992f6c2f5461370228ac7a84f3318422419fd868575454487d6') 

prepare() {
  cd ${pkgname/-cuda}

  rm -frv llm/llama.cpp
  
  export GIN_MODE=release

  # Copy git submodule files instead of symlinking because the build process is sensitive to symlinks.
  cp -r "$srcdir/llama.cpp" llm/llama.cpp

  # Disable LTO and set the build type to Release
  sed -i 's,T_CODE=on,T_CODE=on -D LLAMA_LTO=off -D CMAKE_BUILD_TYPE=Release,g' llm/generate/gen_linux.sh

  sed -i 's,var mode string = gin.DebugMode,var mode string = gin.ReleaseMode,g' server/routes.go
  # Let gen_linux.sh find libcudart.so
  sed -i 's,/usr/local/cuda/lib64,/opt/cuda/targets/x86_64-linux/lib,g' llm/generate/gen_linux.sh

  # Let gpu.go find the driver's libnvidia-ml.so instead of the CUDA stub
  #sed -i 's,/opt/cuda/lib64/libnvidia-ml.so*,/opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so*,g' gpu/gpu.go
  sed -i 's,/opt/cuda/lib64/libnvidia-ml.so*,/lib/libnvidia-ml.so*,g' gpu/gpu.go
}

build() {
  cd ${pkgname/-cuda}
  export CGO_CFLAGS="$CFLAGS" CGO_CPPFLAGS="$CPPFLAGS" CGO_CXXFLAGS="$CXXFLAGS" CGO_LDFLAGS="$LDFLAGS"
  go generate ./...
  go build -buildmode=pie -ldflags=-fno-lto -trimpath -mod=readonly -modcacherw -ldflags=-linkmode=external \
    -ldflags=-buildid='' -ldflags="-X=github.com/jmorganca/ollama/version.Version=$pkgver"
}

check() {
  cd ${pkgname/-cuda}
  go test ./api ./format
  ./ollama --version > /dev/null
}

package() {
  install -Dm755 ${pkgname/-cuda}/${pkgname/-cuda} "$pkgdir/usr/bin/${pkgname/-cuda}"
  install -dm700 "$pkgdir/var/lib/ollama"
  install -Dm644 ollama.service "$pkgdir/usr/lib/systemd/system/ollama.service"
  install -Dm644 sysusers.conf "$pkgdir/usr/lib/sysusers.d/ollama.conf"
  install -Dm644 tmpfiles.d "$pkgdir/usr/lib/tmpfiles.d/ollama.conf"
  install -Dm644 ${pkgname/-cuda}/LICENSE "$pkgdir/usr/share/licenses/$pkgname/LICENSE"
}

@silverwind63 commented on GitHub (Jan 24, 2024):

> *(quoting @Nowai's comment above in full: the "Discovered GPU libraries" output, the fix steps, and the PKGBUILD)*

When I run makepkg -sri to install it, it shows me these errors:

-- The C compiler identification is GNU 13.2.1
-- The CXX compiler identification is GNU 13.2.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.43.0") 
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE  
-- Could not find nvcc, please set CUDAToolkit_ROOT.
CMake Warning at CMakeLists.txt:356 (message):
  cuBLAS not found


-- CUDA host compiler is GNU 
CMake Error at CMakeLists.txt:532 (get_flags):
  get_flags Function invoked with incorrect arguments for function named:
  get_flags


-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
-- Configuring incomplete, errors occurred!
llm/generate/generate_linux.go:3: running "bash": exit status 1
==> ERROR: A failure occurred in build().
    Aborting...
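
The first failure in that log is CMake not finding nvcc. On Arch the cuda package installs the toolkit under /opt/cuda, which is not necessarily on PATH inside the makepkg environment; a quick check (a diagnostic sketch, assuming the cuda package is installed):

```
# Is nvcc reachable from the build environment?
command -v nvcc || echo "nvcc not on PATH"

# Where did pacman install it? (expected: /opt/cuda/bin/nvcc)
pacman -Ql cuda | grep 'bin/nvcc$'
```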
<!-- gh-comment-id:1907483858 --> @silverwind63 commented on GitHub (Jan 24, 2024): > I have the same Issue. Basically the same arch setup. What is your output for Discovered GPU libraries? > > `INFO Discovered GPU libraries: [/opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so /usr/lib/libnvidia-ml.so.545.29.06 /usr/lib32/libnvidia-ml.so.545.29.06 /usr/lib64/libnvidia-ml.so.545.29.06]` > > It appears that it tries to load the wrong file, even though the correct one is available. > > llama.cpp runs perfectly with GPU support. > > EDIT: > > With the help from Socialnetwooky from discord I could fix it. It seems that the ollama package is currently broken. The steps to fix it are as follows: > > Make a new tmp folder. Copy the PKGBUILD Copy the remaining files from: https://gitlab.archlinux.org/archlinux/packaging/packages/ollama-cuda execute makepkg -sri > > PKGBUILD: > > ``` > pkgname=ollama-cuda > pkgdesc='Create, run and share large language models (LLMs) with CUDA' > pkgver=0.1.20 > pkgrel=3 > arch=(x86_64) > url='https://github.com/jmorganca/ollama' > license=(MIT) > _ollamacommit=ab6be852c77064d7abeffb0b03c096aab90e95fe # tag: v0.1.20 > # The llama.cpp git submodule commit hash can be found here: > # https://github.com/jmorganca/ollama/tree/v0.1.20/llm > _llama_cpp_commit=328b83de23b33240e28f4e74900d1d06726f5eb1 > makedepends=(cmake cuda git go) > provides=(ollama) > conflicts=(ollama) > source=(git+$url#commit=$_ollamacommit > llama.cpp::git+https://github.com/ggerganov/llama.cpp#commit=$_llama_cpp_commit > sysusers.conf > tmpfiles.d > ollama.service) > b2sums=('SKIP' > 'SKIP' > 'SKIP' > 'SKIP' > 'SKIP') > # '65d39053cd1dd09562473c2e58f66a447ce0225b32607685f60350596b3288d6568c1cb897393b20236260e632427de1a952e72fe358407020f6cc7820fd4f60' > # '6f0b6886108e8d5f385bf7f9bebc60218797c53d4b88a69cc98564ad02c558cb86633b4f713ddd919d618146c04ac0a9215aa6c32ae192701af9d7850264dd56' > # 'be72a39e823d6631095ce407c92af6aee8650302eeaaa55a970a43592daaa141369d3a5a5eb6a992f6c2f5461370228ac7a84f3318422419fd868575454487d6') > > prepare() { > cd ${pkgname/-cuda} > > rm -frv llm/llama.cpp > > export GIN_MODE=release > > # Copy git submodule files instead of symlinking because the build process is sensitive to symlinks. > cp -r "$srcdir/llama.cpp" llm/llama.cpp > > # Turn LTO on and set the build type to Release > sed -i 's,T_CODE=on,T_CODE=on -D LLAMA_LTO=off -D CMAKE_BUILD_TYPE=Release,g' llm/generate/gen_linux.sh > > sed -i 's,var mode string = gin.DebugMode,var mode string = gin.ReleaseMode,g' server/routes.go > # Let gen_linux.sh find libcudart.so > sed -i 's,/usr/local/cuda/lib64,/opt/cuda/targets/x86_64-linux/lib,g' llm/generate/gen_linux.sh > > # Let gpu.go find libnvidia-ml.so from the cuda package > #sed -i 's,/opt/cuda/lib64/libnvidia-ml.so*,/opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so*,g' gpu/gpu.go > sed -i 's,/opt/cuda/lib64/libnvidia-ml.so*,/lib/libnvidia-ml.so*,g' gpu/gpu.go > } > > build() { > cd ${pkgname/-cuda} > export CGO_CFLAGS="$CFLAGS" CGO_CPPFLAGS="$CPPFLAGS" CGO_CXXFLAGS="$CXXFLAGS" CGO_LDFLAGS="$LDFLAGS" > go generate ./... 
> go build -buildmode=pie -ldflags=-fno-lto -trimpath -mod=readonly -modcacherw -ldflags=-linkmode=external \ > -ldflags=-buildid='' -ldflags="-X=github.com/jmorganca/ollama/version.Version=$pkgver" > } > > check() { > cd ${pkgname/-cuda} > go test ./api ./format > ./ollama --version > /dev/null > } > > package() { > install -Dm755 ${pkgname/-cuda}/${pkgname/-cuda} "$pkgdir/usr/bin/${pkgname/-cuda}" > install -dm700 "$pkgdir/var/lib/ollama" > install -Dm644 ollama.service "$pkgdir/usr/lib/systemd/system/ollama.service" > install -Dm644 sysusers.conf "$pkgdir/usr/lib/sysusers.d/ollama.conf" > install -Dm644 tmpfiles.d "$pkgdir/usr/lib/tmpfiles.d/ollama.conf" > install -Dm644 ${pkgname/-cuda}/LICENSE "$pkgdir/usr/share/licenses/$pkgname/LICENSE" > } > ``` When I run makepkg -sri to install it,it show me these errors: ``` -- The C compiler identification is GNU 13.2.1 -- The CXX compiler identification is GNU 13.2.1 -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: /usr/bin/cc - skipped -- Detecting C compile features -- Detecting C compile features - done -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: /usr/bin/c++ - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Found Git: /usr/bin/git (found version "2.43.0") -- Performing Test CMAKE_HAVE_LIBC_PTHREAD -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success -- Found Threads: TRUE -- Could not find nvcc, please set CUDAToolkit_ROOT. CMake Warning at CMakeLists.txt:356 (message): cuBLAS not found -- CUDA host compiler is GNU CMake Error at CMakeLists.txt:532 (get_flags): get_flags Function invoked with incorrect arguments for function named: get_flags -- CMAKE_SYSTEM_PROCESSOR: x86_64 -- x86 detected -- Configuring incomplete, errors occurred! llm/generate/generate_linux.go:3: running "bash": exit status 1 ==> ERROR: A failure occurred in build(). Aborting... ```

@dhiltgen commented on GitHub (Jan 26, 2024):

We've moved the stub library to the [bottom of the list](https://github.com/ollama/ollama/blob/main/gpu/gpu.go#L50) we try, and this fix is in 0.1.22. I believe this should be resolved. Please re-open if you're still seeing the problem on 0.1.22.
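
A quick way to confirm you are running a build that carries the fix:

```
ollama --version   # should report 0.1.22 or newer
```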


@Rabcor commented on GitHub (Jan 27, 2024):

> *(quoting the `makepkg -sri` CMake error log from @silverwind63's comment above)*

Same here. I tried adding the NVIDIA root in the build function; it then threw a different error about not finding the default CUDA architectures, so I added a variable pointing at the nvcc compiler, and the result was just even more errors :(
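
For reference, the workaround this comment describes would look roughly like the following exports at the top of build(). These are standard CMake environment variables rather than anything confirmed by this thread, and per the comment this line of attack still ended in further errors:

```
# Sketch of the attempted workaround (assumptions, not a confirmed fix):
export CUDAToolkit_ROOT=/opt/cuda     # hinted at by the CMake error above
export PATH="/opt/cuda/bin:$PATH"     # make nvcc discoverable
export CUDACXX=/opt/cuda/bin/nvcc     # CMake env var selecting the CUDA compiler
export CUDAARCHS=86                   # "default CUDA architectures"; RTX 3060 Ti is CC 8.6
```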
