[GH-ISSUE #1704] Make detecting CUDA libraries more general #62999

Closed
opened 2026-05-03 11:13:06 -05:00 by GiteaMirror · 8 comments

Originally created by @mongolu on GitHub (Dec 25, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1704

Originally assigned to: @dhiltgen on GitHub.

Hi,
I've built my Dockerfile from scratch, building ollama from source and adding litellm and autogen to the image.
So far this has worked, but ollama gave me headaches because it was not using the NVIDIA GPU.

Here is what I found, and how I made it work.

In ./llm/llama.cpp/gen_linux.sh, CUDACXX is assumed to live at /usr/local/cuda/bin/nvcc:

if [ -z "${CUDACXX}" -a -x /usr/local/cuda/bin/nvcc ]; then
    export CUDACXX=/usr/local/cuda/bin/nvcc
fi

I'm using conda to install the cuda-toolkit, so nvcc is in my miniconda3 install directory.
Instead of hardcoding this path, $(which nvcc) might be a better approach.
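As an illustration, detection could fall back through several locations; this is only a sketch, and the candidate list (including the conda-prefix fallback) is my assumption, not ollama's actual code:

```shell
#!/bin/sh
# Sketch of more general nvcc detection: prefer whatever is on PATH,
# then fall back to common install locations. The candidate list
# (including the conda prefix) is an assumption for illustration.
find_nvcc() {
    if command -v nvcc >/dev/null 2>&1; then
        command -v nvcc
        return 0
    fi
    for candidate in /usr/local/cuda/bin/nvcc /opt/cuda/bin/nvcc \
                     "${CONDA_PREFIX:-/opt/miniconda3}/bin/nvcc"; do
        if [ -x "$candidate" ]; then
            echo "$candidate"
            return 0
        fi
    done
    return 1
}

# Only set CUDACXX when the caller has not already chosen one.
if [ -z "${CUDACXX}" ] && nvcc_path=$(find_nvcc); then
    export CUDACXX="$nvcc_path"
fi
```

Checking PATH first means a conda-provided nvcc wins automatically whenever the conda environment is active.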

Also in this file, after building the CPU variant, the script tries to detect whether CUDA libraries are available like this:

if [ -d /usr/local/cuda/lib64/ ]; then

which evaluates to false in my setup. I fixed my environment by creating these symlinks:

    mkdir -p /usr/local/cuda && ln -s /opt/miniconda3/lib /usr/local/cuda/lib64
    mkdir -p /usr/lib/wsl && ln -s /usr/lib/x86_64-linux-gnu /usr/lib/wsl/lib

Without these symlinks, starting ollama serve fails with:
gpu.go:38: CUDA not detected: Unable to load libnvidia-ml.so library to query for Nvidia GPUs: /usr/lib/wsl/lib/libnvidia-ml.so.1: cannot open shared object file: No such file or directory

Thanks,
Now I'm a happy user!

GiteaMirror added the bug label 2026-05-03 11:13:06 -05:00

@djmaze commented on GitHub (Dec 25, 2023):

Same problem here when using an nvidia/cuda base image (which is necessary for use on Docker Swarm).

These are the changes I needed to make:

diff --git a/gpu/gpu_info_cuda.c b/gpu/gpu_info_cuda.c
index 20055ed..5e22604 100644
--- a/gpu/gpu_info_cuda.c
+++ b/gpu/gpu_info_cuda.c
@@ -8,6 +8,7 @@
 const char *cuda_lib_paths[] = {
     "libnvidia-ml.so",
     "/usr/local/cuda/lib64/libnvidia-ml.so",
+    "/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1",
     "/usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so",
     "/usr/lib/wsl/lib/libnvidia-ml.so.1",  // TODO Maybe glob?
     NULL,
diff --git a/llm/llama.cpp/gen_linux.sh b/llm/llama.cpp/gen_linux.sh
index 3d659ff..b67aff2 100755
--- a/llm/llama.cpp/gen_linux.sh
+++ b/llm/llama.cpp/gen_linux.sh
@@ -18,7 +18,7 @@ set -o pipefail
 
 echo "Starting linux generate script"
 if [ -z "${CUDACXX}" -a -x /usr/local/cuda/bin/nvcc ]; then
-    export CUDACXX=/usr/local/cuda/bin/nvcc
+    export CUDACXX=$(which nvcc)
 fi
 COMMON_CMAKE_DEFS="-DCMAKE_POSITION_INDEPENDENT_CODE=on -DLLAMA_ACCELERATE=on -DLLAMA_NATIVE=off -DLLAMA_AVX=on -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_FMA=off -DLLAMA_F16C=off"
 OLLAMA_DYN_LIB_DIR="gguf/build/lib"

@aidankmcl commented on GitHub (Dec 28, 2023):

In case it's helpful for others, I wanted to mention that when working in WSL, I kept getting

CUDA not detected: nvml vram init failure: 9

until I ensured that the libnvidia-ml.so.1 path I wanted was at the front of the cuda_lib_paths array. In my case that path was "/usr/lib/wsl/lib/libnvidia-ml.so.1", so I made the following change:

ollama/gpu/gpu_info_cuda.c

#ifndef _WIN32
const char *cuda_lib_paths[] = {
    "/usr/lib/wsl/lib/libnvidia-ml.so.1",  // Success when trying this path first
    "libnvidia-ml.so",
    "/usr/local/cuda/lib64/libnvidia-ml.so",
    "/usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so",
    // Moved to the front from here "/usr/lib/wsl/lib/libnvidia-ml.so.1",  // TODO Maybe glob?
    NULL,
};
#else
const char *cuda_lib_paths[] = {
    "nvml.dll",
    "",
    NULL,
};
#endif

A side note: #ifndef _WIN32 is true under WSL, so the change needs to go in the top definition of cuda_lib_paths. You can always simplify by replacing the conditional definition above with

const char *cuda_lib_paths[] = {
  "/usr/lib/wsl/lib/libnvidia-ml.so.1",  // or whatever your libnvidia-ml.so path is
  NULL
};

Once you make the edit, you can rebuild with go build . at the repository root; after that I was in business. Important note: if this doesn't work for you, you might need to follow the instructions in this comment (https://github.com/jmorganca/ollama/issues/259#issuecomment-1693959312), which I tried first. Ultimately I was able to undo those changes, so I think the path-ordering change described above is what fixed it for me. The mention of the NumGPU parameter in ollama/api/types.go is still helpful, though: I find that -1 offloads a more conservative number of layers to the GPU than I would choose, and I got improved inference speed by setting my own value (for ./ollama run mixtral I use NumGPU: 30 on a 4090 with 64 GB RAM).

I haven't dug more into the why. None of the other paths exist in WSL, so it feels like the loop lower down in ollama/gpu/gpu_info_cuda.c should successfully iterate until it hits the working path, but I'm rusty in C!
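For anyone debugging a similar situation, a small shell helper (hypothetical, not part of ollama) can report which of the candidate library paths actually exist on a given system, in the same order the array would try them:

```shell
#!/bin/sh
# Hypothetical helper: report which candidate library paths exist,
# in the order the cuda_lib_paths array would try them.
check_paths() {
    for p in "$@"; do
        if [ -e "$p" ]; then
            echo "found: $p"
        else
            echo "missing: $p"
        fi
    done
}

# Example: the Linux candidates from gpu_info_cuda.c
check_paths \
    /usr/lib/wsl/lib/libnvidia-ml.so.1 \
    /usr/local/cuda/lib64/libnvidia-ml.so \
    /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so
```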


@hansesm commented on GitHub (Jan 8, 2024):

I am currently facing the same issue :)


@fpreiss commented on GitHub (Jan 8, 2024):

I have a similar problem when trying to build v0.1.18 on Arch Linux with CUDA.
I had to apply the following change:

diff --git a/gpu/gpu_info_cuda.c b/gpu/gpu_info_cuda.c
index 5273871..aebb779 100644
--- a/gpu/gpu_info_cuda.c
+++ b/gpu/gpu_info_cuda.c
@@ -9,6 +9,7 @@ const char *cuda_lib_paths[] = {
     "libnvidia-ml.so",
     "/usr/local/cuda/lib64/libnvidia-ml.so",
     "/usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so",
+     "/usr/lib/libnvidia-ml.so",
     "/usr/lib/wsl/lib/libnvidia-ml.so.1",  // TODO Maybe glob?
     NULL,
 };

On Arch Linux, CUDA_LIB_DIR additionally does not get set correctly in
llm/llama.cpp/gen_linux.sh, because the cuda package installs to /opt/cuda
instead of /usr/local/cuda. A change along the lines of the one below could help:

+if [ -z "${CUDA_LIB_DIR}" ]; then
+    # Try the default location in case it exists
+    CUDA_LIB_DIR=/usr/local/cuda/lib64
+fi
-if [ -d /usr/local/cuda/lib64/ ]; then
+if [ -d "${CUDA_LIB_DIR}" ]; then
     echo "CUDA libraries detected - building dynamic CUDA library"
     init_vars
     CMAKE_DEFS="-DLLAMA_CUBLAS=on ${COMMON_CMAKE_DEFS} ${CMAKE_DEFS}"
     BUILD_DIR="gguf/build/linux/cuda"
-     CUDA_LIB_DIR=/usr/local/cuda/lib64
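This idea could be generalized into a small probe over known locations. A sketch under my own assumptions (the /opt/cuda path matches Arch Linux's cuda package; the conda path is a guess for conda-based installs, and none of this is the actual gen_linux.sh logic):

```shell
#!/bin/sh
# Sketch: probe several known CUDA library locations instead of
# hardcoding /usr/local/cuda/lib64. /opt/cuda matches Arch Linux's
# cuda package; the conda path is an assumption for conda installs.
detect_cuda_lib_dir() {
    # An explicit CUDA_LIB_DIR from the environment always wins.
    if [ -n "${CUDA_LIB_DIR}" ] && [ -d "${CUDA_LIB_DIR}" ]; then
        echo "${CUDA_LIB_DIR}"
        return 0
    fi
    for dir in /usr/local/cuda/lib64 /opt/cuda/lib64 \
               "${CONDA_PREFIX:-/opt/miniconda3}/lib"; do
        if [ -d "$dir" ]; then
            echo "$dir"
            return 0
        fi
    done
    return 1
}
```

Letting an environment variable override the probe keeps the script usable on layouts nobody anticipated.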

@syllith commented on GitHub (Jan 9, 2024):

I'm also experiencing this issue: nvml vram init failure: 9

I can run the nvidia-smi command just fine, and can even run things like h2oGPT without problems, so I think this is specific to ollama.


@Zenopheus commented on GitHub (Jan 9, 2024):

@dhiltgen It looks like everyone has homed in on the libnvidia-ml loading problem. This should fix it and close out a bunch of issues. It should also work for AMD users who can't load librocm_smi64.

The CUDA initialization function (cuda_init()) loads the wrong libnvidia-ml library, one that lacks the symbols ollama needs, and then gives up prematurely instead of trying the other libraries in the array. In my case, libnvidia-ml.so was found in /lib/x86_64-linux-gnu. You can check this by running:

ldconfig -p | grep libnvidia-ml

To get this working, all you really need to do is create a symbolic link:

sudo ln -s /usr/lib/wsl/lib/libnvidia-ml.so.1 /usr/lib/wsl/lib/libnvidia-ml.so
sudo ldconfig

Note: this works as long as you start ollama directly (ollama serve), but it doesn't work via systemctl, because the link is removed on restart; at least that's what I can tell right now. There are similar fixes described at https://forums.developer.nvidia.com/t/wsl2-libcuda-so-and-libcuda-so-1-should-be-symlink/236301.
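The manual symlink step can also be scripted idempotently. A hedged sketch (the helper names are mine, not from ollama) that finds the versioned soname via the loader cache and links the unversioned name only when it is missing:

```shell
#!/bin/sh
# Hypothetical helpers: locate the versioned libnvidia-ml via the
# loader cache and create the unversioned .so name if it is missing.
find_nvml() {
    # Print the first libnvidia-ml.so.1 path ldconfig knows about.
    ldconfig -p 2>/dev/null | awk '/libnvidia-ml\.so\.1/ { print $NF; exit }'
}

link_unversioned() {
    # Given /path/libnvidia-ml.so.1, create /path/libnvidia-ml.so -> it.
    versioned=$1
    unversioned=${versioned%.1}
    [ -e "$unversioned" ] || ln -s "$versioned" "$unversioned"
}

# Usage (as root): lib=$(find_nvml) && link_unversioned "$lib" && ldconfig
```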

It would be better to address this in the code, if someone would like to create a PR. Here's something that works, but it could use some cleanup:

See #1898 for an update and a link to the code changes


@dhiltgen commented on GitHub (Jan 10, 2024):

The runtime portion of this issue is covered by #1914 but there may still be some room to improve the build-time script logic, so I'll keep this issue open to track that.


@fpreiss commented on GitHub (Jan 11, 2024):

> The runtime portion of this issue is covered by #1914 but there may still be some room to improve the build-time script logic, so I'll keep this issue open to track that.

I made a pull request (#1880) for the build script logic. As long as the correct environment variables are provided, this issue should mostly be covered.

Reference: github-starred/ollama#62999