[GH-ISSUE #5493] unable to load nvcuda #3436

Closed
opened 2026-04-12 14:05:40 -05:00 by GiteaMirror · 7 comments

Originally created by @yake-cyber on GitHub (Jul 5, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5493

What is the issue?

My Ollama does not run on the NVIDIA GPU. I enabled debug mode and found this message:

```
time=2024-07-04T17:13:20.134+08:00 level=DEBUG source=gpu.go:385 msg="Unable to load nvcuda" library=/usr/lib/libcuda.so.418.74 error="Unable to load /usr/lib/libcuda.so.418.74 library to query for Nvidia GPUs: /usr/lib/libcuda.so.418.74: wrong ELF class: ELFCLASS32"
dlerr: /usr/lib64/libcuda.so.418.74: undefined symbol: cuCtxCreate_v3
time=2024-07-04T17:13:20.135+08:00 level=DEBUG source=gpu.go:385 msg="Unable to load nvcuda" library=/usr/lib64/libcuda.so.418.74 error="symbol lookup for cuCtxCreate_v3 failed: /usr/lib64/libcuda.so.418.74: undefined symbol: cuCtxCreate_v3"
time=2024-07-04T17:13:20.135+08:00 level=DEBUG source=gpu.go:304 msg="Searching for GPU library" name=libcudart.so*"
```

Could you please clarify what this means and how I can resolve this issue?
[debug-ollama.txt](https://github.com/user-attachments/files/16104245/debug-ollama.txt)
![nvidia-smi](https://github.com/ollama/ollama/assets/174697336/16f66b47-fa5c-4be6-a748-7e5173e69279)
[log20240704.txt](https://github.com/user-attachments/files/16104248/log20240704.txt)
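The two failures above come from the same probing step: Ollama's `gpu.go` tries to `dlopen` each candidate `libcuda.so` and look up `cuCtxCreate_v3`. A "wrong ELF class: ELFCLASS32" error means a 32-bit library was tried from a 64-bit process; an "undefined symbol" error means the driver library is too old to export that entry point. The probe can be sketched with a hypothetical `ctypes` helper (not Ollama code; the paths and symbol name are taken from the log above):

```python
# Hypothetical sketch of the dlopen/dlsym probe that gpu.go performs on
# each candidate libcuda.so path. probe_symbol is an illustrative helper,
# not part of Ollama.
import ctypes

def probe_symbol(lib_path: str, symbol: str) -> str:
    """Try to dlopen lib_path and look up symbol; return a status string."""
    try:
        lib = ctypes.CDLL(lib_path)  # raises OSError, e.g. "wrong ELF class" for a 32-bit .so
    except OSError as e:
        return f"unable to load: {e}"
    if hasattr(lib, symbol):         # dlsym succeeded: the driver exports the symbol
        return "ok"
    return f"undefined symbol: {symbol}"  # driver predates this entry point

if __name__ == "__main__":
    # The two paths from the debug log above; on a machine with driver
    # 418.74 both fail, for the two different reasons shown in the log.
    for path in ("/usr/lib/libcuda.so.418.74", "/usr/lib64/libcuda.so.418.74"):
        print(path, "->", probe_symbol(path, "cuCtxCreate_v3"))
```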

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.1.40

GiteaMirror added the bug label 2026-04-12 14:05:40 -05:00

@yake-cyber commented on GitHub (Jul 5, 2024):

I ran `cat /proc/driver/nvidia/version`, and the result is:

```
NVRM version: NVIDIA UNIX x86_64 Kernel Module 418.74 Wed May 1 11:49:41 CDT 2019
```


@jmorganca commented on GitHub (Jul 11, 2024):

Hi there, I believe you'll need Cuda 11 or later for Ollama – installing the latest Nvidia drivers for your graphics card should fix this.
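The advice above can be checked mechanically: parse the driver version out of the `NVRM version:` line from `/proc/driver/nvidia/version` (the string posted earlier in this thread) and compare it against the minimum driver for CUDA 11. A minimal sketch, assuming a threshold of driver major version 450 taken from NVIDIA's CUDA/driver compatibility table (verify against the current table for your toolkit release):

```python
# Sketch: extract the NVIDIA driver version from an NVRM line and test it
# against an assumed ~450 minimum major version for CUDA 11 on Linux.
import re

CUDA11_MIN_MAJOR = 450  # assumption from NVIDIA's compatibility table

def driver_version(nvrm_line: str) -> tuple[int, ...]:
    """Extract the dotted driver version (e.g. 418.74) from an NVRM line."""
    m = re.search(r"Kernel Module\s+(\d+(?:\.\d+)*)", nvrm_line)
    if not m:
        raise ValueError("no driver version found")
    return tuple(int(p) for p in m.group(1).split("."))

def supports_cuda11(nvrm_line: str) -> bool:
    return driver_version(nvrm_line)[0] >= CUDA11_MIN_MAJOR

# The line @yake-cyber posted above: driver 418.74 is well below 450,
# which matches the "undefined symbol: cuCtxCreate_v3" failure.
line = "NVRM version: NVIDIA UNIX x86_64 Kernel Module 418.74 Wed May 1 11:49:41 CDT 2019"
print(driver_version(line), supports_cuda11(line))  # (418, 74) False
```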


@yake-cyber commented on GitHub (Jul 11, 2024):

Updated the NVIDIA driver from 418.74 to 515.105.01.
Updated CUDA from 10.1 to 11.7.


@birkhoff2017 commented on GitHub (Jul 24, 2024):

> update nvidia driver from 418.74 to 515.105.01; update cuda from 10.1 to 11.7

Did updating the CUDA version solve this problem?


@thomasWos commented on GitHub (Aug 2, 2024):

Same error on my laptop:

```
time=2024-08-02T12:29:29.168+10:00 level=INFO source=routes.go:1156 msg="Listening on 127.0.0.1:11434 (version 0.3.2)"
time=2024-08-02T12:29:29.170+10:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11.3 rocm_v6.1]"
time=2024-08-02T12:29:29.170+10:00 level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
time=2024-08-02T12:29:29.208+10:00 level=INFO source=gpu.go:564 msg="unable to load cuda driver library" library=C:\WINDOWS\system32\nvcuda.dll error="symbol lookup for cuCtxCreate_v3 failed: The specified procedure could not be found.\r\n"
time=2024-08-02T12:29:29.222+10:00 level=INFO source=gpu.go:346 msg="no compatible GPUs were discovered"
```

CUDA Version: 11.0


@Yuhao-Luo commented on GitHub (Aug 12, 2024):

Same error on a P40.
NVIDIA-SMI 460.73.01
CUDA Version: 11.2

```
> ollama serve
2024/08/12 19:46:02 routes.go:1033: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-12T19:46:02.378+08:00 level=INFO source=images.go:751 msg="total blobs: 0"
time=2024-08-12T19:46:02.378+08:00 level=INFO source=images.go:758 msg="total unused blobs removed: 0"
time=2024-08-12T19:46:02.379+08:00 level=INFO source=routes.go:1080 msg="Listening on 127.0.0.1:11434 (version 0.2.1)"
time=2024-08-12T19:46:02.379+08:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama3129743963/runners
time=2024-08-12T19:46:07.169+08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx2 cuda_v11 rocm_v60101 cpu cpu_avx]"
time=2024-08-12T19:46:07.169+08:00 level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
time=2024-08-12T19:46:07.176+08:00 level=INFO source=gpu.go:534 msg="unable to load cuda driver library" library=/usr/lib/x86_64-linux-gnu/libcuda.so.460.73.01 error="symbol lookup for cuCtxCreate_v3 failed: /usr/lib/x86_64-linux-gnu/libcuda.so.460.73.01: undefined symbol: cuCtxCreate_v3"
time=2024-08-12T19:46:07.676+08:00 level=INFO source=types.go:103 msg="inference compute" id=GPU-de51333b-d42b-ed7c-98c6-29465e69005d library=cuda compute=6.1 driver=0.0 name="" total="22.4 GiB" available="22.2 GiB"
```

@thomasWos commented on GitHub (Aug 13, 2024):

I have upgraded my drivers; CUDA Version is now 12.2.
It has worked fine since.

Reference: github-starred/ollama#3436