[GH-ISSUE #3751] Unable to load cudart CUDA management library #2314

Closed
opened 2026-04-12 12:37:13 -05:00 by GiteaMirror · 10 comments

Originally created by @andrewssobral on GitHub (Apr 19, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3751

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Hello everyone,
Does anyone know how to fix this?

~$ docker run -d --gpus=all -e OLLAMA_DEBUG=1 -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
~$ docker logs -f ollama
0d91d49d1509746d6a702c9dac6a36174a8f92e06a81fb58a77a10575b7da24f
time=2024-04-19T09:05:39.733Z level=INFO source=images.go:817 msg="total blobs: 33"
time=2024-04-19T09:05:39.734Z level=INFO source=images.go:824 msg="total unused blobs removed: 0"
time=2024-04-19T09:05:39.734Z level=INFO source=routes.go:1143 msg="Listening on [::]:11434 (version 0.1.32)"
time=2024-04-19T09:05:39.735Z level=INFO source=payload.go:28 msg="extracting embedded files" dir=/tmp/ollama2748111949/runners
time=2024-04-19T09:05:39.735Z level=DEBUG source=payload.go:160 msg=extracting variant=cpu file=build/linux/x86_64/cpu/bin/ollama_llama_server.gz
time=2024-04-19T09:05:39.735Z level=DEBUG source=payload.go:160 msg=extracting variant=cpu_avx file=build/linux/x86_64/cpu_avx/bin/ollama_llama_server.gz
time=2024-04-19T09:05:39.735Z level=DEBUG source=payload.go:160 msg=extracting variant=cpu_avx2 file=build/linux/x86_64/cpu_avx2/bin/ollama_llama_server.gz
time=2024-04-19T09:05:39.735Z level=DEBUG source=payload.go:160 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcublas.so.11.gz
time=2024-04-19T09:05:39.735Z level=DEBUG source=payload.go:160 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcublasLt.so.11.gz
time=2024-04-19T09:05:39.735Z level=DEBUG source=payload.go:160 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcudart.so.11.0.gz
time=2024-04-19T09:05:39.735Z level=DEBUG source=payload.go:160 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/ollama_llama_server.gz
time=2024-04-19T09:05:39.735Z level=DEBUG source=payload.go:160 msg=extracting variant=rocm_v60002 file=build/linux/x86_64/rocm_v60002/bin/deps.txt.gz
time=2024-04-19T09:05:39.735Z level=DEBUG source=payload.go:160 msg=extracting variant=rocm_v60002 file=build/linux/x86_64/rocm_v60002/bin/ollama_llama_server.gz
time=2024-04-19T09:05:43.102Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama2748111949/runners/cpu
time=2024-04-19T09:05:43.102Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama2748111949/runners/cpu_avx
time=2024-04-19T09:05:43.102Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama2748111949/runners/cpu_avx2
time=2024-04-19T09:05:43.102Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama2748111949/runners/cuda_v11
time=2024-04-19T09:05:43.102Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama2748111949/runners/rocm_v60002
time=2024-04-19T09:05:43.102Z level=INFO source=payload.go:41 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60002]"
time=2024-04-19T09:05:43.102Z level=DEBUG source=payload.go:42 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-04-19T09:05:43.102Z level=INFO source=gpu.go:121 msg="Detecting GPU type"
time=2024-04-19T09:05:43.102Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
time=2024-04-19T09:05:43.102Z level=DEBUG source=gpu.go:286 msg="gpu management search paths: [/tmp/ollama2748111949/runners/cuda*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so* /usr/local/nvidia/lib/libcudart.so** /usr/local/nvidia/lib64/libcudart.so**]"
time=2024-04-19T09:05:43.102Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama2748111949/runners/cuda_v11/libcudart.so.11.0]"
wiring cudart library functions in /tmp/ollama2748111949/runners/cuda_v11/libcudart.so.11.0
dlsym: cudaSetDevice
dlsym: cudaDeviceSynchronize
dlsym: cudaDeviceReset
dlsym: cudaMemGetInfo
dlsym: cudaGetDeviceCount
dlsym: cudaDeviceGetAttribute
dlsym: cudaDriverGetVersion
cudaSetDevice err: 999
time=2024-04-19T09:05:43.129Z level=INFO source=gpu.go:343 msg="Unable to load cudart CUDA management library /tmp/ollama2748111949/runners/cuda_v11/libcudart.so.11.0: cudart init failure: 999"
time=2024-04-19T09:05:43.129Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-04-19T09:05:43.129Z level=DEBUG source=gpu.go:286 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /usr/lib/wsl/drivers/*/libnvidia-ml.so* /opt/cuda/lib64/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so* /usr/local/nvidia/lib/libnvidia-ml.so* /usr/local/nvidia/lib64/libnvidia-ml.so*]"
time=2024-04-19T09:05:43.130Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.54.15]"
wiring nvidia management library functions in /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.54.15
dlsym: nvmlInit_v2
dlsym: nvmlShutdown
dlsym: nvmlDeviceGetHandleByIndex
dlsym: nvmlDeviceGetMemoryInfo
dlsym: nvmlDeviceGetCount_v2
dlsym: nvmlDeviceGetCudaComputeCapability
dlsym: nvmlSystemGetDriverVersion
dlsym: nvmlDeviceGetName
dlsym: nvmlDeviceGetSerial
dlsym: nvmlDeviceGetVbiosVersion
dlsym: nvmlDeviceGetBoardPartNumber
dlsym: nvmlDeviceGetBrand
CUDA driver version: 550.54.15
time=2024-04-19T09:05:43.152Z level=INFO source=gpu.go:137 msg="Nvidia GPU detected via nvidia-ml"
time=2024-04-19T09:05:43.152Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
[0] CUDA device name: NVIDIA GeForce RTX 2070
[0] CUDA part number: 
nvmlDeviceGetSerial failed: 3
[0] CUDA vbios version: 90.06.18.40.BA
[0] CUDA brand: 5
[0] CUDA totalMem 8589934592
[0] CUDA freeMem 8089632768
[1] CUDA device name: NVIDIA GeForce RTX 2070
[1] CUDA part number: 
nvmlDeviceGetSerial failed: 3
[1] CUDA vbios version: 90.06.18.00.3A
[1] CUDA brand: 5
[1] CUDA totalMem 8589934592
[1] CUDA freeMem 8163295232
time=2024-04-19T09:05:43.173Z level=INFO source=gpu.go:182 msg="[nvidia-ml] NVML CUDA Compute Capability detected: 7.5"
releasing nvml library
~$ docker exec -it ollama ollama --version
ollama version is 0.1.32
~$ docker exec -it ollama nvidia-smi
Fri Apr 19 09:08:39 2024       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.15              Driver Version: 550.54.15      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 2070        Off |   00000000:0A:00.0  On |                  N/A |
|  0%   38C    P8              1W /  175W |      74MiB /   8192MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA GeForce RTX 2070        Off |   00000000:0B:00.0 Off |                  N/A |
| 36%   33C    P8              5W /  175W |       5MiB /   8192MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Thu_Mar_28_02:18:24_PDT_2024
Cuda compilation tools, release 12.4, V12.4.131
Build cuda_12.4.r12.4/compiler.34097967_0
~$ cat /etc/os-release 
PRETTY_NAME="Ubuntu 22.04.4 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.4 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
~$ lscpu
Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         43 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  16
  On-line CPU(s) list:   0-15
Vendor ID:               AuthenticAMD
  Model name:            AMD Ryzen 7 2700X Eight-Core Processor
    CPU family:          23
    Model:               8
    Thread(s) per core:  2
    Core(s) per socket:  8
    Socket(s):           1
    Stepping:            2
    Frequency boost:     disabled
    CPU max MHz:         4000.0000
    CPU min MHz:         2200.0000
    BogoMIPS:            7984.94
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pd
                         pe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4
                         _1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osv
                         w skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep 
                         bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale
                          vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca sme sev sev_es
Virtualization features: 
  Virtualization:        AMD-V
Caches (sum of all):     
  L1d:                   256 KiB (8 instances)
  L1i:                   512 KiB (8 instances)
  L2:                    4 MiB (8 instances)
  L3:                    16 MiB (2 instances)
NUMA:                    
  NUMA node(s):          1
  NUMA node0 CPU(s):     0-15
Vulnerabilities:         
  Gather data sampling:  Not affected
  Itlb multihit:         Not affected
  L1tf:                  Not affected
  Mds:                   Not affected
  Meltdown:              Not affected
  Mmio stale data:       Not affected
  Retbleed:              Mitigation; untrained return thunk; SMT vulnerable
  Spec rstack overflow:  Mitigation; safe RET
  Spec store bypass:     Mitigation; Speculative Store Bypass disabled via prctl and seccomp
  Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:            Mitigation; Retpolines, IBPB conditional, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
  Srbds:                 Not affected
  Tsx async abort:       Not affected

OS

Linux, Docker

GPU

Nvidia

CPU

AMD

Ollama version

0.1.32

GiteaMirror added the bug, nvidia labels 2026-04-12 12:37:13 -05:00

@mateuszwrobel commented on GitHub (Apr 19, 2024):

After restarting the system it worked for me. Sometimes "turning it off and on again" solves the problem... ^^'


@andrewssobral commented on GitHub (Apr 19, 2024):

@mateuszwrobel yes, sometimes rebooting the machine works, but the problem comes back once you stop and start the ollama container again. Moreover, my main problem is that I am on a production server that is very difficult to reboot; it would be nice to have a fix or workaround that resolves this without rebooting the machine.


@dhiltgen commented on GitHub (Apr 19, 2024):

In version 0.1.32 we're trying 2 different strategies to discover the NVIDIA GPUs - first we try the cudart library, then if that fails, we fall back to the nvidia-ml management library. From the log output above, it looks like cudart fails with an unknown error (999), but we do find the GPUs with the nvidia-ml lib. The log didn't include any model run, so I can't see whether it fell back to CPU mode or actually worked on the GPU. I'm guessing that it doesn't work on the GPU and falls back to CPU. Can you confirm?

Can you check the host logs to see if there's any nvidia driver errors/warnings being reported? sudo dmesg -l err may be helpful.
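For completeness, the debug log above also shows an escape hatch: "Override detection logic by setting OLLAMA_LLM_LIBRARY". A sketch of forcing the CUDA runner, reusing the variant name from the "Dynamic LLM libraries" log line; this is only worth trying if the driver itself is healthy:

~$ docker rm -f ollama   # remove the container; the named volume survives
~$ docker run -d --gpus=all -e OLLAMA_DEBUG=1 -e OLLAMA_LLM_LIBRARY=cuda_v11 -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama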


@andrewssobral commented on GitHub (Apr 19, 2024):

@dhiltgen thanks for your feedback. Yes, I confirm the model falls back to CPU.

Here are the command outputs:

~$ sudo dmesg -l err
[    5.396539] nvidia-gpu 0000:0a:00.3: i2c timeout error e0000000
[    5.396546] ucsi_ccg 0-0008: i2c_transfer failed -110
[    5.396549] ucsi_ccg 0-0008: ucsi_ccg_init failed - -110
[    6.417040] nvidia-gpu 0000:0b:00.3: i2c timeout error e0000000
[    6.417935] ucsi_ccg 3-0008: i2c_transfer failed -110
[    6.418822] ucsi_ccg 3-0008: ucsi_ccg_init failed - -110
[   17.388504] sep5_45: Driver loading... sym_lookup_func_addr=ffffffff8139b6c0
[  208.655318] [drm:nv_drm_master_set [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000a00] Failed to grab modeset ownership
[  208.656060] [drm:nv_drm_master_set [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000b00] Failed to grab modeset ownership
[  249.734358] [drm:nv_drm_master_set [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000a00] Failed to grab modeset ownership
[  249.735059] [drm:nv_drm_master_set [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000b00] Failed to grab modeset ownership
[39245.858769] ata2: softreset failed (device not ready)
[76182.646013] ata2: softreset failed (device not ready)

Taking a look at https://github.com/ollama/ollama/issues/2934 and https://askubuntu.com/questions/1228423/how-do-i-fix-cuda-breaking-after-suspend, it seems the problem runs deeper and is not related to ollama itself.

Unfortunately:

sudo rmmod nvidia_uvm
sudo modprobe nvidia_uvm

does not work for me because active apps (the desktop) are using the GPU. Maybe the only solution is to follow https://github.com/ollama/ollama/issues/2934#issuecomment-2011778566 or https://askubuntu.com/questions/1228423/how-do-i-fix-cuda-breaking-after-suspend; in summary:

nvidia-driver-550 with cuda-12.4 works with cccplex's solution. But there's no need to enable the suspend service in the newest drivers.

What you would want to copy and paste in the file /etc/modprobe.d/nvidia-power-management.conf is

options nvidia NVreg_PreserveVideoMemoryAllocations=1
options nvidia NVreg_TemporaryFilePath=/tmp
If you try to enable the suspend and resume services, you'll find that it's masked, DO NOT unmask it or else it gets removed. As I suppose one might expect, the service is only utilized on suspend and resume.

😔
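Putting the pieces above together, a sketch of a no-reboot recovery, assuming everything that holds the GPU can be stopped first (fuser shows what that is):

~$ sudo fuser -v /dev/nvidia*   # list processes holding the GPU devices; stop them first
~$ sudo rmmod nvidia_uvm        # refuses with "module is in use" until they exit
~$ sudo modprobe nvidia_uvm
~$ docker restart ollama        # let ollama redetect the GPU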


@andrewssobral commented on GitHub (Apr 20, 2024):

I'm closing this because it is not related to ollama itself.


@dhiltgen commented on GitHub (Apr 22, 2024):

@andrewssobral if there's a good recipe for others to follow, maybe this is something you could contribute to our troubleshooting.md


@Pathsis commented on GitHub (May 4, 2024):

> In version 0.1.32 we're trying 2 different strategies to discover the NVIDIA GPUs - first we try the cudart library, then if that fails, we fall back to the nvidia-ml management library. From the log output above, it looks like cudart fails with an unknown error (999) but we do find the GPUs with the nvidia-ml lib. The log didn't include any model run, so I can't see if it fell back to CPU mode, or actually worked on the GPU. I'm guessing that it doesn't work on GPU and falls back to CPU. Can you confirm?
>
> Can you check the host logs to see if there's any nvidia driver errors/warnings being reported? sudo dmesg -l err may be helpful.

Hi @dhiltgen, I'm encountering a similar issue. Running nvidia-smi, I always see the process section containing the following:

 | 0 N/A N/A 71947 C ... .unners/cuda_v11/ollama_llama_server 3670MiB | 

This seems to indicate that ollama is using CUDA v11, but I have CUDA v12.2 installed on my machine.

 ~$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Aug_15_22:02:13_PDT_2023
Cuda compilation tools, release 12.2, V12.2.140
Build cuda_12.2.r12.2/compiler.33191640_0

What the hell is going on here? Also, this ... .unners/cuda_v11/ollama_llama_server process often disappears on its own, to the point where I have to reboot the system to revive it.

I'm currently running the latest version (0.1.33) of ollama, on Ubuntu 24.04.

I also see variables in ollama.service where the PATH points to the cuda-12.2 directory:

Environment="PATH=/home/midtail/.local/share/pnpm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/snap/bin:/usr/local/cuda-12.2/bin"

Much appreciated!


@dhiltgen commented on GitHub (May 4, 2024):

@Pathsis we compile our official builds against cuda v11 for maximum compatibility across operating systems, driver versions and GPUs. The newer drivers are backwards compatible, but cuda v12 libraries will not work against older drivers and operating systems. You can build from source if you prefer to get it linked against a newer cuda library.
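A rough sketch of that, following the repository's documented Go build flow at the time (treat the exact steps as an assumption and check the project's development docs for your version):

~$ git clone https://github.com/ollama/ollama.git && cd ollama
~$ go generate ./...   # builds the bundled llama.cpp runners against the locally installed CUDA toolkit
~$ go build .          # produces an ollama binary linked to your local CUDA (e.g. 12.x)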

> Also, this ... .unners/cuda_v11/ollama_llama_server process often disappears on its own, to the point where I have to reboot the system to revive it.

We had bugs in prior versions where this was a known problem with tmp cleaners, but that should be resolved now. Sending another request to the server should trigger it to refresh these extracted files and recover. If you're still seeing this on 0.1.33, please go ahead and file a new issue and share your server log from around the time when the file disappears and the server doesn't recover.
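A minimal way to send that "another request", assuming the default port and an already-pulled model such as llama3:

~$ curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "hi"}'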


@Pathsis commented on GitHub (May 5, 2024):

@dhiltgen Thanks for the reply! As for which CUDA version is used, I have no more problems for now.

For the ... .unners/cuda_v11/ollama_llama_server process, I still have questions.

I'm using the latest version of ollama, and I often see this process disappear by itself. When I then run, for example, ollama run llama3 again, nvidia-smi still doesn't show the process, and running

 systemctl daemon-reload 
 systemctl restart ollama

doesn't help either.

I would like to know how to restore this process without rebooting the system. Thanks again!


@dhiltgen commented on GitHub (May 6, 2024):

@Pathsis what you are describing doesn't sound related to this issue. Please open a new issue describing your scenario, and please include the server log when it stops working so we can investigate.
