[GH-ISSUE #7006] Ollama can't use my Nvidia GPU anymore? #66494

Closed
opened 2026-05-04 06:48:46 -05:00 by GiteaMirror · 7 comments
Owner

Originally created by @pdavis68 on GitHub (Sep 27, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7006

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

I'm running Ollama with the following command:

docker run --name ollama --gpus all -p 11434:11434 -e OLLAMA_DEBUG=1 -v ollama:/root/.ollama -d ollama/ollama:latest serve

During startup, the logs show errors initializing cudart (see logs at the end), and it's clearly not using the GPU.

From inside the container, if I run nvidia-smi, it sees my RTX 3050, so that has me confused.

/usr/bin/nvidia-smi
Fri Sep 27 18:29:47 2024       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 555.52.01              Driver Version: 555.99         CUDA Version: 12.5     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3050        On  |   00000000:01:00.0  On |                  N/A |
|  0%   45C    P8             11W /  130W |    1408MiB /   8192MiB |      6%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A        42      G   /Xwayland                                   N/A      |
|    0   N/A  N/A        44      G   /Xwayland                                   N/A      |
+-----------------------------------------------------------------------------------------+

This is what nvidia-smi in the host returns:

Fri Sep 27 14:02:26 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 555.99                 Driver Version: 555.99         CUDA Version: 12.5     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                  Driver-Model | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3050      WDDM  |   00000000:01:00.0  On |                  N/A |
|  0%   42C    P8             11W /  130W |     875MiB /   8192MiB |      5%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
... [a bunch of processes]
+-----------------------------------------------------------------------------------------+

It used to work; I periodically upgrade to the latest image. I don't recall when it last worked with the GPU since I hadn't used it recently, but I noticed this morning that it wasn't using the GPU, so I upgraded it with my normal set of upgrade commands:

docker pull ollama/ollama:latest

docker stop /ollama 
docker rm /ollama

docker run --name ollama --gpus all -p 11434:11434 -v ollama:/root/.ollama -d ollama/ollama:latest serve
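To separate "the container can see the device" from "the container can actually initialize CUDA", it may help to test with a plain CUDA base image outside of Ollama (the image tag below is just an example; substitute any available `nvidia/cuda` tag):

```shell
# Sanity check: can Docker's --gpus passthrough run CUDA at all?
# (tag is an assumption; any nvidia/cuda base tag works)
docker run --rm --gpus all nvidia/cuda:12.5.0-base-ubuntu22.04 nvidia-smi

# Inspect which CUDA driver libraries the ollama container sees
docker exec ollama ls -l /usr/lib/x86_64-linux-gnu/libcuda.so.1 /usr/lib/wsl/lib/
```

If the bare CUDA image also fails, the problem is in the Docker/driver layer rather than in Ollama itself.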

Complete logs:

2024-09-27 13:31:20 2024/09/27 18:31:20 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.684Z level=INFO source=images.go:753 msg="total blobs: 59"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.693Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.695Z level=INFO source=routes.go:1200 msg="Listening on [::]:11434 (version 0.3.12)"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu/ollama_llama_server
2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx/ollama_llama_server
2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server
2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v11/ollama_llama_server
2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v12/ollama_llama_server
2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cuda_v12 cpu cpu_avx cpu_avx2 cuda_v11]"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=DEBUG source=common.go:50 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=DEBUG source=sched.go:105 msg="starting llm scheduler"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=DEBUG source=gpu.go:86 msg="searching for GPU discovery libraries for NVIDIA"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.701Z level=DEBUG source=gpu.go:468 msg="Searching for GPU library" name=libcuda.so*
2024-09-27 13:31:20 time=2024-09-27T18:31:20.701Z level=DEBUG source=gpu.go:491 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.718Z level=DEBUG source=gpu.go:525 msg="discovered GPU libraries" paths="[/usr/lib/x86_64-linux-gnu/libcuda.so.1 /usr/lib/wsl/drivers/nv_dispig.inf_amd64_cc569e59ca39c5fe/libcuda.so.1.1]"
2024-09-27 13:31:20 cuInit err: 500
2024-09-27 13:31:20 time=2024-09-27T18:31:20.727Z level=INFO source=gpu.go:568 msg="unable to load cuda driver library" library=/usr/lib/x86_64-linux-gnu/libcuda.so.1 error="cuda driver library init failure: 500"
2024-09-27 13:31:20 cuInit err: 500
2024-09-27 13:31:20 time=2024-09-27T18:31:20.727Z level=INFO source=gpu.go:568 msg="unable to load cuda driver library" library=/usr/lib/wsl/drivers/nv_dispig.inf_amd64_cc569e59ca39c5fe/libcuda.so.1.1 error="cuda driver library init failure: 500"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.727Z level=DEBUG source=gpu.go:468 msg="Searching for GPU library" name=libcudart.so*
2024-09-27 13:31:20 time=2024-09-27T18:31:20.727Z level=DEBUG source=gpu.go:491 msg="gpu library search" globs="[/usr/lib/ollama/libcudart.so* /usr/local/nvidia/lib/libcudart.so* /usr/local/nvidia/lib64/libcudart.so* /usr/lib/ollama/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.729Z level=DEBUG source=gpu.go:525 msg="discovered GPU libraries" paths="[/usr/lib/ollama/libcudart.so.12.4.99 /usr/lib/ollama/libcudart.so.11.3.109]"
2024-09-27 13:31:20 cudaSetDevice err: 500
2024-09-27 13:31:20 time=2024-09-27T18:31:20.736Z level=DEBUG source=gpu.go:537 msg="Unable to load cudart" library=/usr/lib/ollama/libcudart.so.12.4.99 error="cudart init failure: 500"
2024-09-27 13:31:20 cudaSetDevice err: 500
2024-09-27 13:31:20 time=2024-09-27T18:31:20.743Z level=DEBUG source=gpu.go:537 msg="Unable to load cudart" library=/usr/lib/ollama/libcudart.so.11.3.109 error="cudart init failure: 500"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.743Z level=DEBUG source=amd_linux.go:376 msg="amdgpu driver not detected /sys/module/amdgpu"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.743Z level=INFO source=gpu.go:347 msg="no compatible GPUs were discovered"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.743Z level=INFO source=types.go:107 msg="inference compute" id=0 library=cpu variant=avx compute="" driver=0.0 name="" total="15.6 GiB" available="12.5 GiB"
2024-09-27 13:50:04 2024/09/27 18:50:04 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.245Z level=INFO source=images.go:753 msg="total blobs: 59"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.253Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.254Z level=INFO source=routes.go:1200 msg="Listening on [::]:11434 (version 0.3.12)"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.257Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu/ollama_llama_server
2024-09-27 13:50:04 time=2024-09-27T18:50:04.257Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx/ollama_llama_server
2024-09-27 13:50:04 time=2024-09-27T18:50:04.257Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server
2024-09-27 13:50:04 time=2024-09-27T18:50:04.257Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v11/ollama_llama_server
2024-09-27 13:50:04 time=2024-09-27T18:50:04.257Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v12/ollama_llama_server
2024-09-27 13:50:04 time=2024-09-27T18:50:04.257Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12]"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.257Z level=DEBUG source=common.go:50 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.257Z level=DEBUG source=sched.go:105 msg="starting llm scheduler"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.258Z level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.258Z level=DEBUG source=gpu.go:86 msg="searching for GPU discovery libraries for NVIDIA"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.259Z level=DEBUG source=gpu.go:468 msg="Searching for GPU library" name=libcuda.so*
2024-09-27 13:50:04 time=2024-09-27T18:50:04.259Z level=DEBUG source=gpu.go:491 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.259Z level=DEBUG source=gpu.go:525 msg="discovered GPU libraries" paths="[/usr/lib/x86_64-linux-gnu/libcuda.so.1 /usr/lib/wsl/drivers/nv_dispig.inf_amd64_cc569e59ca39c5fe/libcuda.so.1.1]"
2024-09-27 13:50:04 cuInit err: 500
2024-09-27 13:50:04 time=2024-09-27T18:50:04.491Z level=INFO source=gpu.go:568 msg="unable to load cuda driver library" library=/usr/lib/x86_64-linux-gnu/libcuda.so.1 error="cuda driver library init failure: 500"
2024-09-27 13:50:04 cuInit err: 500
2024-09-27 13:50:04 time=2024-09-27T18:50:04.491Z level=INFO source=gpu.go:568 msg="unable to load cuda driver library" library=/usr/lib/wsl/drivers/nv_dispig.inf_amd64_cc569e59ca39c5fe/libcuda.so.1.1 error="cuda driver library init failure: 500"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.491Z level=DEBUG source=gpu.go:468 msg="Searching for GPU library" name=libcudart.so*
2024-09-27 13:50:04 time=2024-09-27T18:50:04.491Z level=DEBUG source=gpu.go:491 msg="gpu library search" globs="[/usr/lib/ollama/libcudart.so* /usr/local/nvidia/lib/libcudart.so* /usr/local/nvidia/lib64/libcudart.so* /usr/lib/ollama/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.492Z level=DEBUG source=gpu.go:525 msg="discovered GPU libraries" paths="[/usr/lib/ollama/libcudart.so.12.4.99 /usr/lib/ollama/libcudart.so.11.3.109]"
2024-09-27 13:50:04 cudaSetDevice err: 500
2024-09-27 13:50:04 time=2024-09-27T18:50:04.517Z level=DEBUG source=gpu.go:537 msg="Unable to load cudart" library=/usr/lib/ollama/libcudart.so.12.4.99 error="cudart init failure: 500"
2024-09-27 13:50:04 cudaSetDevice err: 500
2024-09-27 13:50:04 time=2024-09-27T18:50:04.522Z level=DEBUG source=gpu.go:537 msg="Unable to load cudart" library=/usr/lib/ollama/libcudart.so.11.3.109 error="cudart init failure: 500"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.522Z level=DEBUG source=amd_linux.go:376 msg="amdgpu driver not detected /sys/module/amdgpu"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.522Z level=INFO source=gpu.go:347 msg="no compatible GPUs were discovered"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.522Z level=INFO source=types.go:107 msg="inference compute" id=0 library=cpu variant=avx compute="" driver=0.0 name="" total="15.6 GiB" available="13.9 GiB"

Update

I did some digging on the error:

CUDA_ERROR_NOT_FOUND = 500
This indicates that a named symbol was not found. Examples of symbols are global/constant variable names, driver function names, texture names, and surface names.

Maybe some kind of driver mismatch? Hope that helps.
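The libcuda paths in the logs (`/usr/lib/wsl/...`) suggest this is Docker Desktop on WSL2. One low-cost thing to try — a sketch, not a confirmed fix — is restarting the WSL VM so the driver mounts under `/usr/lib/wsl` get recreated, then restarting the container:

```shell
# From Windows (PowerShell/cmd): shut down the WSL2 VM so the
# /usr/lib/wsl driver mounts are recreated on next start
wsl --shutdown

# After Docker Desktop comes back up, restart the container and
# check whether GPU discovery succeeds this time
docker start ollama
docker logs ollama 2>&1 | grep -i -E "cuInit|compatible GPUs"
```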

OS

Docker

GPU

Nvidia

CPU

Intel

Ollama version

0.3.12

Originally created by @pdavis68 on GitHub (Sep 27, 2024). Original GitHub issue: https://github.com/ollama/ollama/issues/7006 Originally assigned to: @dhiltgen on GitHub. ### What is the issue? I'm running Ollama with the following command: `docker run --name ollama --gpus all -p 11434:11434 -e OLLAMA_DEBUG=1 -v ollama:/root/.ollama -d ollama/ollama:latest serve` During startup, the logs are getting errors initing cudart (see logs at the end) and it's clearly not using the GPU. From inside the container, if I run nvidia-smi, it sees my RTX 3050, so that has me confused. ``` /usr/bin/nvidia-smi Fri Sep 27 18:29:47 2024 +-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 555.52.01 Driver Version: 555.99 CUDA Version: 12.5 | |-----------------------------------------+------------------------+----------------------+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. 
| |=========================================+========================+======================| | 0 NVIDIA GeForce RTX 3050 On | 00000000:01:00.0 On | N/A | | 0% 45C P8 11W / 130W | 1408MiB / 8192MiB | 6% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+ +-----------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=========================================================================================| | 0 N/A N/A 42 G /Xwayland N/A | | 0 N/A N/A 44 G /Xwayland N/A | +-----------------------------------------------------------------------------------------+ ``` This is what nvidia-smi in the host returns: ``` Fri Sep 27 14:02:26 2024 +-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 555.99 Driver Version: 555.99 CUDA Version: 12.5 | |-----------------------------------------+------------------------+----------------------+ | GPU Name Driver-Model | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |=========================================+========================+======================| | 0 NVIDIA GeForce RTX 3050 WDDM | 00000000:01:00.0 On | N/A | | 0% 42C P8 11W / 130W | 875MiB / 8192MiB | 5% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+ +-----------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=========================================================================================| ... [a bunch of processes] +-----------------------------------------------------------------------------------------+ ``` It used to work. I periodically upgrade to the latest. 
I don't recall when it last worked with the GPU, I hadn't used it recently, but I noticed this morning it wasn't using the GPU, so I upgraded it with my normal set of upgrade commands: ``` docker pull ollama/ollama:latest docker stop /ollama docker rm /ollama docker run --name ollama --gpus all -p 11434:11434 -v ollama:/root/.ollama -d ollama/ollama:latest serve ``` Complete logs: ``` 2024-09-27 13:31:20 2024/09/27 18:31:20 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]" 2024-09-27 13:31:20 time=2024-09-27T18:31:20.684Z level=INFO source=images.go:753 msg="total blobs: 59" 2024-09-27 13:31:20 time=2024-09-27T18:31:20.693Z level=INFO source=images.go:760 msg="total unused blobs removed: 0" 2024-09-27 13:31:20 time=2024-09-27T18:31:20.695Z level=INFO source=routes.go:1200 msg="Listening on [::]:11434 (version 0.3.12)" 2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu/ollama_llama_server 2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx/ollama_llama_server 2024-09-27 
13:31:20 time=2024-09-27T18:31:20.700Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server 2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v11/ollama_llama_server 2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v12/ollama_llama_server 2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cuda_v12 cpu cpu_avx cpu_avx2 cuda_v11]" 2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=DEBUG source=common.go:50 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY" 2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=DEBUG source=sched.go:105 msg="starting llm scheduler" 2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=INFO source=gpu.go:199 msg="looking for compatible GPUs" 2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=DEBUG source=gpu.go:86 msg="searching for GPU discovery libraries for NVIDIA" 2024-09-27 13:31:20 time=2024-09-27T18:31:20.701Z level=DEBUG source=gpu.go:468 msg="Searching for GPU library" name=libcuda.so* 2024-09-27 13:31:20 time=2024-09-27T18:31:20.701Z level=DEBUG source=gpu.go:491 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]" 2024-09-27 13:31:20 time=2024-09-27T18:31:20.718Z level=DEBUG source=gpu.go:525 msg="discovered GPU libraries" paths="[/usr/lib/x86_64-linux-gnu/libcuda.so.1 
/usr/lib/wsl/drivers/nv_dispig.inf_amd64_cc569e59ca39c5fe/libcuda.so.1.1]" 2024-09-27 13:31:20 cuInit err: 500 2024-09-27 13:31:20 time=2024-09-27T18:31:20.727Z level=INFO source=gpu.go:568 msg="unable to load cuda driver library" library=/usr/lib/x86_64-linux-gnu/libcuda.so.1 error="cuda driver library init failure: 500" 2024-09-27 13:31:20 cuInit err: 500 2024-09-27 13:31:20 time=2024-09-27T18:31:20.727Z level=INFO source=gpu.go:568 msg="unable to load cuda driver library" library=/usr/lib/wsl/drivers/nv_dispig.inf_amd64_cc569e59ca39c5fe/libcuda.so.1.1 error="cuda driver library init failure: 500" 2024-09-27 13:31:20 time=2024-09-27T18:31:20.727Z level=DEBUG source=gpu.go:468 msg="Searching for GPU library" name=libcudart.so* 2024-09-27 13:31:20 time=2024-09-27T18:31:20.727Z level=DEBUG source=gpu.go:491 msg="gpu library search" globs="[/usr/lib/ollama/libcudart.so* /usr/local/nvidia/lib/libcudart.so* /usr/local/nvidia/lib64/libcudart.so* /usr/lib/ollama/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]" 2024-09-27 13:31:20 time=2024-09-27T18:31:20.729Z level=DEBUG source=gpu.go:525 msg="discovered GPU libraries" paths="[/usr/lib/ollama/libcudart.so.12.4.99 /usr/lib/ollama/libcudart.so.11.3.109]" 2024-09-27 13:31:20 cudaSetDevice err: 500 2024-09-27 13:31:20 time=2024-09-27T18:31:20.736Z level=DEBUG source=gpu.go:537 msg="Unable to load cudart" library=/usr/lib/ollama/libcudart.so.12.4.99 error="cudart init failure: 500" 2024-09-27 13:31:20 cudaSetDevice err: 500 2024-09-27 13:31:20 time=2024-09-27T18:31:20.743Z level=DEBUG 
source=gpu.go:537 msg="Unable to load cudart" library=/usr/lib/ollama/libcudart.so.11.3.109 error="cudart init failure: 500" 2024-09-27 13:31:20 time=2024-09-27T18:31:20.743Z level=DEBUG source=amd_linux.go:376 msg="amdgpu driver not detected /sys/module/amdgpu" 2024-09-27 13:31:20 time=2024-09-27T18:31:20.743Z level=INFO source=gpu.go:347 msg="no compatible GPUs were discovered" 2024-09-27 13:31:20 time=2024-09-27T18:31:20.743Z level=INFO source=types.go:107 msg="inference compute" id=0 library=cpu variant=avx compute="" driver=0.0 name="" total="15.6 GiB" available="12.5 GiB" 2024-09-27 13:50:04 2024/09/27 18:50:04 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]" 2024-09-27 13:50:04 time=2024-09-27T18:50:04.245Z level=INFO source=images.go:753 msg="total blobs: 59" 2024-09-27 13:50:04 time=2024-09-27T18:50:04.253Z level=INFO source=images.go:760 msg="total unused blobs removed: 0" 2024-09-27 13:50:04 time=2024-09-27T18:50:04.254Z level=INFO source=routes.go:1200 msg="Listening on [::]:11434 (version 0.3.12)" 2024-09-27 13:50:04 time=2024-09-27T18:50:04.257Z level=DEBUG source=common.go:294 msg="availableServers : found" 
file=/usr/lib/ollama/runners/cpu/ollama_llama_server 2024-09-27 13:50:04 time=2024-09-27T18:50:04.257Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx/ollama_llama_server 2024-09-27 13:50:04 time=2024-09-27T18:50:04.257Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server 2024-09-27 13:50:04 time=2024-09-27T18:50:04.257Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v11/ollama_llama_server 2024-09-27 13:50:04 time=2024-09-27T18:50:04.257Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v12/ollama_llama_server 2024-09-27 13:50:04 time=2024-09-27T18:50:04.257Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12]" 2024-09-27 13:50:04 time=2024-09-27T18:50:04.257Z level=DEBUG source=common.go:50 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY" 2024-09-27 13:50:04 time=2024-09-27T18:50:04.257Z level=DEBUG source=sched.go:105 msg="starting llm scheduler" 2024-09-27 13:50:04 time=2024-09-27T18:50:04.258Z level=INFO source=gpu.go:199 msg="looking for compatible GPUs" 2024-09-27 13:50:04 time=2024-09-27T18:50:04.258Z level=DEBUG source=gpu.go:86 msg="searching for GPU discovery libraries for NVIDIA" 2024-09-27 13:50:04 time=2024-09-27T18:50:04.259Z level=DEBUG source=gpu.go:468 msg="Searching for GPU library" name=libcuda.so* 2024-09-27 13:50:04 time=2024-09-27T18:50:04.259Z level=DEBUG source=gpu.go:491 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* 
/usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.259Z level=DEBUG source=gpu.go:525 msg="discovered GPU libraries" paths="[/usr/lib/x86_64-linux-gnu/libcuda.so.1 /usr/lib/wsl/drivers/nv_dispig.inf_amd64_cc569e59ca39c5fe/libcuda.so.1.1]"
2024-09-27 13:50:04 cuInit err: 500
2024-09-27 13:50:04 time=2024-09-27T18:50:04.491Z level=INFO source=gpu.go:568 msg="unable to load cuda driver library" library=/usr/lib/x86_64-linux-gnu/libcuda.so.1 error="cuda driver library init failure: 500"
2024-09-27 13:50:04 cuInit err: 500
2024-09-27 13:50:04 time=2024-09-27T18:50:04.491Z level=INFO source=gpu.go:568 msg="unable to load cuda driver library" library=/usr/lib/wsl/drivers/nv_dispig.inf_amd64_cc569e59ca39c5fe/libcuda.so.1.1 error="cuda driver library init failure: 500"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.491Z level=DEBUG source=gpu.go:468 msg="Searching for GPU library" name=libcudart.so*
2024-09-27 13:50:04 time=2024-09-27T18:50:04.491Z level=DEBUG source=gpu.go:491 msg="gpu library search" globs="[/usr/lib/ollama/libcudart.so* /usr/local/nvidia/lib/libcudart.so* /usr/local/nvidia/lib64/libcudart.so* /usr/lib/ollama/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.492Z level=DEBUG source=gpu.go:525 msg="discovered GPU libraries" paths="[/usr/lib/ollama/libcudart.so.12.4.99 /usr/lib/ollama/libcudart.so.11.3.109]"
2024-09-27 13:50:04 cudaSetDevice err: 500
2024-09-27 13:50:04 time=2024-09-27T18:50:04.517Z level=DEBUG source=gpu.go:537 msg="Unable to load cudart" library=/usr/lib/ollama/libcudart.so.12.4.99 error="cudart init failure: 500"
2024-09-27 13:50:04 cudaSetDevice err: 500
2024-09-27 13:50:04 time=2024-09-27T18:50:04.522Z level=DEBUG source=gpu.go:537 msg="Unable to load cudart" library=/usr/lib/ollama/libcudart.so.11.3.109 error="cudart init failure: 500"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.522Z level=DEBUG source=amd_linux.go:376 msg="amdgpu driver not detected /sys/module/amdgpu"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.522Z level=INFO source=gpu.go:347 msg="no compatible GPUs were discovered"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.522Z level=INFO source=types.go:107 msg="inference compute" id=0 library=cpu variant=avx compute="" driver=0.0 name="" total="15.6 GiB" available="13.9 GiB"
```

## Update

I did some digging on the error:

> CUDA_ERROR_NOT_FOUND = 500
> This indicates that a named symbol was not found. Examples of symbols are global/constant variable names, driver function names, texture names, and surface names.

Some kind of driver mismatch maybe? But hope that helps.

### OS

Docker

### GPU

Nvidia

### CPU

Intel

### Ollama version

0.3.12
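For quick triage, the numeric codes in logs like `cuInit err: 500` can be decoded against the CUDA driver API's `CUresult` enum. A minimal lookup sketch — the dictionary below is only a small illustrative subset of the values defined in `cuda.h`:

```python
# Decode the numeric CUresult codes that appear in the Ollama logs
# (e.g. "cuInit err: 500"). Values are a small subset copied from the
# CUDA driver API's CUresult enum in cuda.h.
CU_RESULT = {
    0: "CUDA_SUCCESS",
    1: "CUDA_ERROR_INVALID_VALUE",
    2: "CUDA_ERROR_OUT_OF_MEMORY",
    3: "CUDA_ERROR_NOT_INITIALIZED",
    100: "CUDA_ERROR_NO_DEVICE",
    500: "CUDA_ERROR_NOT_FOUND",
    803: "CUDA_ERROR_SYSTEM_DRIVER_MISMATCH",
    999: "CUDA_ERROR_UNKNOWN",
}

def decode(code: int) -> str:
    """Map a CUresult code to its symbolic name, if known."""
    return CU_RESULT.get(code, f"unrecognized CUresult {code}")

print(decode(500))  # CUDA_ERROR_NOT_FOUND
```

Note that 500 means a named symbol could not be resolved when loading the driver library, which is consistent with a driver/library version mismatch rather than a missing GPU (that would be 100, `CUDA_ERROR_NO_DEVICE`).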
GiteaMirror added the bug, nvidia, docker, needs more info labels 2026-05-04 06:48:47 -05:00
Author
Owner

@Knotty1985 commented on GitHub (Sep 27, 2024):

By any chance is it when trying to use the llama3.2 model? I don't use the docker build, but I found that the GPU wasn't being used until I switched to llama3.1, after hours of trying different things.

<!-- gh-comment-id:2380260490 -->

@pdavis68 commented on GitHub (Sep 28, 2024):

@Knotty1985 No. I haven't gotten it yet. It's not even using a model. It's failing to init the CUDA drivers at startup.

<!-- gh-comment-id:2380357315 -->

@Knotty1985 commented on GitHub (Sep 28, 2024):

> @Knotty1985 No. I haven't gotten it yet. It's not even using a model. It's failing to init the CUDA drivers at startup.

Worth a try but I had issues until I followed this guide
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html

<!-- gh-comment-id:2380922033 -->
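For apt-based hosts, the guide linked above boils down to roughly the following steps. This is a hedged sketch — it assumes the NVIDIA Container Toolkit package repository is already configured (the guide covers that setup, plus other package managers):

```shell
# Install/refresh the NVIDIA Container Toolkit (assumes the
# nvidia-container-toolkit apt repository is already configured).
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit

# Re-register the nvidia runtime with Docker and restart the daemon.
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Sanity check: a container launched with --gpus all should see the GPU.
docker run --rm --gpus all ubuntu nvidia-smi
```

If the final `nvidia-smi` check fails inside a plain Ubuntu container, the problem is in the host's container runtime setup rather than in Ollama itself.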

@dhiltgen commented on GitHub (Sep 28, 2024):

> It used to work. I periodically upgrade to the latest. I don't recall when it last worked with the GPU, I hadn't used it recently, but I noticed this morning it wasn't using the GPU, so I upgraded it with my normal set of upgrade commands:

As suggested above, I believe this is likely a mismatch of nvidia components between the host and container runtime. Let us know if you still have problems after refreshing the nvidia container runtime.

<!-- gh-comment-id:2381016688 -->
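One quick way to test the mismatch hypothesis is to compare the driver version the host reports with what the running container sees. A sketch, assuming the container is named `ollama` as in the `docker run` command at the top of this issue (note the `nvidia-smi` header in the original report already shows two different version numbers, 555.52.01 vs. 555.99, which is common on WSL2 setups):

```shell
# Compare the NVIDIA driver version on the host with the one visible
# inside the Ollama container; a disagreement points at a stale
# libcuda/runtime inside the container.
host_ver=$(nvidia-smi --query-gpu=driver_version --format=csv,noheader)
ctr_ver=$(docker exec ollama nvidia-smi --query-gpu=driver_version --format=csv,noheader)
echo "host: $host_ver  container: $ctr_ver"
[ "$host_ver" = "$ctr_ver" ] || echo "driver version mismatch"
```

This requires a host with `nvidia-smi` on the PATH and the container running, so treat it as a diagnostic sketch rather than a guaranteed reproduction step.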

@SleepingShell commented on GitHub (Sep 29, 2024):

Possibly not related to your issue, but I am on arch with the `nvidia` package and ollama stopped working. I had to install the `cuda` package (thought I already had it) and things are working again. Not sure why ollama was able to run on my GPU before without the `cuda` package.

<!-- gh-comment-id:2381132973 -->

@dhiltgen commented on GitHub (Oct 15, 2024):

If updating the container runtime doesn't resolve the problem, please let us know and share updated logs, and I'll reopen the issue.

<!-- gh-comment-id:2415341736 -->

@pdavis68 commented on GitHub (Oct 16, 2024):

It's all good. I ended up just installing it directly. When I first started using Ollama it didn't run in Windows, so I had it running in docker. But it's fine the way it is now. Thanks.

<!-- gh-comment-id:2417033833 -->