[GH-ISSUE #5567] Nvidia A100 - Ollama Not Using GPU #3481

Closed
opened 2026-04-12 14:10:12 -05:00 by GiteaMirror · 6 comments
Owner

Originally created by @koayst-rplesson on GitHub (Jul 9, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5567

Hi,

I have 2 Nvidia A100 machines and both have the same config and setup sitting on the same network. Both machines have the same Ubuntu OS setup

Distributor ID: Ubuntu
Description: Ubuntu 20.04.6 LTS
Release: 20.04
Codename: focal

Docker version 24.0.7, build afdd53b

NVIDIA Container Toolkit CLI version 1.15.0
commit: ddeeca392c7bd8b33d0a66400b77af7a97e16cef

When I run the Ollama Docker container, machine A has no issue running with the GPU. But machine B always uses the CPU, as the response from the LLM is slow (word by word). When I look at the output log, it says:

msg="unable to load cuda driver library" library=/usr/lib/x86_64-linux-gnu/libcuda.so.535.183.01 error="cuda driver library init failure: 802"
[Screenshot 2024-07-09 165815]

I tried logging into the Docker container and had no issue running "nvidia-smi". I have also rebooted the machine.

What else can I do to try to find out the problem and maybe fix the issue?
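For readers hitting the same error, a minimal diagnostic sketch. This assumes the container is named `ollama` (adjust to your setup); the hardware-dependent commands are guarded so the script runs on any host:

```shell
# Extract the numeric CUDA error code from the Ollama log line for reference:
echo 'error="cuda driver library init failure: 802"' | grep -oE '[0-9]+'   # -> 802

# Confirm the driver is visible inside the container (assumed name: ollama):
docker exec ollama nvidia-smi || true

# On A100 SXM/NVSwitch systems, error 802 ("system not yet initialized")
# usually means the fabric manager service is missing or not running on the host:
systemctl status nvidia-fabricmanager || true
```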

GiteaMirror added the gpu, nvidia, bug labels 2026-04-12 14:10:13 -05:00

@mbbyn commented on GitHub (Jul 9, 2024):

A usual culprit in such cases is `NVIDIA_VISIBLE_DEVICES` and `CUDA_VISIBLE_DEVICES`; try checking their values and setting them accordingly.

We run the `ollama/ollama` image, and these are the relevant env variables set. (You might want to test ollama's official image to reduce the scope of the problem.)
[image: environment variables screenshot]
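A quick way to compare those variables between the two machines is to dump them from the running container. The container name `ollama` is an assumption; the `printf` line just demonstrates the filter on sample output:

```shell
# List GPU-related environment variables inside the container (assumed name: ollama):
docker exec ollama env | grep -E '(NVIDIA|CUDA)_VISIBLE_DEVICES' || true

# The filter, demonstrated on sample output:
printf 'NVIDIA_VISIBLE_DEVICES=all\nPATH=/usr/bin\n' | grep -E '(NVIDIA|CUDA)_VISIBLE_DEVICES'
```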


@koayst-rplesson commented on GitHub (Jul 9, 2024):

> A usual culprit in such cases is `NVIDIA_VISIBLE_DEVICES` and `CUDA_VISIBLE_DEVICES`, try checking their values and setting them accordingly.
>
> We run the `ollama/ollama` image, and these are the relevant env variables set. (You might want to test ollama's official image to reduce the scope of the problem)

I checked and the environment variables are same as yours.
[Screenshot 2024-07-09 202602]


@jmorganca commented on GitHub (Jul 9, 2024):

Hi @koayst-rplesson, do you know if you have the NVLink/NVSwitch fabric manager installed? I saw a similar error 802 reported here:

https://forums.developer.nvidia.com/t/error-802-system-not-yet-initialized-cuda-11-3/234955
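A hedged sketch for checking this on the host. The package name pattern and the error-code meaning (802 = `CUDA_ERROR_SYSTEM_NOT_READY`, "system not yet initialized") follow the CUDA driver API; package listing assumes a Debian/Ubuntu host:

```shell
# Is the fabric manager package installed, and is the service running?
dpkg -l | grep -i fabricmanager || true
systemctl is-active nvidia-fabricmanager || true

# Meaning of the error code seen in the Ollama log, per the CUDA driver API:
code=802
case "$code" in
  802) echo "CUDA_ERROR_SYSTEM_NOT_READY (system not yet initialized)" ;;
  *)   echo "unknown code: $code" ;;
esac
```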


@koayst-rplesson commented on GitHub (Jul 10, 2024):

@jmorganca Thanks a million. Your advice is so valuable!!!

It works after I re-installed NVLink/NVSwitch fabric manager.


@koayst-rplesson commented on GitHub (Jul 10, 2024):

Installed NVLink/NVSwitch fabric manager and it works. Ollama can detect the GPU.


@maurerle commented on GitHub (Jun 12, 2025):

I also had the same problem.
I had to run `sudo apt install nvidia-fabricmanager-570=570.133.20-1` and `sudo systemctl restart nvidia-fabricmanager`.
Now everything works fine!

Otherwise the service terminated with `fabric manager NVIDIA GPU driver interface version 570.148.08 don't match with driver version 570.133.20. Please update with matching NVIDIA driver package.`

In ollama I also had:
`Unable to load cudart library /usr/lib/x86_64-linux-gnu/libcuda.so.570.133.20: cuda driver library init failure: 802`
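The mismatch above suggests pinning the fabric manager package to the exact installed driver version. A sketch of deriving the package name from the driver version string; here the version is hard-coded from this comment, but in practice you would query it with `nvidia-smi --query-gpu=driver_version --format=csv,noheader`:

```shell
# Driver version as reported in this comment; in practice, query it from nvidia-smi.
driver_version="570.133.20"

# The package is named after the driver branch (first version component):
branch=$(echo "$driver_version" | cut -d. -f1)
echo "nvidia-fabricmanager-${branch}=${driver_version}-1"   # -> nvidia-fabricmanager-570=570.133.20-1

# Then (host side, requires root):
# sudo apt install "nvidia-fabricmanager-${branch}=${driver_version}-1"
# sudo systemctl restart nvidia-fabricmanager
```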


Reference: github-starred/ollama#3481