[GH-ISSUE #12173] Unable to load cudart library /usr/lib/x86_64-linux-gnu/libcuda.so.570.169: cuda driver library init failure: 802 #54609

Closed
opened 2026-04-29 06:33:19 -05:00 by GiteaMirror · 2 comments

Originally created by @connorgoodhue on GitHub (Sep 3, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12173

What is the issue?

I am new to Ollama; in fact, I am new to all of this. I've kind of been thrown blindly into setting up an "on-prem AI server" at my work. It is a Lambda Hyperplane server with 8 NVIDIA H100s. Anyway, I keep running into this error and I can't get it to go away.

`Unable to load cudart library /usr/lib/x86_64-linux-gnu/libcuda.so.570.169: cuda driver library init failure: 802`

The "AI server" is running Ubuntu Pro, and Ollama is running inside a Docker CE container (not Snap Docker). I have all the NVIDIA drivers installed at version 570.169 on the host, NVIDIA Container Toolkit is 1.17.8-1. Running CUDA 12.8. The CUDA drivers in the Ollama container are also 12.8. When I run nvidia-smi in the container it matches the versions shown on my host.

I stopped trying to use the official Ollama Docker image, because I thought maybe I could just build off of an NVIDIA CUDA container image for 12.8 and then install Ollama on top of it (a sketch of that attempt follows the log below). But that didn't work either; I get the same 802 error. Here is a more verbose version of the error.

```shell
time=2025-09-03T15:13:30.404Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-09-03T15:13:30.458Z level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-09-03T15:13:30.458Z level=DEBUG source=gpu.go:503 msg="Searching for GPU library" name=libcuda.so*
time=2025-09-03T15:13:30.458Z level=DEBUG source=gpu.go:527 msg="gpu library search" globs="[/usr/local/lib/ollama/libcuda.so* /usr/lib/x86_64-linux-gnu/libcuda.so* /usr/local/cuda/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2025-09-03T15:13:30.465Z level=DEBUG source=gpu.go:560 msg="discovered GPU libraries" paths=[/usr/lib/x86_64-linux-gnu/libcuda.so.570.169]
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.169
dlsym: cuInit - 0x7fe0dfe48a60
dlsym: cuDriverGetVersion - 0x7fe0dfe48a80
dlsym: cuDeviceGetCount - 0x7fe0dfe48ac0
dlsym: cuDeviceGet - 0x7fe0dfe48aa0
dlsym: cuDeviceGetAttribute - 0x7fe0dfe48ba0
dlsym: cuDeviceGetUuid - 0x7fe0dfe48b00
dlsym: cuDeviceGetName - 0x7fe0dfe48ae0
dlsym: cuCtxCreate_v3 - 0x7fe0dfe48d80
dlsym: cuMemGetInfo_v2 - 0x7fe0dfe69140
dlsym: cuCtxDestroy - 0x7fe0dfea7a60
calling cuInit
cuInit err: 802
time=2025-09-03T15:14:00.498Z level=INFO source=gpu.go:614 msg="Unable to load cudart library /usr/lib/x86_64-linux-gnu/libcuda.so.570.169: cuda driver library init failure: 802"
```
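
The custom-image attempt mentioned above looked roughly like this (a sketch, not my exact Dockerfile; the CUDA base tag is an assumption):

```shell
# rough sketch of the "build Ollama on top of a CUDA 12.8 image" attempt
cat > Dockerfile <<'EOF'
FROM nvidia/cuda:12.8.0-runtime-ubuntu22.04
RUN apt-get update && apt-get install -y curl \
 && curl -fsSL https://ollama.com/install.sh | sh
EOF
docker build -t ollama-cuda128 .
docker run -d --gpus=all -p 11434:11434 ollama-cuda128 ollama serve
```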

I just find it strange, because inside the container all my GPUs are visible (I can see them all with ls -l /dev/nvidia*) and nvidia-smi works. It's just that whenever Ollama calls cuInit, I get that nasty 802 error.
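
If it helps to rule Ollama out, the same call can be made directly against the driver library (a hypothetical quick check using Python's ctypes; any CUDA sample that calls cuInit would show the same thing):

```shell
# call cuInit directly against the driver library, bypassing Ollama
python3 - <<'EOF'
import ctypes
cuda = ctypes.CDLL("libcuda.so.1")
print("cuInit ->", cuda.cuInit(0))  # 0 = CUDA_SUCCESS, 802 = system not ready
EOF
```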

Any thoughts? Like I said, I am new to all this and was kind of thrown to the wolves.

Relevant log output


OS

Docker

GPU

Nvidia

CPU

Other

Ollama version

0.11.8

GiteaMirror added the bug label 2026-04-29 06:33:19 -05:00

@rick-github commented on GitHub (Sep 3, 2025):

cudaErrorSystemNotReady = 802 (see the error codes in the CUDA Runtime API reference: https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html)

  • This error indicates that the system is not yet ready to start any CUDA work. To continue using CUDA, verify the system configuration is in a valid state and all required driver daemons are actively running. More information about this error can be found in the system specific user guide.

See https://github.com/ollama/ollama/issues/9031, where a common fix is to install nvidia-fabricmanager-570.
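
On Ubuntu that is typically something like (a sketch; the package suffix must match the installed driver branch, and exact repo setup may vary):

```shell
# fabric manager must match the installed driver branch (570.x here)
sudo apt-get install -y nvidia-fabricmanager-570
sudo systemctl enable --now nvidia-fabricmanager
systemctl status nvidia-fabricmanager   # should be active before starting Ollama
```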


@rick-github commented on GitHub (Oct 6, 2025):

Any luck with fixing as per #9031?
