[GH-ISSUE #4279] Ollama reports an error when running the AI model using GPU #49183

Closed
opened 2026-04-28 10:54:27 -05:00 by GiteaMirror · 1 comment

Originally created by @xiaomo0925 on GitHub (May 9, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4279

What is the issue?

When I use the command `docker run --gpus all -d -v f:/ai/ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama`, the following error occurs:

"docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy' nvidia-container-cli: initialization error: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory: unknown."

How should I handle this issue?
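The library named in the error (`libnvidia-ml.so.1`) ships with the NVIDIA driver, so the hook is failing before the container ever starts. A quick way to narrow this down, as a sketch assuming a Linux host with the NVIDIA driver installed:

```
# Confirm the NVIDIA driver itself works on the host
nvidia-smi

# Check whether the dynamic linker can locate the library the hook failed to load
ldconfig -p | grep libnvidia-ml
```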

OS

No response

GPU

Nvidia

CPU

Intel

Ollama version

No response

GiteaMirror added the bug label 2026-04-28 10:54:27 -05:00

@dhiltgen commented on GitHub (May 21, 2024):

This error appears to be coming from Docker or the Nvidia container runtime. It looks like it happens before ollama starts running.

Please make sure you have GPU support configured and working with Docker: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
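For reference, on Ubuntu/Debian the setup from that guide boils down to roughly the following (a sketch assuming the NVIDIA apt repository is already configured; the guide covers other distributions):

```
# Install the NVIDIA Container Toolkit
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit

# Register the NVIDIA runtime with Docker and restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```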

A simple test to verify things are working properly at the Docker + Nvidia level without Ollama involved is:

```
% docker run --gpus all ubuntu nvidia-smi
Tue May 21 23:54:36 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.105.01   Driver Version: 515.105.01   CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:01:00.0 Off |                  N/A |
| 35%   31C    P8    N/A /  19W |      1MiB /  4096MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
```
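Once that test prints a GPU table like the one above, the original `docker run` command from the report should be able to attach the GPU. A follow-up sanity check might look like this (the model name is only an example):

```
% docker run --gpus all -d -v f:/ai/ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
% docker exec -it ollama ollama run llama3
```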