[GH-ISSUE #3417] Docker with NVIDIA GPU: "Unable to load cudart CUDA management library" #2107

Closed
opened 2026-04-12 12:20:46 -05:00 by GiteaMirror · 6 comments

Originally created by @Kryszn0 on GitHub (Mar 30, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3417

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

I am trying to run Ollama in Docker and have it use my NVIDIA GPU, but I keep getting an error message saying it cannot load the cudart library. This is a fresh installation on Debian. When I run the NVIDIA sample workload with "sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi", I get correct output showing my GPU info as well as the CUDA version.

Here are my logs:

with OLLAMA_DEBUG not set


time=2024-03-30T15:05:39.537Z level=INFO source=images.go:804 msg="total blobs: 0"
time=2024-03-30T15:05:39.540Z level=INFO source=images.go:811 msg="total unused blobs removed: 0"
time=2024-03-30T15:05:39.543Z level=INFO source=routes.go:1118 msg="Listening on [::]:11434 (version 0.1.30)"
time=2024-03-30T15:05:39.543Z level=INFO source=payload_common.go:113 msg="Extracting dynamic libraries to /tmp/ollama4102197452/runners ..."
time=2024-03-30T15:05:41.303Z level=INFO source=payload_common.go:140 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11 cpu rocm_v60000]"
time=2024-03-30T15:05:41.303Z level=INFO source=gpu.go:115 msg="Detecting GPU type"
time=2024-03-30T15:05:41.303Z level=INFO source=gpu.go:265 msg="Searching for GPU management library libcudart.so*"
time=2024-03-30T15:05:41.304Z level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [/tmp/ollama4102197452/runners/cuda_v11/libcudart.so.11.0]"
time=2024-03-30T15:05:41.304Z level=INFO source=gpu.go:340 msg="Unable to load cudart CUDA management library /tmp/ollama4102197452/runners/cuda_v11/libcudart.so.11.0: cudart init failure: 35"
time=2024-03-30T15:05:41.304Z level=INFO source=gpu.go:265 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-03-30T15:05:41.305Z level=INFO source=gpu.go:311 msg="Discovered GPU libraries: []"
time=2024-03-30T15:05:41.305Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-30T15:05:41.305Z level=WARN source=amd_linux.go:53 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-03-30T15:05:41.305Z level=INFO source=amd_linux.go:88 msg="detected amdgpu versions [gfx1036]"
time=2024-03-30T15:05:41.305Z level=WARN source=amd_linux.go:350 msg="amdgpu detected, but no compatible rocm library found.  Either install rocm v6, or follow manual install instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install"
time=2024-03-30T15:05:41.305Z level=WARN source=amd_linux.go:99 msg="unable to verify rocm library, will use cpu: no suitable rocm found, falling back to CPU"
time=2024-03-30T15:05:41.305Z level=INFO source=routes.go:1141 msg="no GPU detected"


with OLLAMA_DEBUG=1

time=2024-03-30T15:10:04.139Z level=INFO source=images.go:804 msg="total blobs: 0"
time=2024-03-30T15:10:04.142Z level=INFO source=images.go:811 msg="total unused blobs removed: 0"
time=2024-03-30T15:10:04.145Z level=INFO source=routes.go:1118 msg="Listening on [::]:11434 (version 0.1.30)"
time=2024-03-30T15:10:04.145Z level=INFO source=payload_common.go:113 msg="Extracting dynamic libraries to /tmp/ollama1959530181/runners ..."
time=2024-03-30T15:10:05.968Z level=INFO source=payload_common.go:140 msg="Dynamic LLM libraries [rocm_v60000 cpu_avx cpu cpu_avx2 cuda_v11]"
time=2024-03-30T15:10:05.968Z level=DEBUG source=payload_common.go:141 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-03-30T15:10:05.968Z level=INFO source=gpu.go:115 msg="Detecting GPU type"
time=2024-03-30T15:10:05.968Z level=INFO source=gpu.go:265 msg="Searching for GPU management library libcudart.so*"
time=2024-03-30T15:10:05.968Z level=DEBUG source=gpu.go:283 msg="gpu management search paths: [/tmp/ollama1959530181/runners/cuda*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so* /usr/local/nvidia/lib/libcudart.so** /usr/local/nvidia/lib64/libcudart.so**]"
time=2024-03-30T15:10:05.968Z level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [/tmp/ollama1959530181/runners/cuda_v11/libcudart.so.11.0]"
wiring cudart library functions in /tmp/ollama1959530181/runners/cuda_v11/libcudart.so.11.0
dlsym: cudaSetDevice
dlsym: cudaDeviceSynchronize
dlsym: cudaDeviceReset
dlsym: cudaMemGetInfo
dlsym: cudaGetDeviceCount
dlsym: cudaDeviceGetAttribute
dlsym: cudaDriverGetVersion
cudaSetDevice err: 35
time=2024-03-30T15:10:05.968Z level=INFO source=gpu.go:340 msg="Unable to load cudart CUDA management library /tmp/ollama1959530181/runners/cuda_v11/libcudart.so.11.0: cudart init failure: 35"
time=2024-03-30T15:10:05.968Z level=INFO source=gpu.go:265 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-03-30T15:10:05.968Z level=DEBUG source=gpu.go:283 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /usr/lib/wsl/drivers/*/libnvidia-ml.so* /opt/cuda/lib64/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so* /usr/local/nvidia/lib/libnvidia-ml.so* /usr/local/nvidia/lib64/libnvidia-ml.so*]"
time=2024-03-30T15:10:05.968Z level=INFO source=gpu.go:311 msg="Discovered GPU libraries: []"
time=2024-03-30T15:10:05.968Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-30T15:10:05.968Z level=WARN source=amd_linux.go:53 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-03-30T15:10:05.968Z level=INFO source=amd_linux.go:88 msg="detected amdgpu versions [gfx1036]"
time=2024-03-30T15:10:05.968Z level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /tmp/ollama1959530181/rocm"
time=2024-03-30T15:10:05.968Z level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /usr/bin"
time=2024-03-30T15:10:05.968Z level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /usr/bin/rocm"
time=2024-03-30T15:10:05.968Z level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /usr/share/ollama/lib/rocm"
time=2024-03-30T15:10:05.968Z level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /usr/local/nvidia/lib"
time=2024-03-30T15:10:05.968Z level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /usr/local/nvidia/lib64"
time=2024-03-30T15:10:05.968Z level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /opt/rocm/lib"
time=2024-03-30T15:10:05.968Z level=WARN source=amd_linux.go:350 msg="amdgpu detected, but no compatible rocm library found.  Either install rocm v6, or follow manual install instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install"
time=2024-03-30T15:10:05.968Z level=WARN source=amd_linux.go:99 msg="unable to verify rocm library, will use cpu: no suitable rocm found, falling back to CPU"
time=2024-03-30T15:10:05.968Z level=INFO source=routes.go:1141 msg="no GPU detected"

What did you expect to see?

GPU detected & in use

Steps to reproduce

Running docker-compose up -d with the following files:

docker-compose.yml

version: "3.8"
services:
## Ollama
  llm:
    image: 'ollama/ollama:${LLM_TAG}'
    container_name: '${LLM_NAME}'
    hostname: '${LLM_NAME}'
    ports:
      - 11434:11434
    env_file:
      - .env
    volumes:
      - ./llm:/root/.ollama
    restart: 'always'

.env

COMPOSE_PROJECT_NAME=ollama
LLM_NAME=${COMPOSE_PROJECT_NAME}
LLM_TAG=0.1.30

#OLLAMA_DEBUG=1
gpus=all

Are there any recent changes that introduced the issue?

No response

OS

Linux

Architecture

amd64

Platform

Docker

Ollama version

0.1.30

GPU

Nvidia

GPU info

sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi

Sat Mar 30 15:03:10 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.14              Driver Version: 550.54.14      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4060 Ti     Off |   00000000:02:00.0 Off |                  N/A |
|  0%   46C    P0             37W /  165W |       0MiB /  16380MiB |      3%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+

nvcc --version

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Tue_Feb_27_16:19:38_PST_2024
Cuda compilation tools, release 12.4, V12.4.99
Build cuda_12.4.r12.4/compiler.33961263_0


CPU

AMD

Other software

No response

GiteaMirror added the bug and nvidia labels 2026-04-12 12:20:46 -05:00

@dhiltgen commented on GitHub (Apr 1, 2024):

The behavior you're describing sounds like the behavior when the nvidia container runtime isn't exposing the GPU to the container. cudart init failure: 35 is a "driver too old" error code, but I believe this also shows up when the driver isn't visible/loaded. The failure to find the management library (nvidia-ml) also implies the runtime isn't wiring things up for GPU access. Normally this would be mounted from the host as the management library is bundled with the driver.

Can you try to run the container image directly without compose and specify the environment variables on the command line so we can see if that makes a difference?
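
A minimal direct run along those lines, following the ollama/ollama Docker Hub instructions (a sketch; the volume and container names here are illustrative):

docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama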


@qingfengfenga commented on GitHub (Apr 2, 2024):

I have the same problem: docker run can use the GPU correctly, and nvidia-smi under docker-compose displays normally, but the GPU cannot be used.

Neither of the following two device-reservation methods works for me:

    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['0']
              capabilities: [gpu]

    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: "1"
              capabilities: [gpu]

@aagarwal937 commented on GitHub (Apr 4, 2024):

Below is the method that worked for me when deploying with a docker-compose file on an EC2 instance with a Tesla T4 GPU.

version: '3.8'
services:
  ollama:
    image: local_ollama
    restart: always
    build:
      context: ./ollama
      dockerfile: Dockerfile
    ports:
      - 11434:11434
    volumes:
      - ./data/ollama:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]


@hapasa commented on GitHub (Apr 8, 2024):

I have the same error, also with an NVIDIA 4060 Ti 16GB and Docker.
Running Ollama outside Docker with systemd works fine with the NVIDIA card.

Regarding the suggestion: "Can you try to run the container image directly without compose and specify the environment variables on the command line so we can see if that makes a difference?"

It is unclear to me which environment variables should be set to test. The sanity check for the NVIDIA Docker toolkit with docker run ... nvidia-smi does work fine.


@dhiltgen commented on GitHub (Apr 12, 2024):

Usage instructions are at https://hub.docker.com/r/ollama/ollama

Also adding -e OLLAMA_DEBUG=1 may help expose some more details if you are still having problems accessing the GPU.
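
For instance, a sketch of such a run (based on the Docker Hub instructions; everything besides the OLLAMA_DEBUG flag is illustrative):

docker run --rm --gpus=all -e OLLAMA_DEBUG=1 -v ollama:/root/.ollama -p 11434:11434 ollama/ollama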


@Kryszn0 commented on GitHub (Apr 13, 2024):

In my case, @qingfengfenga's example fixed the issue. I just needed to select the correct method of passing the GPU through Compose.

Thanks!
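
For reference, the reporter's original compose file with that device reservation merged in would look roughly like this (an untested sketch; count: 1 can be swapped for device_ids: ['0'] as in the first variant above):

version: "3.8"
services:
  llm:
    image: 'ollama/ollama:${LLM_TAG}'
    container_name: '${LLM_NAME}'
    hostname: '${LLM_NAME}'
    ports:
      - 11434:11434
    env_file:
      - .env
    volumes:
      - ./llm:/root/.ollama
    restart: 'always'
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

Note that the gpus=all entry in .env has no effect here; Compose grants GPU access through the deploy.resources.reservations block, not through an environment variable.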
