issue: ERROR:open_webui.env:Error when testing CUDA but USE_CUDA_DOCKER is true. #4349

Closed
opened 2025-11-11 15:52:03 -06:00 by GiteaMirror · 0 comments
Owner

Originally created by @SchwarzerA on GitHub (Mar 9, 2025).

Check Existing Issues

  • I have searched the existing issues and discussions.
  • I am using the latest version of Open WebUI.

Installation Method

Git Clone

Open WebUI Version

v0.5.20

Ollama Version (if applicable)

0.5.13

Operating System

openSUSE Tumbleweed | 20250307-0 | x86_64

Browser (if applicable)

any (Edge, Firefox, Chromium)

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have listed steps to reproduce the bug in detail.

Expected Behavior

I expected that open-webui:cuda would use the CUDA functionality to improve performance with the help of this docker-compose.yml:

services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:cuda
    container_name: open-webui
    hostname: open-webui
    privileged: true
    restart: unless-stopped
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities:
                - gpu
    environment:
      - CUDA_VISIBLE_DEVICES=all
      - GLOBAL_LOG_LEVEL=INFO
      - OLLAMA_BASE_URL=http://<ip>:11434
      - OLLAMA_MODELS=/root/.ollama
      - TZ=Europe/Berlin
      - USE_CUDA=true
      - WEBUI_SECRET_KEY=<secret>
    ports:
      - 127.0.0.1:51081:8080
    volumes:
      - ${PWD}/persistent/data:/app/backend/data:rw
      - /usr/share/ollama/.ollama:/root/.ollama:rw
      - /etc/localtime:/etc/localtime:ro

Similar settings worked for an Ollama container, and this open-webui container can also run nvidia-smi successfully. Nevertheless, it always throws an error during initialization (please see Logs & Screenshots).

Actual Behavior

Whatever I do in Open WebUI, the tool nvtop never shows any GPU activity, so I'm afraid CUDA is really not being used.

Steps to Reproduce

Run open-webui with that image and docker-compose.yml, then check the container log.

Logs & Screenshots

REPOSITORY                          TAG        IMAGE ID       CREATED         SIZE
ghcr.io/open-webui/open-webui       cuda       dbc45818aeb9   19 hours ago    8.57GB

open-webui  | CUDA is enabled, appending LD_LIBRARY_PATH to include torch/cudnn & cublas libraries.
open-webui  | /app/backend/open_webui
open-webui  | /app/backend
open-webui  | /app
open-webui  | INFO:open_webui.env:GLOBAL_LOG_LEVEL: INFO
open-webui  | ERROR:open_webui.env:Error when testing CUDA but USE_CUDA_DOCKER is true. Resetting USE_CUDA_DOCKER to false: CUDA not available
open-webui  | NoneType: None
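For context on why the log shows a generic "CUDA not available": startup checks of this kind are typically a guarded call to PyTorch's CUDA probe, so any failure inside the container (missing torch, an unloadable libcuda.so.1, a driver/runtime mismatch) collapses into the same message. The sketch below is a simplified illustration of that pattern, not Open WebUI's actual source:

```python
# Simplified sketch of a CUDA availability probe (illustrative only, not
# Open WebUI's real startup code).
def cuda_available() -> bool:
    try:
        import torch  # present in the :cuda image's Python environment
        # Returns False if the driver library (libcuda.so.1) cannot be
        # loaded or no usable device is visible to the process.
        return torch.cuda.is_available()
    except Exception as exc:
        # Any exception here would surface as "CUDA not available".
        print(f"CUDA probe failed: {exc}")
        return False

print("CUDA available:", cuda_available())
```

Running an equivalent one-liner inside the container would distinguish "torch cannot load the driver" from "torch sees no device".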

root@open-webui:/app/backend# nvidia-smi
Sun Mar  9 16:20:04 2025       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.124.04             Driver Version: 570.124.04     CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA T400 4GB                On  |   00000000:01:00.0 Off |                  N/A |
| 38%   33C    P8            N/A  /   31W |       4MiB /   4096MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+

Additional Information

Do I have false expectations, or is there still something to configure? What does "CUDA not available" mean in this context?

Installed software on the running OS:

S  | Name                          | Type    | Version                   | Arch   | Repository
---+-------------------------------+---------+---------------------------+--------+--------------
i  | kernel-firmware-nvidia        | package | 20250206-1.1              | noarch | tw.Ioss
i  | libnvidia-container-devel     | package | 1.17.4-1                  | x86_64 | tw.NVIDIA.ctk
i+ | libnvidia-container-static    | package | 1.17.4-1                  | x86_64 | tw.NVIDIA.ctk
i  | libnvidia-container-tools     | package | 1.17.4-1                  | x86_64 | tw.NVIDIA.ctk
i  | libnvidia-container1          | package | 1.17.4-1                  | x86_64 | tw.NVIDIA.ctk
i  | libnvidia-egl-gbm1            | package | 1.1.2-7.2                 | x86_64 | tw.NVIDIA
i  | libnvidia-egl-gbm1-32bit      | package | 1.1.2-7.3                 | x86_64 | tw.NVIDIA
i  | libnvidia-egl-wayland1        | package | 1.1.18-1.1                | x86_64 | tw.Ioss
i  | libnvidia-egl-wayland1-32bit  | package | 1.1.17-43.3               | x86_64 | tw.NVIDIA
i  | libnvidia-egl-x111            | package | 1.0.1-9.4                 | x86_64 | tw.NVIDIA
i  | libnvidia-egl-x111-32bit      | package | 1.0.1-9.4                 | x86_64 | tw.NVIDIA
i  | nvidia-common-G06             | package | 570.124.04-32.1           | x86_64 | tw.NVIDIA
i  | nvidia-compute-G06            | package | 570.124.04-32.1           | x86_64 | tw.NVIDIA
i  | nvidia-compute-G06-32bit      | package | 570.124.04-32.1           | x86_64 | tw.NVIDIA
i+ | nvidia-compute-utils-G06      | package | 570.124.04-32.1           | x86_64 | tw.NVIDIA
i+ | nvidia-container-toolkit      | package | 1.17.4-1                  | x86_64 | tw.NVIDIA.ctk
i  | nvidia-container-toolkit-base | package | 1.17.4-1                  | x86_64 | tw.NVIDIA.ctk
i  | nvidia-driver-G06-kmp-default | package | 570.124.04_k6.13.4_1-32.1 | x86_64 | tw.NVIDIA
i  | nvidia-gl-G06                 | package | 570.124.04-32.1           | x86_64 | tw.NVIDIA
i  | nvidia-gl-G06-32bit           | package | 570.124.04-32.1           | x86_64 | tw.NVIDIA
i  | nvidia-modprobe               | package | 570.124.04-11.1           | x86_64 | tw.NVIDIA
i  | nvidia-persistenced           | package | 570.124.04-2.1            | x86_64 | tw.NVIDIA
i+ | nvidia-video-G06              | package | 570.124.04-32.1           | x86_64 | tw.NVIDIA
i  | nvidia-video-G06-32bit        | package | 570.124.04-32.1           | x86_64 | tw.NVIDIA
i+ | openSUSE-repos-MicroOS-NVIDIA | package | 20250303.f74564e-1.1      | x86_64 | tw.Ioss

Does the output from `nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml` help?

cdiVersion: 0.5.0
containerEdits:
  deviceNodes:
  - path: /dev/nvidia-modeset
  - path: /dev/nvidia-uvm
  - path: /dev/nvidia-uvm-tools
  - path: /dev/nvidiactl
  env:
  - NVIDIA_VISIBLE_DEVICES=void
  hooks:
  - args:
    - nvidia-cdi-hook
    - create-symlinks
    - --link
    - ../libnvidia-allocator.so.1::/usr/lib64/gbm/nvidia-drm_gbm.so
    - --link
    - libglxserver_nvidia.so.570.124.04::/usr/lib64/xorg/modules/extensions/libglxserver_nvidia.so
    hookName: createContainer
    path: /bin/nvidia-cdi-hook
  - args:
    - nvidia-cdi-hook
    - create-symlinks
    - --link
    - libGLX_nvidia.so.570.124.04::/usr/lib64/libGLX_indirect.so.0
    - --link
    - libcuda.so.1::/usr/lib64/libcuda.so
    - --link
    - libnvidia-opticalflow.so.1::/usr/lib64/libnvidia-opticalflow.so
    hookName: createContainer
    path: /bin/nvidia-cdi-hook
  - args:
    - nvidia-cdi-hook
    - update-ldcache
    - --folder
    - /usr/lib64
    - --folder
    - /usr/lib64/vdpau
    hookName: createContainer
    path: /bin/nvidia-cdi-hook
  mounts:
  - containerPath: /bin/nvidia-cuda-mps-control
    hostPath: /bin/nvidia-cuda-mps-control
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /bin/nvidia-cuda-mps-server
    hostPath: /bin/nvidia-cuda-mps-server
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /bin/nvidia-debugdump
    hostPath: /bin/nvidia-debugdump
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /bin/nvidia-persistenced
    hostPath: /bin/nvidia-persistenced
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /bin/nvidia-smi
    hostPath: /bin/nvidia-smi
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /run/nvidia-persistenced/socket
    hostPath: /run/nvidia-persistenced/socket
    options:
    - ro
    - nosuid
    - nodev
    - bind
    - noexec
  - containerPath: /usr/lib64/libEGL_nvidia.so.570.124.04
    hostPath: /usr/lib64/libEGL_nvidia.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libGLESv1_CM_nvidia.so.570.124.04
    hostPath: /usr/lib64/libGLESv1_CM_nvidia.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libGLESv2_nvidia.so.570.124.04
    hostPath: /usr/lib64/libGLESv2_nvidia.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libGLX_nvidia.so.570.124.04
    hostPath: /usr/lib64/libGLX_nvidia.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libcuda.so.570.124.04
    hostPath: /usr/lib64/libcuda.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libcudadebugger.so.570.124.04
    hostPath: /usr/lib64/libcudadebugger.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libnvcuvid.so.570.124.04
    hostPath: /usr/lib64/libnvcuvid.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libnvidia-allocator.so.570.124.04
    hostPath: /usr/lib64/libnvidia-allocator.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libnvidia-cfg.so.570.124.04
    hostPath: /usr/lib64/libnvidia-cfg.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libnvidia-egl-gbm.so.1.1.2
    hostPath: /usr/lib64/libnvidia-egl-gbm.so.1.1.2
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libnvidia-egl-wayland.so.1.1.17
    hostPath: /usr/lib64/libnvidia-egl-wayland.so.1.1.17
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libnvidia-eglcore.so.570.124.04
    hostPath: /usr/lib64/libnvidia-eglcore.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libnvidia-encode.so.570.124.04
    hostPath: /usr/lib64/libnvidia-encode.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libnvidia-fbc.so.570.124.04
    hostPath: /usr/lib64/libnvidia-fbc.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libnvidia-glcore.so.570.124.04
    hostPath: /usr/lib64/libnvidia-glcore.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libnvidia-glsi.so.570.124.04
    hostPath: /usr/lib64/libnvidia-glsi.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libnvidia-glvkspirv.so.570.124.04
    hostPath: /usr/lib64/libnvidia-glvkspirv.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libnvidia-gpucomp.so.570.124.04
    hostPath: /usr/lib64/libnvidia-gpucomp.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libnvidia-ml.so.570.124.04
    hostPath: /usr/lib64/libnvidia-ml.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libnvidia-ngx.so.570.124.04
    hostPath: /usr/lib64/libnvidia-ngx.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libnvidia-nvvm.so.570.124.04
    hostPath: /usr/lib64/libnvidia-nvvm.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libnvidia-opencl.so.570.124.04
    hostPath: /usr/lib64/libnvidia-opencl.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libnvidia-opticalflow.so.570.124.04
    hostPath: /usr/lib64/libnvidia-opticalflow.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libnvidia-pkcs11-openssl3.so.570.124.04
    hostPath: /usr/lib64/libnvidia-pkcs11-openssl3.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libnvidia-ptxjitcompiler.so.570.124.04
    hostPath: /usr/lib64/libnvidia-ptxjitcompiler.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libnvidia-rtcore.so.570.124.04
    hostPath: /usr/lib64/libnvidia-rtcore.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libnvidia-sandboxutils.so.570.124.04
    hostPath: /usr/lib64/libnvidia-sandboxutils.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libnvidia-tls.so.570.124.04
    hostPath: /usr/lib64/libnvidia-tls.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libnvidia-vksc-core.so.570.124.04
    hostPath: /usr/lib64/libnvidia-vksc-core.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/libnvoptix.so.570.124.04
    hostPath: /usr/lib64/libnvoptix.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /etc/vulkan/implicit_layer.d/nvidia_layers.json
    hostPath: /usr/share/vulkan/implicit_layer.d/nvidia_layers.json
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/vdpau/libvdpau_nvidia.so.570.124.04
    hostPath: /usr/lib64/vdpau/libvdpau_nvidia.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/share/nvidia/nvoptix.bin
    hostPath: /usr/share/nvidia/nvoptix.bin
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /lib/firmware/nvidia/570.124.04/gsp_ga10x.bin
    hostPath: /lib/firmware/nvidia/570.124.04/gsp_ga10x.bin
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /lib/firmware/nvidia/570.124.04/gsp_tu10x.bin
    hostPath: /lib/firmware/nvidia/570.124.04/gsp_tu10x.bin
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/share/egl/egl_external_platform.d/10_nvidia_wayland.json
    hostPath: /usr/share/egl/egl_external_platform.d/10_nvidia_wayland.json
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/share/egl/egl_external_platform.d/15_nvidia_gbm.json
    hostPath: /usr/share/egl/egl_external_platform.d/15_nvidia_gbm.json
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/share/glvnd/egl_vendor.d/10_nvidia.json
    hostPath: /usr/share/glvnd/egl_vendor.d/10_nvidia.json
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/xorg/modules/drivers/nvidia_drv.so
    hostPath: /usr/lib64/xorg/modules/drivers/nvidia_drv.so
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib64/xorg/modules/extensions/libglxserver_nvidia.so.570.124.04
    hostPath: /usr/lib64/xorg/modules/extensions/libglxserver_nvidia.so.570.124.04
    options:
    - ro
    - nosuid
    - nodev
    - bind
devices:
- containerEdits:
    deviceNodes:
    - path: /dev/nvidia0
    - path: /dev/dri/card0
    - path: /dev/dri/renderD129
    hooks:
    - args:
      - nvidia-cdi-hook
      - create-symlinks
      - --link
      - ../card0::/dev/dri/by-path/pci-0000:01:00.0-card
      - --link
      - ../renderD129::/dev/dri/by-path/pci-0000:01:00.0-render
      hookName: createContainer
      path: /bin/nvidia-cdi-hook
    - args:
      - nvidia-cdi-hook
      - chmod
      - --mode
      - "755"
      - --path
      - /dev/dri
      hookName: createContainer
      path: /bin/nvidia-cdi-hook
  name: "0"
- containerEdits:
    deviceNodes:
    - path: /dev/nvidia0
    - path: /dev/dri/card0
    - path: /dev/dri/renderD129
    hooks:
    - args:
      - nvidia-cdi-hook
      - create-symlinks
      - --link
      - ../card0::/dev/dri/by-path/pci-0000:01:00.0-card
      - --link
      - ../renderD129::/dev/dri/by-path/pci-0000:01:00.0-render
      hookName: createContainer
      path: /bin/nvidia-cdi-hook
    - args:
      - nvidia-cdi-hook
      - chmod
      - --mode
      - "755"
      - --path
      - /dev/dri
      hookName: createContainer
      path: /bin/nvidia-cdi-hook
  name: GPU-aac2e917-ee53-8bd4-2b90-0f6e8485ce1e
- containerEdits:
    deviceNodes:
    - path: /dev/nvidia0
    - path: /dev/dri/card0
    - path: /dev/dri/renderD129
    hooks:
    - args:
      - nvidia-cdi-hook
      - create-symlinks
      - --link
      - ../card0::/dev/dri/by-path/pci-0000:01:00.0-card
      - --link
      - ../renderD129::/dev/dri/by-path/pci-0000:01:00.0-render
      hookName: createContainer
      path: /bin/nvidia-cdi-hook
    - args:
      - nvidia-cdi-hook
      - chmod
      - --mode
      - "755"
      - --path
      - /dev/dri
      hookName: createContainer
      path: /bin/nvidia-cdi-hook
  name: all
kind: nvidia.com/gpu
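Since a CDI spec exists on the host, one alternative worth trying is to request the GPU via its CDI device name instead of the legacy `deploy.resources.reservations` path. This is a hedged sketch, assuming a Docker Engine and Compose version with CDI support enabled (exact version requirements depend on your installation):

```yaml
# Hypothetical alternative: inject the GPU by CDI name. Requires CDI support
# to be enabled in the Docker engine; the device name must match an entry in
# the generated /etc/cdi/nvidia.yaml.
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:cuda
    devices:
      - nvidia.com/gpu=all   # matches "name: all" in the CDI spec above
```

Note this only changes how the device is exposed to the container; the startup CUDA check still needs the driver library to be loadable from the container's Python environment.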
GiteaMirror added the bug label 2025-11-11 15:52:03 -06:00
Reference: github-starred/open-webui#4349