WhisperAI crashes the docker-container #1653

Closed
opened 2025-11-11 14:49:26 -06:00 by GiteaMirror · 0 comments

Originally created by @Husky110 on GitHub (Aug 1, 2024).

Bug Report

Description

Bug Summary:
Whenever I try to use the audio-recording feature, my Docker container crashes.
Here are the container logs from the crash event:

INFO  [apps.audio.main] file.content_type: audio/wav
Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory
Loading WEBUI_SECRET_KEY from file, not provided as an environment variable.
Loading WEBUI_SECRET_KEY from .webui_secret_key
USE_OLLAMA is set to true, starting ollama serve.
CUDA is enabled, appending LD_LIBRARY_PATH to include torch/cudnn & cublas libraries.
2024/08/01 05:20:28 routes.go:1099: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-01T05:20:28.659Z level=INFO source=images.go:784 msg="total blobs: 53"
time=2024-08-01T05:20:28.660Z level=INFO source=images.go:791 msg="total unused blobs removed: 0"
time=2024-08-01T05:20:28.660Z level=INFO source=routes.go:1146 msg="Listening on 127.0.0.1:11434 (version 0.3.0)"
time=2024-08-01T05:20:28.661Z level=WARN source=assets.go:94 msg="found running ollama" pid=10 path=/tmp/ollama1439945203
time=2024-08-01T05:20:28.661Z level=WARN source=assets.go:94 msg="found running ollama" pid=13 path=/tmp/ollama2009279848
time=2024-08-01T05:20:28.661Z level=WARN source=assets.go:94 msg="found running ollama" pid=10 path=/tmp/ollama2237447354
time=2024-08-01T05:20:28.661Z level=WARN source=assets.go:94 msg="found running ollama" pid=10 path=/tmp/ollama3584355008
time=2024-08-01T05:20:28.661Z level=WARN source=assets.go:100 msg="unable to cleanup stale tmpdir" path=/tmp/ollama868162921 error="remove /tmp/ollama868162921: directory not empty"
time=2024-08-01T05:20:28.661Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama161789335/runners

At startup it tried to download the Whisper model, but was unable to do so:

INFO  [apps.audio.main] file.content_type: audio/wav
WARNI [faster_whisper] An error occured while synchronizing the model Systran/faster-whisper-base from the Hugging Face Hub: Cannot find an appropriate cached snapshot folder for the specified revision on the local disk and outgoing traffic has been disabled. To enable repo look-ups and downloads online, pass 'local_files_only=False' as input.
WARNI [faster_whisper] Trying to load the model directly from the local cache, if it exists.
WARNI [apps.audio.main] WhisperModel initialization failed, attempting download with local_files_only=False
bdb5e801-8e76-472e-923e-f780a5ca5cf0.wav
INFO:     127.0.0.1:52588 - "GET /health HTTP/1.1" 200 OK
Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory
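The final line suggests the crash comes from the loader not finding libcudnn_ops_infer.so.8 anywhere on LD_LIBRARY_PATH. A stdlib-only diagnostic sketch that mimics that search can be run inside the container to confirm; `probe_cudnn` is a hypothetical helper, not part of Open WebUI:

```python
import ctypes
import glob
import os

def probe_cudnn(lib_name="libcudnn_ops_infer.so.8"):
    """Diagnostic sketch (hypothetical helper): walk LD_LIBRARY_PATH the
    way the dynamic loader would, then try to dlopen() each candidate.
    Returns the first loadable path, or None (matching the crash log)."""
    candidates = []
    for d in os.environ.get("LD_LIBRARY_PATH", "").split(":"):
        if d:
            candidates += glob.glob(os.path.join(d, lib_name))
    for path in candidates:
        try:
            ctypes.CDLL(path)  # fails with OSError if deps are missing
            return path
        except OSError:
            continue
    return None
```

If this returns None inside the container, the cuDNN 8 ops library is genuinely absent from the search path and the faster-whisper CUDA backend cannot initialize.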

Steps to Reproduce:
Run Open WebUI in Docker and try to use the voice recognition feature.

Expected Behavior:
Voice recognition works without crashing the container.

Actual Behavior:
The container crashes.

Environment

  • Open WebUI Version: 0.3.10

  • Ollama (if applicable): 0.3.0

  • Operating System: Ubuntu 22.04

  • Browser (if applicable): Ungoogled Chromium

Reproduction Details

Confirmation:

  • [x] I have read and followed all the instructions provided in the README.md.
  • [x] I am on the latest version of both Open WebUI and Ollama. (Well, the current Docker image does not ship with the latest version, so I had to build it myself.)
  • [ ] I have included the browser console logs. (There are none.)
  • [x] I have included the Docker container logs. (See above.)

Logs and Screenshots

Docker Container Logs:
See above

Installation Method

Docker, built from the Dockerfile with "USE_CUDA" and "USE_OLLAMA" set to true.
nvidia-smi output on my system:

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.183.06             Driver Version: 535.183.06   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3080 ...    On  | 00000000:01:00.0  On |                  N/A |
| N/A   46C    P8              20W / 115W |   7684MiB / 16384MiB |     21%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

Additional Information

I am not sure what actually happened here. My guess is that something went wrong during the Whisper model download and everything else is a follow-on error. Maybe some commands just need to be run and everything would be fine.
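One quick way to test the follow-on-error theory: the entrypoint log claims it appends torch/cudnn paths to LD_LIBRARY_PATH, so we can check whether the directory holding any pip-installed cuDNN libraries actually made it onto that path. A minimal sketch, assuming the `nvidia/cudnn/lib` wheel layout (an assumption, not confirmed for this image):

```python
import glob
import os
import site

def pip_cudnn_dirs():
    """Guess where pip-installed cuDNN wheels keep their shared libraries.
    The site-packages/nvidia/cudnn/lib layout is assumed, not confirmed."""
    dirs = []
    for sp in site.getsitepackages():
        dirs += [d for d in glob.glob(os.path.join(sp, "nvidia", "cudnn", "lib"))
                 if os.path.isdir(d)]
    return dirs

def on_ld_library_path(directory, env=None):
    """True if `directory` appears as an entry of LD_LIBRARY_PATH."""
    env = os.environ if env is None else env
    return directory in env.get("LD_LIBRARY_PATH", "").split(":")
```

If `pip_cudnn_dirs()` finds a directory that `on_ld_library_path()` reports as missing, the entrypoint's path handling (rather than the download) would be the likely culprit.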

Reference: github-starred/open-webui#1653