[GH-ISSUE #24371] issue: Worker dies (SIGKILL) on Knowledge Base upload of large MP3 files — pydub/ffmpeg pre-conversion ignores configured remote STT engine #58949

Closed
opened 2026-05-06 00:35:50 -05:00 by GiteaMirror · 1 comment

Originally created by @CallSohail on GitHub (May 5, 2026).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/24371

Check Existing Issues

  • I have searched for any existing and/or related issues.
  • I have searched for any existing and/or related discussions.
  • I have also searched in the CLOSED issues AND CLOSED discussions and found no related items (your issue might already be addressed on the development branch!).
  • I am using the latest version of Open WebUI.

Installation Method

Git Clone

Open WebUI Version

v0.9.2-cuda

Ollama Version (if applicable)

N/A — issue is independent of Ollama. Ollama is used only for embeddings (snowflake-arctic-embed2:latest), not for transcription.

Operating System

Debian 12 (host) — Open WebUI runs in Docker (ghcr.io/open-webui/open-webui:v0.9.2-cuda) on Linux 6.1.0-41-amd64

Browser (if applicable)

No response

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
  • Start with the initial platform/version/OS and dependencies used,
  • Specify exact install/launch/configure commands,
  • List URLs visited, user input (incl. example values/emails/passwords if needed),
  • Describe all options and toggles enabled or changed,
  • Include any files or environmental changes,
  • Identify the expected and actual result at each stage,
  • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

When the Speech-to-Text engine is configured to a remote OpenAI-compatible endpoint (in my case a self-hosted WhisperX proxy at http://whisperx-proxy:8767/v1 with large-v3), uploading an audio file to a Knowledge Base should:

  1. Save the file to disk.
  2. Stream the audio file directly to the configured remote STT endpoint.
  3. Receive the transcript back.
  4. Index the transcript into the vector store (pgvector).

The size of the audio file should not matter as long as the configured remote endpoint can handle it. My WhisperX endpoint handles a 400 MB MP3 with no issue when called directly via curl.
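
For reference, this is roughly what step 2 would look like as a direct Python call against my endpoint (a minimal sketch using the requests library, not Open WebUI code):

import requests

STT_URL = "http://whisperx-proxy:8767/v1/audio/transcriptions"

with open("example.mp3", "rb") as f:  # any MP3; size limited only by the remote server
    resp = requests.post(
        STT_URL,
        headers={"Authorization": "Bearer dummy"},  # my proxy does not validate the key
        files={"file": ("example.mp3", f, "audio/mpeg")},
        data={"model": "whisper-large-v3", "response_format": "text"},
        timeout=1800,
    )
resp.raise_for_status()
print(resp.text)  # full transcript

(requests buffers the multipart body in memory, so the cost is roughly the encoded file size, a few hundred MB here, nowhere near the multi-GB decoded PCM described under Actual Behavior.)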

Actual Behavior

For small audio files (≤ ~10 MB), the flow works as expected and the request reaches http://whisperx-proxy:8767/v1/audio/transcriptions successfully (HTTP 200, transcript returned and indexed).

For large audio files (tested with a 346 MB MP3, ~3 hours of audio), the Uvicorn worker is killed mid-request. The request never reaches the configured remote STT endpoint.

Root cause from the logs:

In open_webui/routers/audio.py, the transcribe() function unconditionally calls convert_audio_to_mp3(), which in turn invokes pydub.AudioSegment.from_file(...). pydub spawns ffmpeg to fully decode the MP3 into raw pcm_s16le and pipes the entire decoded PCM back into Python memory as an AudioSegment object.

For a 3-hour MP3 this is approximately 3.5 GB of decoded PCM held in Python memory inside a single synchronous request handler. The Uvicorn worker is killed during this decode (Child process [PID] died) and the container restarts.
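
A rough back-of-envelope check of that figure, assuming 16-bit stereo PCM at 44.1 kHz (typical for a 256 kbps MP3):

duration_s = 3 * 3600        # ~3 hours of audio
sample_rate = 44_100         # Hz (assumed)
channels = 2                 # stereo (assumed)
bytes_per_sample = 2         # pcm_s16le

pcm_bytes = duration_s * sample_rate * channels * bytes_per_sample
print(f"{pcm_bytes / 1e9:.1f} GB of raw PCM")  # ~1.9 GB

# pydub reads this from ffmpeg's stdout into a bytes buffer and then builds the
# AudioSegment from it, so peak usage is plausibly around twice the raw size,
# which matches the ~3.5 GB estimate above.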

The same in-process pydub pre-decode runs even though the configured STT engine is a remote OpenAI-compatible endpoint that has no 25 MB limit and could ingest the original MP3 directly. The pydub conversion + the chunking logic that follows it appear to be hardcoded for OpenAI's hosted Whisper API limit (25 MB) and incorrectly applied to all openai-flavoured engines, including local/remote OpenAI-compatible servers.

Net result: any audio file large enough that the decoded PCM exceeds available worker memory crashes the worker — even when the configured remote engine could have handled it natively.

Steps to Reproduce

Environment

  • Host: Debian 12, 125 GB RAM, dual NVIDIA L40S
  • Docker: Open WebUI v0.9.2-cuda
  • DB: PostgreSQL + pgvector
  • Cache: Redis (Valkey 8)
  • Reverse proxy: nginx with client_max_body_size 1024M
  • Remote STT: a self-hosted OpenAI-compatible WhisperX server, reachable at whisperx-proxy:8767 on the same Docker network

1. Run a remote OpenAI-compatible STT server

Any OpenAI-compatible Whisper server works. I use WhisperX behind a small proxy that exposes /v1/audio/transcriptions. Verify it works:

curl -sS http://whisperx-proxy:8767/v1/models
# {"object":"list","data":[{"id":"whisper-large-v3",...}]}

curl -sS http://whisperx-proxy:8767/health
# {"status":"healthy","device":"cuda","loaded_models":["large-v3"]}

2. Deploy Open WebUI v0.9.2-cuda via docker-compose

docker-compose.yml (relevant parts):

services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:v0.9.2-cuda
    container_name: open-webui
    restart: unless-stopped
    ports:
      - "3000:8080"
    environment:
      VECTOR_DB: "pgvector"
      REDIS_URL: "redis://openwebui-redis:6379/0"
      WEBSOCKET_MANAGER: "redis"
      UVICORN_WORKERS: "2"
      RAG_EMBEDDING_ENGINE: "ollama"
      RAG_EMBEDDING_MODEL: "snowflake-arctic-embed2:latest"
      ENABLE_LDAP: "true"
      ENABLE_PERSISTENT_CONFIG: "true"
    networks:
      - webui-net

networks:
  webui-net:
    external: true
Then start the container:

cd /opt/open-webui
docker compose up -d open-webui

3. Configure Speech-to-Text

In Open WebUI:

  1. Open https://<your-host>/admin/settings
  2. Go to Settings → Audio
  3. Set Speech-to-Text Engine = OpenAI
  4. Set API Base URL = http://whisperx-proxy:8767/v1
  5. Set API Key = any non-empty string (the local server doesn't validate it)
  6. Set STT Model = large-v3
  7. Save

4. Verify small audio works (control test)

  1. Take a small MP3 (e.g., a 2 MB / ~2 minute clip).
  2. Open WebUI → Workspace → Knowledge → create a KB → Add file → select the MP3.
  3. Observe: file uploads, gets transcribed, transcript is indexed.
  4. Check container logs — you will see a successful POST to whisperx-proxy:8767/v1/audio/transcriptions returning HTTP 200.

Expected: works.
Actual: works.

5. Reproduce the bug with a large audio file

  1. Take a large MP3 file (~346 MB, ~3 hours of audio at 256 kbps).
  2. Same KB → Add file → select the large MP3.
  3. Browser shows "uploading..." then the operation fails.
  4. Container logs show INFO: Child process [PID] died and the container restarts.

Expected: file is transcribed via the configured WhisperX endpoint.
Actual: Uvicorn worker dies mid-decode; the configured STT endpoint is never called.

6. Verify the remote endpoint is not at fault

From the same Docker network, send the exact same MP3 directly to the WhisperX endpoint, bypassing Open WebUI:

docker exec open-webui curl -X POST \
  http://whisperx-proxy:8767/v1/audio/transcriptions \
  -F "file=@/app/backend/data/uploads/<the-big-file>.mp3" \
  -F "model=whisper-large-v3" \
  -F "response_format=text" \
  -m 1800

This returns a complete transcript with HTTP 200 in normal time. The remote endpoint is healthy and capable. The bug is purely in Open WebUI's pre-conversion path.

Logs & Screenshots

A) Successful small file (2 MB MP3) — full happy path

INFO  | open_webui.routers.files:upload_file_handler:210 - file.content_type: audio/mpeg True
INFO  | uvicorn ... "POST /api/v1/files/?process=true HTTP/1.1" 200
INFO  | open_webui.routers.audio:transcribe:1102 - transcribe: /app/backend/data/uploads/6a67a90b-..._example.mp3 {}
DEBUG | pydub.logging_utils:log_conversion:9 - subprocess.call(['ffmpeg', '-y', '-i', '...mp3', '-acodec', 'pcm_s16le', '-vn', '-f', 'wav', '-'])
DEBUG | pydub.logging_utils:log_conversion:9 - subprocess.call(['ffmpeg', '-y', '-f', 'wav', '-i', '/tmp/tmp1v9ycgmk', '-f', 'mp3', '/tmp/tmpn7txj1c_'])
INFO  | open_webui.routers.audio:convert_audio_to_mp3:123 - Converted /app/.../example.mp3 to /app/.../example.mp3
DEBUG | urllib3.connectionpool:_new_conn:241 - Starting new HTTP connection (1): whisperx-proxy:8767
DEBUG | urllib3.connectionpool:_make_request:544 - http://whisperx-proxy:8767 "POST /v1/audio/transcriptions HTTP/1.1" 200 1786
INFO  | open_webui.retrieval.vector.dbs.pgvector:insert:333 - Inserted 1 items into collection 'file-...'.
DEBUG | open_webui.routers.retrieval:process_file:1675 - text_content: SPEAKER_01: ... <full transcript>

B) Failed large file (346 MB MP3) — worker killed mid-decode

INFO  | open_webui.routers.files:upload_file_handler:210 - file.content_type: audio/mpeg True
INFO  | uvicorn ... "POST /api/v1/files/?process=true HTTP/1.1" 200
INFO  | open_webui.routers.audio:transcribe:1102 - transcribe: /app/backend/data/uploads/96c45625-..._audio_CSA_09.04.2026.mp3 {}
DEBUG | pydub.logging_utils:log_conversion:9 - subprocess.call(['ffmpeg', '-y', '-i', '...mp3', '-acodec', 'pcm_s16le', '-vn', '-f', 'wav', '-'])
INFO: Waiting for child process [1123]
INFO: Child process [1123] died
[full Open WebUI container restart, OPEN WEBUI banner reprinted, app reinitialises]

The second ffmpeg (re-encode WAV → MP3), the convert_audio_to_mp3 completion log, and the urllib3 POST to whisperx-proxy:8767 never appear — the worker dies during the first decode pass.
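
The decode step can be reproduced in isolation inside the container (pydub is installed in the image, as the DEBUG lines above show). The following standalone snippet is roughly what convert_audio_to_mp3() triggers; running it on the large file will itself consume several GB and may be killed:

from pydub import AudioSegment  # same call path the router uses

path = "/app/backend/data/uploads/<the-big-file>.mp3"  # the 346 MB upload
seg = AudioSegment.from_file(path)  # spawns ffmpeg, buffers all decoded PCM in RAM
print(len(seg) / 60000, "minutes decoded;",
      len(seg.raw_data) / 1e9, "GB of PCM held in memory")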

C) Direct call to WhisperX with the same file — succeeds

curl -X POST http://whisperx-proxy:8767/v1/audio/transcriptions \
  -F "file=@audio_CSA_09.04.2026.mp3" \
  -F "model=whisper-large-v3" \
  -F "response_format=text"
# → HTTP 200, full transcript returned

D) System state at time of failure

$ free -h
               total        used        free      shared  buff/cache   available
Mem:           125Gi        72Gi       6.3Gi       335Mi        48Gi        53Gi
Swap:          976Mi       976Mi          0B

$ docker stats open-webui --no-stream
CONTAINER ID   NAME         CPU %     MEM USAGE / LIMIT     MEM %     PIDS
804a098681c4   open-webui   1.18%     3.185GiB / 125.4GiB   2.54%     335

$ docker inspect open-webui | grep -i oomkill
"OOMKilled": false,
"OomKillDisable": null,

The system has 53 GB available, the container is not OOMKilled at the cgroup level, and the host shows no memory pressure. The kill happens at the Uvicorn worker level during the pydub decode of a multi-GB in-Python AudioSegment.
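
For anyone reproducing this, the worker's memory growth can be watched directly from /proc while the upload runs. A small helper sketch (the PID is the Uvicorn worker that logged the transcribe: line):

import sys, time

def rss_mb(pid: int) -> float:
    # VmRSS is reported in kB in /proc/<pid>/status
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) / 1024
    return 0.0

pid = int(sys.argv[1])  # PID of the Uvicorn worker handling the request
while True:
    print(f"worker {pid}: {rss_mb(pid):.0f} MB RSS")
    time.sleep(2)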

E) Confirmed STT routing for small file

The successful 2 MB run shows the request does reach the configured WhisperX endpoint:

DEBUG | urllib3 ... http://whisperx-proxy:8767 "POST /v1/audio/transcriptions HTTP/1.1" 200

So the engine config is correct. The bug is the unconditional pydub pre-decode that runs before dispatch.
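
For what it's worth, the kind of guard that would avoid this is sketched below. This is hypothetical and simplified, not the current open_webui/routers/audio.py code; the function names and the threshold handling are illustrative only:

import os

OPENAI_HOSTED_LIMIT = 25 * 1024 * 1024  # the hosted Whisper API cap the chunking targets

def prepare_audio_for_stt(path: str, engine: str, api_base_url: str) -> str:
    # Only OpenAI's hosted API actually needs the 25 MB pre-conversion/chunking.
    hosted_openai = engine == "openai" and "api.openai.com" in api_base_url
    if not hosted_openai or os.path.getsize(path) <= OPENAI_HOSTED_LIMIT:
        # Remote OpenAI-compatible servers (or small files): send the original
        # file as-is, with no in-process decode and no chunking.
        return path
    # Oversized file destined for the hosted API: convert/split, ideally by
    # streaming through ffmpeg instead of decoding into one AudioSegment.
    return convert_and_chunk(path)  # hypothetical helper

Even just skipping the pydub pre-decode whenever the API base URL is not OpenAI's hosted endpoint would resolve this case.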

Additional Information

No response

GiteaMirror added the bug label 2026-05-06 00:35:50 -05:00

@owui-terminator[bot] commented on GitHub (May 5, 2026):

🔍 Similar Issues Found

I found some existing issues that might be related. Please check if any of these are duplicates or contain helpful solutions:

  1. #23014 issue: file upload as knowledge base to agent fails to respond and results in object not iterable
    by sanchitbhavsar · bug

  2. #15535 issue: Plain text file upload to knowledge fails with 400: 'NoneType' object has no attribute 'encode'
    by GanizaniSitara · bug

  3. #15702 issue: Failed uploading large markdown files to Knowledge
    by raymondhs · bug

  4. #15828 issue: Unable to upload document in chat / 0.6.16
    by GlisseManTV · bug

  5. #14336 issue: Memory Leak when uploading files to Knowledge
    by FringeNet · bug


💡 If this is a duplicate, consider closing it and adding details to the existing issue.

This comment was generated automatically. React with 👍 if helpful, 👎 if not.

Reference: github-starred/open-webui#58949