[GH-ISSUE #23962] issue: High CPU usage after a query #35658

Closed
opened 2026-04-25 09:50:31 -05:00 by GiteaMirror · 11 comments

Originally created by @vk2r on GitHub (Apr 21, 2026).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/23962

Check Existing Issues

  • I have searched for any existing and/or related issues.
  • I have searched for any existing and/or related discussions.
  • I have also searched in the CLOSED issues AND CLOSED discussions and found no related items (your issue might already be addressed on the development branch!).
  • I am using the latest version of Open WebUI.

Installation Method

Docker

Open WebUI Version

0.9.1

Ollama Version (if applicable)

No response

Operating System

Proxmox 9.1.7

Browser (if applicable)

Zen Browser

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
  • Start with the initial platform/version/OS and dependencies used,
  • Specify exact install/launch/configure commands,
  • List URLs visited, user input (incl. example values/emails/passwords if needed),
  • Describe all options and toggles enabled or changed,
  • Include any files or environmental changes,
  • Identify the expected and actual result at each stage,
  • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

I expect CPU usage not to increase drastically, or at the very least to drop back down once Open WebUI is no longer in use.

Actual Behavior

Since this update, I've observed a significant increase in CPU usage after a query completes. Once the query has finished responding and generating the follow-ups and title, CPU usage unexpectedly spikes to approximately 25%, even when there appears to be no other activity. This behavior was not present in earlier versions.
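
For reference, the container's CPU can be sampled programmatically; a minimal sketch using the docker Python SDK (pip install docker), assuming the container name open-webui from the compose file below, with the same percentage formula that docker stats reports:

# Rough CPU sampler for the open-webui container via the docker SDK
# (pip install docker). The container name matches the compose file
# below; the percentage formula mirrors what `docker stats` shows.
import time
import docker

client = docker.from_env()
container = client.containers.get("open-webui")

while True:
    s = container.stats(stream=False)  # one stats snapshot per call
    pre = s.get("precpu_stats", {})
    cpu_delta = (s["cpu_stats"]["cpu_usage"]["total_usage"]
                 - pre.get("cpu_usage", {}).get("total_usage", 0))
    sys_delta = (s["cpu_stats"].get("system_cpu_usage", 0)
                 - pre.get("system_cpu_usage", 0))
    ncpus = s["cpu_stats"].get("online_cpus", 1)
    if sys_delta > 0:
        print(f"{cpu_delta / sys_delta * ncpus * 100:.1f}%")
    time.sleep(2)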

Steps to Reproduce

In Proxmox:

  1. Install Alpine Linux with Docker
  2. Install OpenWebUI with Docker Compose
  3. Run it and submit a query using the Llama-Swap integration

Docker Compose

x-lockdown: &lockdown
  read_only: true
  security_opt:
    - "no-new-privileges=true"

services:
  open-webui:
    container_name: open-webui
    image: ghcr.io/open-webui/open-webui:${OWUI_VERSION}
    networks:
      - public
      - backend
    ports:
      - 8080:8080
    volumes:
      - data:/app/backend/data
    restart: always
    environment:
      - ENV=$OWUI_ENV
      - TZ=$OWUI_DB_TZ
      - USER_PERMISSIONS_CHAT_TEMPORARY=$OWUI_USER_PERMISSIONS_CHAT_TEMPORARY
      - OAUTH_CLIENT_ID=$OWUI_OAUTH_CLIENT_ID
      - OAUTH_CLIENT_SECRET=$OWUI_OAUTH_CLIENT_SECRET
      - OPENID_PROVIDER_URL=$OWUI_OPENID_PROVIDER_URL
      - OAUTH_PROVIDER_NAME=$OWUI_OAUTH_PROVIDER_NAME
      - OAUTH_SCOPES=$OWUI_OAUTH_SCOPES
      - OAUTH_GROUP_CLAIM=$OWUI_OAUTH_GROUP_CLAIM
      - OPENID_REDIRECT_URI=$OWUI_OPENID_REDIRECT_URI
      - ENABLE_OAUTH_ROLE_MANAGEMENT=$OWUI_ENABLE_OAUTH_ROLE_MANAGEMENT
      - ENABLE_OAUTH_SIGNUP=$OWUI_ENABLE_OAUTH_SIGNUP
      - ENABLE_OAUTH_GROUP_CREATION=$OWUI_ENABLE_OAUTH_GROUP_CREATION
      - WEBUI_AUTH=$OWUI_WEBUI_AUTH
      - WEBUI_ENABLE_SSO=$OWUI_WEBUI_ENABLE_SSO
      - WEBUI_AUTH_TYPE=$OWUI_WEBUI_AUTH_TYPE
      - WEBUI_SESSION_COOKIE_SAME_SITE=$OWUI_WEBUI_SESSION_COOKIE_SAME_SITE
      - ENABLE_PERSISTENT_CONFIG=$OWUI_ENABLE_PERSISTENT_CONFIG
      - ENABLE_PASSWORD_AUTH=$OWUI_ENABLE_PASSWORD_AUTH
      - ENABLE_LOGIN_FORM=$OWUI_ENABLE_LOGIN_FORM
      - ENABLE_SIGNUP=$OWUI_ENABLE_SIGNUP
      - ENABLE_OLLAMA_API=$OWUI_ENABLE_OLLAMA_API
      - ENABLE_API_KEYS=true
      - ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS=false
      - WEBUI_URL=$OWUI_WEBUI_URL
      - CORS_ALLOW_ORIGIN=$OWUI_CORS_ALLOW_ORIGIN
      - SEARXNG_QUERY_URL=$OWUI_SEARXNG_QUERY_URL
      - SEARXNG_LANGUAGE=$OWUI_SEARXNG_LANGUAGE
      - DATABASE_URL=$OWUI_DB_URL
      - ENABLE_WEBSOCKET_SUPPORT=$OWUI_WEBSOCKET_SUPPORT
      - WEBSOCKET_MANAGER=$OWUI_WEBSOCKET_MANAGER
      - REDIS_URL=$OWUI_REDIS_URL
      - JWT_EXPIRES_IN=$OWUI_JWT_EXPIRES_IN
      - GLOBAL_LOG_LEVEL=DEBUG
    depends_on:
      database:
        condition: "service_healthy"
        restart: true
    logging:
      driver: "json-file"
      options:
        max-size: "5m"
        max-file: "3"

  open-terminal:
    image: ghcr.io/open-webui/open-terminal
    container_name: open-terminal
    ports:
      - 8000:8000
    networks:
      - public
    volumes:
      - open-terminal:/home/user
    environment:
      - OPEN_TERMINAL_MULTI_USER=true
      - OPEN_TERMINAL_API_KEY=$OPEN_TERMINAL_API_KEY

  database:
    container_name: database
    image: 11notes/postgres:18
    ports:
      - "5432:5432/tcp"
    environment:
      - TZ=$OWUI_DB_TZ
      - POSTGRES_PASSWORD=$OWUI_DB_PASSWORD
      - POSTGRES_BACKUP_SCHEDULE=$OWUI_DB_BACKUP_SCHEDULE
      - POSTGRES_BACKUP_RETENTION=$OWUI_DB_BACKUP_RETENTION
    volumes:
      - database.etc:/postgres/etc
      - database.var:/postgres/var
      - /mnt/backups:/postgres/backup
    tmpfs:
      - "/postgres/run:uid=1000,gid=1000"
      - "/postgres/log:uid=1000,gid=1000"
    networks:
      - backend
    restart: "always"

  cache:
    container_name: cache
    image: redis:alpine
    hostname: redis
    restart: unless-stopped
    command: redis-server --requirepass ${OWUI_REDIS_PASSWORD}
    volumes:
      - cache:/data
    networks:
      - backend

networks:
  public:
    driver: bridge
  backend:
    internal: true

volumes:
  data:
  cache:
  database.etc:
  database.var:
  open-terminal:

Logs & Screenshots

Logs: https://github.com/user-attachments/files/26945101/logs.txt

Video
https://github.com/user-attachments/assets/8bfe769f-8a5d-48c8-8fdf-f23c5f7f09c4

GiteaMirror added the bug label 2026-04-25 09:50:31 -05:00

@pauloalexcosta commented on GitHub (Apr 21, 2026):

Bug report: CPU spikes to 100% on first message when MCPHub is connected

Version: Regression introduced after 0.8.2 — not present in that version.


Reproduction

Reproducible with any model. Steps:

  1. Connect to MCPHub
  2. Open a new chat session
  3. Send any first message

Result: CPU immediately pegs at 100%.


Additional behavior: admin panel connection test

When testing the MCP integration via the admin panel, despite the connection appearing to work in chat, the following happens:

  • No success notification toast is shown
  • The logs emit a clear error (see below)

Error log

Summary:

RuntimeError: Attempted to exit cancel scope in a different task than it was entered in

Full traceback:

File "/usr/local/lib/python3.11/contextlib.py", line 687, in aclose
File "/usr/local/lib/python3.11/contextlib.py", line 745, in __aexit__
File "/usr/local/lib/python3.11/contextlib.py", line 728, in __aexit__
File "/usr/local/lib/python3.11/contextlib.py", line 231, in __aexit__
File "/usr/local/lib/python3.11/site-packages/mcp/client/streamable_http.py", line 717, in streamablehttp_client
File "/usr/local/lib/python3.11/contextlib.py", line 231, in __aexit__
File "/usr/local/lib/python3.11/site-packages/mcp/client/streamable_http.py", line 647, in streamable_http_client
    async with anyio.create_task_group() as tg:
File "/usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 805, in __aexit__
File "/usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 455, in __exit__
    raise RuntimeError(

RuntimeError: Attempted to exit cancel scope in a different task than it was entered in

Notes

  • Error originates in mcp/client/streamable_http.py around anyio cancel scope / task group teardown
  • Likely cause: MCP client cleanup is running in a different asyncio task context than the one it was initialized in, pointing to a concurrency or lifecycle change introduced after 0.8.2 (a minimal sketch below reproduces this)
  • Full container logs attached (relevant entries begin around 20:59:29.460)

merged-logs.txt: https://github.com/user-attachments/files/26947199/merged-logs.1.txt
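
If that hypothesis is correct, the error is easy to reproduce in isolation; a minimal sketch with plain anyio (not OWUI code), entering a task group in one asyncio task and exiting it in another:

# Minimal sketch of the hypothesised failure mode: an anyio task group
# owns a cancel scope that must be exited by the task that entered it.
# Exiting from a different task raises the same RuntimeError as above.
import asyncio
import anyio

async def main():
    tg = anyio.create_task_group()  # async context manager
    await tg.__aenter__()           # cancel scope entered in THIS task

    async def teardown_elsewhere():
        # RuntimeError: Attempted to exit cancel scope in a different
        # task than it was entered in
        await tg.__aexit__(None, None, None)

    await asyncio.create_task(teardown_elsewhere())

asyncio.run(main())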


@vk2r commented on GitHub (Apr 21, 2026):

> Bug report: CPU spikes to 100% on first message when MCPHub is connected
> […]

I hadn't thought it could be MCPHub. I didn't have any problems with it before, but it seems we're now two people who rely on MCPHub and are hitting issues. I appreciate your report.


@oatmealm commented on GitHub (Apr 22, 2026):

I'm seeing anyio crash calls to an MCP endpoint using HTTP transport, with exactly this message, after upgrading to 0.9.1.


@fmonnier74 commented on GitHub (Apr 22, 2026):

I am having the same issue. The machine has 4 cores, and after a single prompt is sent, one core goes to 100% forever.


@dude75 commented on GitHub (Apr 22, 2026):

I am having the same issue


@bboyles303 commented on GitHub (Apr 22, 2026):

Just echoing this as well to show it is affecting more users. One prompt and a single core goes to 100% and stays there until I restart the service. I also noticed that the chat working indicator never stops spinning, and the UI gets progressively slower until the service is restarted.


@somera commented on GitHub (Apr 22, 2026):

I updated from v0.8.12 to v0.9.1, and my VM runs on Proxmox 8.x. After logging into Open WebUI I didn't see any models, and the logs showed exceptions. I had no time for a deep analysis, so I went back to my backup with v0.8.12.

Perhaps I'll have time to look at this over the weekend.

Why so many problems in this release?


@pauloalexcosta commented on GitHub (Apr 22, 2026):

Workaround for CPU spike when using MCPHub "the regular way".

For those hitting this with MCP tool servers, the root cause is OWUI's streamable HTTP client leaking async tasks on teardown, which spins the event loop at 100% CPU until the container is restarted. If I'm understanding it all correctly, this is a known recurring bug, also documented in #18316 and #18279.

Working workaround until this is fixed: route your MCP servers through mcpo (https://github.com/open-webui/mcpo) and connect them to OWUI as OpenAPI tool servers instead of MCP Streamable HTTP. This bypasses the broken transport entirely.

Setup

Add mcpo to your docker-compose alongside your MCP server(s):

mcpo:
  image: ghcr.io/open-webui/mcpo:main
  ports:
    - "3101:8000"
  volumes:
    - ./mcpo_config.json:/app/config.json
  command: --port 8000 --api-key "your-mcpo-secret" --config /app/config.json
  restart: unless-stopped

Create mcpo_config.json next to your compose file. Important: use type: sse, not streamable-http; mcpo only forwards headers (for auth) on SSE-type connections:

{
  "mcpServers": {
    "my-server": {
      "type": "sse",
      "url": "http://your-mcp-host/sse/my-server",
      "headers": {
        "Authorization": "Bearer your-mcp-server-token"
      }
    }
  }
}

Then in OWUI, add each server as an OpenAPI (not MCP) tool server pointing to http://your-host:3101/my-server, with Bearer token your-mcpo-secret.

Key gotcha

mcpo silently ignores the headers field for streamable-http type entries, so auth tokens don't get forwarded and you get 403s on tool calls. Using type: sse with the /sse/ endpoint path works correctly.

No CPU spikes, no container restarts needed. The OpenAPI path in OWUI is stable and unaffected by the streamable HTTP bug.
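
As a quick sanity check of the mcpo route, you can fetch a mounted server's OpenAPI spec and list its paths (one per tool); a small standard-library sketch, reusing the placeholder host, port, and API key from above (the /my-server/openapi.json path is an assumption based on mcpo mounting each configured server under its own route):

# Sanity check for the mcpo setup above: fetch the OpenAPI spec of a
# mounted server and print its paths, one per exposed MCP tool. The
# host, port, spec path, and API key are the placeholders from the
# workaround; substitute your own values.
import json
import urllib.request

req = urllib.request.Request(
    "http://your-host:3101/my-server/openapi.json",
    headers={"Authorization": "Bearer your-mcpo-secret"},
)
with urllib.request.urlopen(req) as resp:
    spec = json.load(resp)

print(sorted(spec.get("paths", {})))  # expect one entry per tool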


@pauloalexcosta commented on GitHub (Apr 24, 2026):

On my end I can confirm the fix delivered in v0.9.2 is working fine.
CPU usage is back to normal and I can use the MCP servers in streamable HTTP format again.

👍


@fmonnier74 commented on GitHub (Apr 24, 2026):

I confirm as well; it has worked like a charm since v0.9.2.



@oatmealm commented on GitHub (Apr 24, 2026):

It works for me, but models claim they can't see any tools, despite the MCP server being enabled on the model and perfectly accessible from opencode and other agents.

<!-- gh-comment-id:4316517613 --> @oatmealm commented on GitHub (Apr 24, 2026): It works for me but models claim they can't see any tools despite the MCP being enabled on the model and perfectly accessible from opencode and other agents.

Reference: github-starred/open-webui#35658