Mirror of https://github.com/open-webui/open-webui.git (synced 2026-03-22 06:02:06 -05:00)
issue: Open-WebUI sends a lot of requests #5565
Originally created by @kekePower on GitHub (Jun 17, 2025).
Check Existing Issues
Installation Method
Docker
Open WebUI Version
0.6.15
Ollama Version (if applicable)
0.9.1
Operating System
Mageia Linux
Browser (if applicable)
Zen
Confirmation
Expected Behavior
When I click "Send" I expect it to send the request and get the response.
Actual Behavior
When I click "Send", it sends several requests, not just one.
These extra requests are normally hidden from the user and will, in the worst case, incur extra cost because more tokens are requested.
Steps to Reproduce
Pull Docker image.
Run Docker image.
podman run -d -p 8080:8080 -e OLLAMA_BASE_URL=http://127.0.0.1:11434 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:latest
Open site and chat.
Logs & Screenshots
This is for a single request, a single question.
Additional Information
No response
@jrkropp commented on GitHub (Jun 17, 2025):
You likely have tasks enabled in settings. These run after your completion finishes to generate the title, tags, follow-up questions, etc. You can either turn them off or choose a lightweight model as your task model.
https://[openwebui_url]/admin/settings/interface
@Classic298 commented on GitHub (Jun 17, 2025):
This is expected and intended behaviour; title generation, tag generation, and so forth are separate requests.
@kekePower commented on GitHub (Jun 17, 2025):
Thanks. It wasn't intuitive and was somewhat poorly explained, but I found the settings and disabled them all as a test. This somewhat degrades the functionality, so I'll experiment a bit more.
Another question: why does it check for models all the time? Shouldn't it be using a cached list?
I see a request for models when I refresh the page, when I send a request, when I open the model list to switch models, and when I continue a conversation.
These are, imho, unnecessary requests.
@Classic298 commented on GitHub (Jun 17, 2025):
Why should it use a cached list?
Many people use OpenRouter or OpenAI directly and want to always have the latest version of the available models.
Models get removed, new ones get added, model IDs can change, and so forth.
Edit: Or Ollama! Load in a new model and it should show up immediately. A cached list would not be good here.
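The trade-off discussed above (fresh model list on every page load vs. fewer requests) could in principle be split with a short TTL cache. The sketch below is purely illustrative and not Open WebUI's actual code; `fetch` stands in for the real model-list HTTP call, and all names are hypothetical:

```python
import time


class ModelListCache:
    """Hypothetical sketch: cache the model-list response for `ttl` seconds.

    Within the TTL, repeated page loads reuse the cached list; after it
    expires, the next call refetches, so newly added or removed models
    still show up within `ttl` seconds.
    """

    def __init__(self, fetch, ttl=30.0, clock=time.monotonic):
        self.fetch = fetch            # callable returning the model list
        self.ttl = ttl                # seconds before a refetch is forced
        self.clock = clock            # injectable clock, for testing
        self._cached = None
        self._fetched_at = -float("inf")

    def get(self):
        now = self.clock()
        if now - self._fetched_at > self.ttl:
            self._cached = self.fetch()   # stale (or never fetched): refresh
            self._fetched_at = now
        return self._cached               # otherwise serve the cached copy


# Usage: count how many real fetches happen across repeated calls.
calls = []

def fake_fetch():
    calls.append(1)
    return ["llama3", "mistral"]

t = [0.0]
cache = ModelListCache(fake_fetch, ttl=30.0, clock=lambda: t[0])
cache.get()
cache.get()        # second call within the TTL: served from cache
t[0] = 31.0
cache.get()        # TTL expired: refetches
print(len(calls))  # → 2
```

With a TTL around 30 seconds, refreshing the page or reopening the model picker in quick succession would cost one request instead of several, while a freshly pulled Ollama model still appears after at most the TTL.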