Mirror of https://github.com/open-webui/open-webui.git (synced 2026-05-07 11:28:35 -05:00)
[GH-ISSUE #1288] feat: Make LLM integrations toggleable (Ollama, LiteLLM, ...?) #12430
Originally created by @qdrop17 on GitHub (Mar 25, 2024).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/1288
Originally assigned to: @tjbck on GitHub.
I host Open WebUI on a small server at home. I connect various inference providers to it, such as OpenRouter and OpenAI. Additionally, I have a gaming rig with an RTX 3090 GPU, which I only power on when needed due to its high energy consumption.
Problem:
When the gaming rig is offline, Open WebUI hits a timeout while loading tags, leaving the application on a blank white screen for 5-10 seconds. The issue appears to be at the network layer: as long as the host is running, the timeout does not occur even if the Ollama endpoint itself is down; only when the host is powered off does the request hang.
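The failure-mode difference described here can be demonstrated with a small TCP probe (a sketch for illustration, not Open WebUI code): a powered-off host never answers the SYN, so the connection attempt blocks for the full timeout, while a running host with the service stopped refuses immediately.

```python
import socket


def probe(host: str, port: int, timeout: float = 5.0) -> str:
    """Attempt a TCP connection and classify the failure mode."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"        # endpoint is up and accepting connections
    except ConnectionRefusedError:
        return "refused"         # host is up, service is down: fails fast
    except OSError:
        return "unreachable"     # host is off: blocks for the full timeout
```

The "refused" path returns in milliseconds, which is why pointing the URL at any live host (as in the workaround below) avoids the stall.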
Desired Solution:
It would be beneficial if the different integrations could be toggled in the settings or if the Ollama endpoint could be lazy-loaded, preventing it from hindering the application's startup process.
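One shape the toggle-plus-lazy-load idea could take (a minimal sketch with hypothetical names, not Open WebUI's actual implementation): probe the endpoint on a background thread so startup never blocks on it, and skip integrations the user has disabled.

```python
import threading
import urllib.request


class LazyIntegration:
    """Probe an endpoint in the background so startup never blocks on it."""

    def __init__(self, base_url: str, enabled: bool = True, timeout: float = 2.0):
        self.base_url = base_url
        self.enabled = enabled        # user-facing toggle from settings
        self.timeout = timeout
        self._reachable = None        # unknown until the probe finishes
        if enabled:
            threading.Thread(target=self._probe, daemon=True).start()

    def _probe(self) -> None:
        try:
            urllib.request.urlopen(self.base_url, timeout=self.timeout)
            self._reachable = True
        except Exception:
            self._reachable = False

    @property
    def usable(self) -> bool:
        # Disabled or not-yet-confirmed integrations are skipped without waiting.
        return self.enabled and bool(self._reachable)
```

The UI would render immediately and simply omit models from integrations whose probe has not (yet) succeeded.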
Current Workaround:
As a temporary workaround, I set the IP address to another online host, such as the gateway, but this is merely a temporary fix.
Thanks for delivering this impressive piece of software!
@justinh-rahb commented on GitHub (Mar 29, 2024):
I agree and have some similar situations in my home setups where various backends may not be available all the time.
@ScuttleSE commented on GitHub (Mar 31, 2024):
Similarly, I am running just the webui, using it as an alternate frontend to OpenAI. There doesn't seem to be a way to disable or remove the ollama integration.
In Settings > Connections, even if you delete the predefined Ollama URL http://host.docker.internal:11434 and click Save, it is back when you reopen the settings.
@justinh-rahb commented on GitHub (Mar 31, 2024):
@ScuttleSE it can't be removed. If it can't resolve that URL it just won't use it.
@ScuttleSE commented on GitHub (Mar 31, 2024):
It still generates errors in the log that it can't reach http://host.docker.internal:11434/
@lee-b commented on GitHub (Apr 6, 2024):
I've had a similar thought for offline use on phones etc. It would be great to fall back to even a relatively dumb local AI, but use the more powerful AI when available. Ideally the dumb AI would queue complicated requests and say "I'll think about that and get back to you" or something, then actually get back to you when the smarter AI is available. That's more of an agentic workflow that could come much later, but supporting the dumb local AI fallback would be a step in that direction.
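The fallback-and-queue idea could be sketched roughly like this (all names hypothetical; this is not an Open WebUI API): try backends in priority order, and park prompts that no backend can currently serve until one comes back online.

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Backend:
    name: str
    generate: Callable[[str], str]     # prompt -> completion
    is_available: Callable[[], bool]   # cheap reachability check


@dataclass
class FallbackRouter:
    """Route to the first available backend; queue prompts otherwise."""
    backends: List[Backend]
    pending: List[str] = field(default_factory=list)

    def ask(self, prompt: str) -> str:
        for backend in self.backends:
            if backend.is_available():
                return backend.generate(prompt)
        self.pending.append(prompt)    # defer until some backend returns
        return "I'll think about that and get back to you."

    def drain(self) -> List[str]:
        """Replay queued prompts, e.g. when the smart backend reappears."""
        queued, self.pending = self.pending, []
        return [self.ask(p) for p in queued]
```

A dumb local model would sit last in `backends` as the always-available fallback; prompts it judges too hard could be queued for the remote model via the same mechanism.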
@tjbck commented on GitHub (May 26, 2024):
Implemented on our dev branch! Let us know if you encounter any issues!