[GH-ISSUE #1288] feat: Make LLM integrations toggleable (Ollama, LiteLLM, ...?) #51095

Closed
opened 2026-05-05 11:57:25 -05:00 by GiteaMirror · 6 comments

Originally created by @qdrop17 on GitHub (Mar 25, 2024).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/1288

Originally assigned to: @tjbck on GitHub.

I host Open WebUI on a small server at home. I connect various inference providers to it, such as OpenRouter and OpenAI. Additionally, I have a gaming rig with an RTX 3090 GPU, which I only power on when needed due to its high energy consumption.

Problem:
When the gaming rig is offline, Open WebUI hits a timeout while loading tags, leaving the application on a blank white screen for 5-10 seconds. The problem appears to sit at the network layer: if the host is up but the Ollama endpoint is down, there is no timeout; the delay only occurs when the host itself is unreachable.
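To make that behavior concrete, here is a purely illustrative probe (the addresses are hypothetical, and none of this is Open WebUI code): a host that is up but not running Ollama refuses the TCP connection immediately, while a powered-off host leaves the connect attempt hanging until the timeout expires.

```python
# Illustrative probe of the behavior described above; addresses are made up.
import socket
import time

def probe(host: str, port: int = 11434, timeout: float = 5.0) -> None:
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            result = "connected"
    except ConnectionRefusedError:
        result = "refused immediately (host up, Ollama down)"
    except OSError:
        result = "timed out or unreachable (host powered off)"
    print(f"{host}:{port} -> {result} after {time.monotonic() - start:.1f}s")

probe("127.0.0.1")       # always-on server: connects or fails fast
probe("192.168.1.50")    # powered-off gaming rig: blocks until the timeout
```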

Desired Solution:
It would be beneficial if the different integrations could be toggled in the settings or if the Ollama endpoint could be lazy-loaded, preventing it from hindering the application's startup process.
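As a rough sketch of the lazy-loading idea (the names fetch_ollama_tags, OLLAMA_ENABLED and OLLAMA_BASE_URL are invented for illustration, not Open WebUI internals), the tag fetch could be guarded by a toggle and a short connect timeout, so an unreachable host degrades to an empty model list instead of blocking the UI:

```python
# Hypothetical sketch: guard the Ollama tag fetch with a toggle and a short
# connect timeout so an unreachable host cannot stall the UI at startup.
import asyncio
import aiohttp

OLLAMA_ENABLED = True                           # imagined settings toggle
OLLAMA_BASE_URL = "http://192.168.1.50:11434"   # the gaming rig

async def fetch_ollama_tags() -> list[dict]:
    if not OLLAMA_ENABLED:
        return []                               # integration switched off: skip entirely
    timeout = aiohttp.ClientTimeout(connect=2)  # fail fast if the host is powered off
    try:
        async with aiohttp.ClientSession(timeout=timeout) as session:
            async with session.get(f"{OLLAMA_BASE_URL}/api/tags") as resp:
                data = await resp.json()
                return data.get("models", [])
    except (aiohttp.ClientError, asyncio.TimeoutError):
        return []                               # host offline: degrade to an empty model list

if __name__ == "__main__":
    print(asyncio.run(fetch_ollama_tags()))
```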

Current Workaround:
As a workaround, I currently point the address at another host that is always online, such as the gateway, but this is only a stopgap.

Thanks for delivering this impressive piece of software!


@justinh-rahb commented on GitHub (Mar 29, 2024):

I agree and have some similar situations in my home setups where various backends may not be available all the time.


@ScuttleSE commented on GitHub (Mar 31, 2024):

Similarly, I am running just the webui, using it as an alternate frontend to OpenAI. There doesn't seem to be a way to disable or remove the ollama integration.

In Settings - Connections, even if you delete the predefined Ollama URL http://host.docker.internal:11434 and click Save, it reappears the next time you open Settings.


@justinh-rahb commented on GitHub (Mar 31, 2024):

@ScuttleSE it can't be removed. If it can't resolve that URL it just won't use it.


@ScuttleSE commented on GitHub (Mar 31, 2024):

It still generates errors in the log saying it can't reach http://host.docker.internal:11434/.


@lee-b commented on GitHub (Apr 6, 2024):

I've had a similar thought for offline use on phones etc. It would be great to fall back to even a relatively dumb local AI, but use the more powerful AI when available. Ideally the dumb AI would queue complicated requests and say "I'll think about that and get back to you" or something, then actually get back to you once the smarter AI is available, but that's more of an agentic workflow that could come much later. Supporting the dumb local AI fallback would be a step in that direction, though.

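A minimal sketch of that fallback idea, under my own assumptions (backend URLs and helper names are made up, and this is not how Open WebUI routes requests): probe the powerful backend first and route to a small always-on local model when it is unreachable.

```python
# Hypothetical fallback sketch: prefer the powerful remote backend when it
# is reachable, otherwise route requests to a small always-on local model.
import socket
from urllib.parse import urlparse

REMOTE_BACKEND = "http://192.168.1.50:11434"   # big GPU box, often powered off
LOCAL_BACKEND = "http://127.0.0.1:11434"       # small always-on model

def backend_is_up(url: str, timeout: float = 1.0) -> bool:
    """Cheap reachability probe: can we open a TCP connection at all?"""
    parsed = urlparse(url)
    try:
        with socket.create_connection((parsed.hostname, parsed.port or 80), timeout=timeout):
            return True
    except OSError:
        return False

def pick_backend() -> str:
    """Route to the remote backend when available, else fall back locally."""
    return REMOTE_BACKEND if backend_is_up(REMOTE_BACKEND) else LOCAL_BACKEND

print("Routing requests to:", pick_backend())
```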

@tjbck commented on GitHub (May 26, 2024):

![image](https://github.com/open-webui/open-webui/assets/25473318/8f5a2aca-0a7c-4bbc-b4b4-3a1cd78493fc)

Implemented on our dev branch! Let us know if you encounter any issues!

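For anyone wiring this up headlessly: assuming the toggle is also exposed through an environment variable along the lines of ENABLE_OLLAMA_API (that name is my assumption, please check the current docs), a backend would typically read it roughly like this:

```python
# Sketch of how a backend could read such a toggle from the environment.
# ENABLE_OLLAMA_API is an assumed variable name, not confirmed from the source.
import os

def env_flag(name: str, default: bool = True) -> bool:
    """Interpret common truthy/falsy strings from an environment variable."""
    value = os.environ.get(name)
    if value is None:
        return default
    return value.strip().lower() in ("1", "true", "yes", "on")

ENABLE_OLLAMA_API = env_flag("ENABLE_OLLAMA_API", default=True)

if not ENABLE_OLLAMA_API:
    print("Ollama integration disabled; skipping tag fetch and health checks.")
```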