[GH-ISSUE #693] feat: multiple OpenAI connections #27710

Closed
opened 2026-04-25 02:27:14 -05:00 by GiteaMirror · 8 comments
Owner

Originally created by @NinjaPerson24119 on GitHub (Feb 10, 2024).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/693

Originally assigned to: @tjbck on GitHub.

Is your feature request related to a problem? Please describe.

I really like your UI. And I want to like Ollama, but it refuses to keep my models loaded, so I've started using Llama.cpp server directly because I can't be losing 2 minutes every time I need a query.

I'd like to be able to continue using this UI, but I need to be able to call Llama.cpp server instead.

Describe the solution you'd like

Allow me to enter the Llama.cpp server IP and select it as a model.

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

I mean, I could just use a different UI, but I like this one.

Additional context
Add any other context or screenshots about the feature request here.

Nothing to add.

GiteaMirror added the enhancement, good first issue, help wanted, core labels 2026-04-25 02:27:15 -05:00
Author
Owner

@tjbck commented on GitHub (Feb 10, 2024):

Hi, just tested with `llama-cpp-python[server]` by modifying the OpenAI API Base URL value from Settings > External, seems to be working as intended. Let me know if you're encountering issues!

![image](https://github.com/ollama-webui/ollama-webui/assets/25473318/0aa1f14d-168d-46a3-8049-d5eaa2560653)

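The settings change above amounts to pointing Open WebUI's OpenAI client at the local server's base URL. As a minimal sketch of the same idea made directly with the Python standard library — assuming `llama-cpp-python`'s server is running on its default port 8000, and using placeholder model/prompt values:

```python
import json
from urllib import request

def build_chat_request(base_url: str, model: str, prompt: str) -> request.Request:
    """Build an OpenAI-style chat-completions request for any
    compatible server (llama-cpp-python, llama.cpp server, OpenAI)."""
    url = base_url.rstrip("/") + "/v1/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )

# Point at the local llama-cpp-python server instead of api.openai.com:
req = build_chat_request("http://localhost:8000", "local-model", "Hello")
# resp = request.urlopen(req)  # requires the server to actually be running
```

Swapping providers then reduces to swapping the `base_url` argument — which is exactly the manual step this issue asks to eliminate.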
Author
Owner

@NinjaPerson24119 commented on GitHub (Feb 10, 2024):

Hello!

I figured I could do that, but I want to be able to use OpenAI as well.

My workflow is something like:

  • Use GPT-4 for complex, short queries, but limit this because it's expensive
  • Use Llama.cpp for a large model I want to keep loaded
  • Use Ollama for smaller models I want to load on-demand

I need to be able to switch between providers without editing the URL; otherwise it's easier for me to just use VSCode with Continue. But that doesn't let me save prompts, use voice, etc.

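The per-task workflow described above is essentially a small connection registry: several OpenAI-compatible endpoints, each with its own base URL, selected per model. A hypothetical sketch of that shape (this is not Open WebUI's actual implementation; all names and URLs are placeholders):

```python
# Hypothetical registry of OpenAI-compatible connections, one per provider.
# The key values (port numbers, model names) are illustrative placeholders.
CONNECTIONS = {
    "gpt-4":        {"base_url": "https://api.openai.com", "api_key": "sk-placeholder"},
    "local-large":  {"base_url": "http://localhost:8080",  "api_key": None},  # llama.cpp server
    "small-models": {"base_url": "http://localhost:11434", "api_key": None},  # Ollama
}

def endpoint_for(model: str) -> str:
    """Resolve a model name to its provider's chat-completions URL."""
    conn = CONNECTIONS[model]
    return conn["base_url"].rstrip("/") + "/v1/chat/completions"
```

With a mapping like this, the UI's model selector can drive the provider choice, so no URL ever needs editing by hand.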
Author
Owner

@tjbck commented on GitHub (Feb 10, 2024):

I'll take a look and see how this could be natively implemented from the webui, but in the meantime, LiteLLM would be useful here. Related: #604

Author
Owner

@NinjaPerson24119 commented on GitHub (Feb 10, 2024):

Interesting. Using an adapter pattern seems like the winning move for development effort / maintenance. Since it's a standardization layer, though, I probably can't actually use it at this point unless the UI gives me a way to select models within that provider.

At a basic level, I think it'd be sufficient to allow adding APIs that are OpenAI compatible. Adding a simple + to make a list on the screen would do it.

This is what I'm able to do in Continue: https://continue.dev/docs/model-setup/configuration. And if I want to run something that's not compatible, I can write a proxy in Python pretty quickly.

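For the "write a proxy in Python" route mentioned above, a minimal stdlib sketch might look like this: accept OpenAI-format chat requests and forward a translated payload to some non-compatible backend. The upstream URL, its `{"prompt": ...}` format, and the port are all assumptions for illustration; response translation and error handling are omitted for brevity:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib import request

UPSTREAM = "http://localhost:9000/generate"  # hypothetical non-OpenAI backend

def to_upstream_payload(openai_body: dict) -> dict:
    """Translate an OpenAI chat request into the assumed backend format:
    flatten the messages list into a single prompt string."""
    prompt = "\n".join(m["content"] for m in openai_body.get("messages", []))
    return {"prompt": prompt}

class ProxyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length))
        upstream_req = request.Request(
            UPSTREAM,
            data=json.dumps(to_upstream_payload(body)).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        # Forward to the backend and relay its response bytes as-is.
        with request.urlopen(upstream_req) as resp:
            data = resp.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(data)

# Uncomment to serve; requires the assumed backend to be running:
# HTTPServer(("127.0.0.1", 8001), ProxyHandler).serve_forever()
```

A real proxy would also need to translate the backend's response back into the OpenAI response schema, but the skeleton above shows why "pretty quickly" is plausible.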
Author
Owner

@justinh-rahb commented on GitHub (Feb 10, 2024):

Hi @NinjaPerson24119, these are all great suggestions that we've been discussing recently in #604 — glad to hear there are others with similar ideas.

Author
Owner

@NinjaPerson24119 commented on GitHub (Feb 10, 2024):

Got this working locally https://github.com/ollama-webui/ollama-webui/pull/697 @tjbck @justinh-rahb

Author
Owner

@jukofyork commented on GitHub (Apr 10, 2024):

Sorry to bump an old thread, but can I ask whether you found anything missing when using the llama.cpp server's OpenAI-compatible endpoint vs Ollama?

I've sadly had my fill of Ollama bugs but do like using OpenWebUI.

How does it handle things like prompt templates, etc.? I assume the OpenAI API wouldn't need these customized — is it all handled server-side, with OpenWebUI using whatever it receives?

Author
Owner

@tjbck commented on GitHub (Apr 10, 2024):

@jukofyork I'm hoping to clear out quite a bit of the backlog starting later this week, and I will get back to this as well, so stay tuned!

I'll close this feature request, let's continue our discussion for direct llama.cpp integration here: #1483


Reference: github-starred/open-webui#27710