[GH-ISSUE #930] when an OpenAI-compatible reverse proxy URL contains 'openai', models without the 'gpt' string are not shown in the model dialog #12256
Originally created by @fbirlik on GitHub (Feb 26, 2024).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/930
Bug Report
Description
When an OpenAI-compatible API server is configured via 'OPENAI_API_BASE_URL' and that URL contains the string 'openai', all models whose IDs don't contain the string 'gpt' are hidden from the model menu.
Steps to Reproduce:
I'm using FastChat's OpenAI-compatible API server to serve local models, with nginx-proxy-manager as reverse proxy and SSL terminator. The URL for the API is:
--> https://openai.i.{mydomain.com}/v1
The actual host address being proxied:
--> http://ares.i.{mydomain.com}/v1
When 'OpenAI API Base URL' is set to the https://openai.i... address, models without the 'gpt' string in their ID disappear.
Culprit:
backend/apps/openai/main.py @ 187:
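Roughly, the check being described looks like this (a minimal sketch only; the names are illustrative assumptions, not the exact source at line 187):

```python
# Sketch of the filter described above. Any base URL containing the
# substring "openai" triggers it, which matches https://api.openai.com/v1
# but also any reverse proxy whose hostname happens to contain "openai".
def filter_models(base_url: str, models: list[dict]) -> list[dict]:
    if "openai" in base_url:
        # Keep only models whose id contains "gpt"; everything else
        # vanishes from the model menu.
        return [m for m in models if "gpt" in m["id"]]
    return models
```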
Expected Behavior:
The ability to disable the 'gpt' filter via a config/environment variable.
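One possible shape for such a toggle (a sketch only; ENABLE_GPT_MODEL_FILTER is a hypothetical variable name, not an existing open-webui option):

```python
import os

# Hypothetical environment toggle -- not an existing open-webui option,
# just an illustration of the requested behavior.
GPT_FILTER_ENABLED = (
    os.environ.get("ENABLE_GPT_MODEL_FILTER", "true").lower() == "true"
)

def filter_models(base_url: str, models: list[dict]) -> list[dict]:
    # Only apply the "gpt" filter when the toggle is on.
    if GPT_FILTER_ENABLED and "openai" in base_url:
        return [m for m in models if "gpt" in m["id"]]
    return models
```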
@justinh-rahb commented on GitHub (Feb 26, 2024):
I've never observed this happen. Before the built-in LiteLLM was merged, I had been using it externally as my "OpenAI" proxy with many models, both GPT and others, and they all showed in the list. I can't view the code on mobile, but I'll have a look at that snippet to see when it was added; it's possible I haven't noticed because I've switched to using the built-in LiteLLM.
@justinh-rahb commented on GitHub (Feb 26, 2024):
That snippet hasn't been touched since last month, so yeah, I'm unsure what this could be. As you can see, I've got multiple different models that do not have "gpt" in the model string.
I assume your OpenAI-compatible API requires an access key? AFAIK the OpenAI endpoint code won't activate in the backend unless there's a key set; if it's left blank, you'd get no models in your list. Some APIs, like LM Studio's, will accept anything you send as the key; others, like Textgen WebUI's, will give an error unless you've set that as the key for the server as well.

@tjbck commented on GitHub (Feb 27, 2024):
Should be an easy fix, feel free to make a PR!
@fbirlik commented on GitHub (Feb 27, 2024):
The issue occurs because the backend checks whether the 'OPENAI_API_BASE_URL' host contains 'openai' as a way to identify the official OpenAI endpoint. If the backend API URL does not include 'openai', there are no issues.
@tjbck I believe the reason for including this code was to filter out the embedding, DALL-E, Whisper, and TTS models returned by the official api.openai.com/v1/models endpoint, which can't be used by the chat UI.
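For context, the official api.openai.com/v1/models response includes IDs along these lines (an abridged, illustrative subset; the real list changes over time), and substring-matching on 'gpt' is what separates the chat models from the rest:

```python
# Abridged, illustrative subset of IDs from api.openai.com/v1/models;
# the real list changes over time.
model_ids = [
    "gpt-4",                   # chat model -- kept by the filter
    "gpt-3.5-turbo",           # chat model -- kept
    "text-embedding-ada-002",  # embeddings -- filtered out
    "dall-e-3",                # image generation -- filtered out
    "whisper-1",               # speech-to-text -- filtered out
    "tts-1",                   # text-to-speech -- filtered out
]

chat_models = [m for m in model_ids if "gpt" in m]
print(chat_models)  # ['gpt-4', 'gpt-3.5-turbo']
```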
@justinh-rahb commented on GitHub (Feb 27, 2024):
In this case, would it make more sense to expand the string matching to the whole FQDN, api.openai.com? I don't think it'll be changing.

@fbirlik commented on GitHub (Feb 28, 2024):
I believe it would be appropriate. I wasn't quite sure about self-hosted or Azure-hosted GPT endpoints, but open-webui has a dedicated Azure-specific configuration for that purpose, so it should be OK.
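A sketch of the narrowing discussed above (assuming a simple substring comparison against the full official hostname, rather than the bare 'openai' substring):

```python
def should_filter(base_url: str) -> bool:
    # Match the full official hostname instead of the bare "openai"
    # substring, so reverse-proxy URLs like
    # https://openai.i.example.com/v1 no longer trigger the filter.
    return "api.openai.com" in base_url

assert should_filter("https://api.openai.com/v1")
assert not should_filter("https://openai.i.example.com/v1")
```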