[GH-ISSUE #930] when an OpenAI-compatible reverse proxy contains 'openai' in its name, models without 'gpt' in the ID are not shown in the model dialog #12256

Closed
opened 2026-04-19 19:08:41 -05:00 by GiteaMirror · 6 comments

Originally created by @fbirlik on GitHub (Feb 26, 2024).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/930

Bug Report

Description

When an OpenAI-compatible API server is configured via 'OPENAI_API_BASE_URL' and the URL contains the string 'openai', all models whose IDs don't contain the string 'gpt' are hidden from the model menu.

Steps to Reproduce:
I'm using the FastChat OpenAI-compatible API server to serve local models, with nginx-proxy-manager as a reverse proxy and SSL terminator. The URL for the API is
--> https://openai.i.{mydomain.com}/v1
and the actual host address being proxied is
--> http://ares.i.{mydomain.com}/v1

When 'OpenAI API Base URL' is set to https://openai.i..., models without the 'gpt' string disappear.

Culprit:
backend/apps/openai/main.py @ 187:

```python
if "openai" in app.state.OPENAI_API_BASE_URL and path == "models":
    response_data["data"] = list(
        filter(lambda model: "gpt" in model["id"], response_data["data"])
    )
```

Expected Behavior:
Ability to disable the 'gpt' filter via a config/environment variable.
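A minimal sketch of the requested escape hatch, pulled out of the proxy handler into a standalone function for illustration. The `ENABLE_MODEL_FILTER` environment variable name and the `filter_models` helper are invented here, not part of the codebase:

```python
import os

# Hypothetical toggle; "ENABLE_MODEL_FILTER" is an invented name for illustration.
# Defaults to the current behavior (filter enabled) unless explicitly set to "false".
ENABLE_MODEL_FILTER = os.environ.get("ENABLE_MODEL_FILTER", "true").lower() == "true"


def filter_models(base_url: str, path: str, response_data: dict) -> dict:
    # Apply the 'gpt' filter only when the toggle is on and the URL looks
    # like the official OpenAI endpoint (the substring check from the issue).
    if ENABLE_MODEL_FILTER and "openai" in base_url and path == "models":
        response_data["data"] = [
            model for model in response_data["data"] if "gpt" in model["id"]
        ]
    return response_data
```

With the toggle left at its default, a URL containing 'openai' still drops non-GPT models, reproducing the reported behavior; setting `ENABLE_MODEL_FILTER=false` would leave the model list untouched.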


@justinh-rahb commented on GitHub (Feb 26, 2024):

I've never observed this happen. Before the built-in LiteLLM was merged I had been using it externally as my "OpenAI" proxy with many models, both GPT and others, and they all showed in the list. Can't view on mobile but I'll have a look at that code snippet to see when it was added, it's possible I haven't noticed because I've switched to using the built-in LiteLLM.


@justinh-rahb commented on GitHub (Feb 26, 2024):

That snippet hasn't been touched since last month, so yeah, I'm unsure what this could be. As you can see, I've got multiple different models that do not have "gpt" in the model string:

<img width="656" alt="Screenshot 2024-02-26 at 5 08 47 PM" src="https://github.com/open-webui/open-webui/assets/52832301/f5e849da-0be8-4b0d-93d9-152bc31a5ab7">

I assume your OpenAI-compatible API requires an `access_key`? AFAIK the OpenAI endpoint code won't activate in the backend unless there's a key set; if it's left blank you'd get no models in your list. Some APIs like LM Studio's will accept anything you send as the key, while others like Textgen WebUI's will give an error unless you've set the same key on the server as well.


@tjbck commented on GitHub (Feb 27, 2024):

Should be an easy fix, feel free to make a PR!


@fbirlik commented on GitHub (Feb 27, 2024):

The issue occurs when the 'OPENAI_API_BASE_URL' host contains 'openai' in its name, which the code uses to identify the official OpenAI endpoint. If the backend API URL does not include 'openai', there are no issues.

@tjbck I believe the reason for including this code piece was to filter out the embedding, dall-e, whisper, and tts models returned in the official api.openai.com/v1/models response, which the chat UI can't use.


@justinh-rahb commented on GitHub (Feb 27, 2024):

> Issue occurs when 'OPENAI_API_BASE_URL' host contains 'openai' in the name as a way to identify official openai endpoint. If backend api url does not include openai, no issues.
>
> @tjbck I believe the reason to include this code piece was to filter out embedding, dall-e, whisper, tts models that are returned from official api.openai.com/v1/models api response, which can't be used by the chat ui.

In this case would it make more sense to expand the string matching to the whole FQDN `api.openai.com`? I don't think it'll be changing.


@fbirlik commented on GitHub (Feb 28, 2024):

> > Issue occurs when 'OPENAI_API_BASE_URL' host contains 'openai' in the name as a way to identify official openai endpoint. If backend api url does not include openai, no issues.
> >
> > @tjbck I believe the reason to include this code piece was to filter out embedding, dall-e, whisper, tts models that are returned from official api.openai.com/v1/models api response, which can't be used by the chat ui.
>
> In this case would it make more sense to expand the string matching to the whole FQDN `api.openai.com`? I don't think it'll be changing.

I believe it would be appropriate. I wasn't quite sure about self-hosted or Azure-hosted GPT endpoints, but open-webui has dedicated Azure-specific configuration for that purpose, so it should be OK.
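The stricter check discussed above could be sketched as a hostname comparison against the official FQDN rather than a substring match, so reverse proxies with 'openai' in their name are no longer caught. This is an illustrative sketch, not the merged fix:

```python
from urllib.parse import urlparse


def is_official_openai(base_url: str) -> bool:
    # Parse the configured base URL and compare the hostname exactly
    # against the official API FQDN; substrings elsewhere in the URL
    # (e.g. "openai.i.mydomain.com") no longer trigger the filter.
    return urlparse(base_url).hostname == "api.openai.com"
```

Under this check, `https://api.openai.com/v1` would still get the GPT-only filter, while a proxy URL like `https://openai.i.{mydomain.com}/v1` would not.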

Reference: github-starred/open-webui#12256