[GH-ISSUE #945] Unable to get model list from litellm proxy #50927

Closed
opened 2026-05-05 11:33:53 -05:00 by GiteaMirror · 4 comments

Originally created by @shekhars-li on GitHub (Feb 27, 2024).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/945

Bug Report

Description

Bug Summary:
I have set up a LiteLLM proxy. Both containers (open-webui and the litellm proxy) are running properly. I can reach my litellm-proxy endpoint and query the model list from the open-webui container console (python interpreter -> get the model list by passing the master_key). However, when open-webui loads, the model list only shows the Ollama models and not the LiteLLM models I configured. I get this error:

webui-1         | http://openai-proxy:8000/v1/models sk-123456789qwerty
webui-1         | Error loading request body into a dictionary: Expecting value: line 1 column 1 (char 0)
webui-1         | INFO:     192.168.65.1:22222 - "GET /openai/api/models HTTP/1.1" 200 OK

It looks like the GET request succeeds but the JSON is not loaded properly.
I tried to fetch the JSON list myself from inside the open-webui container and the request is successful. Can you please help me fix this?
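
For context, this is roughly the in-container check described above: a minimal sketch, assuming the requests library is available and using the placeholder master key from the compose file below. (The "Expecting value: line 1 column 1 (char 0)" message is what Python's JSON parser raises when handed an empty or non-JSON body, which suggests the proxy's response never reached open-webui intact.)

import requests

# The LiteLLM proxy exposes an OpenAI-compatible /v1/models endpoint;
# the master key is passed as a Bearer token.
BASE_URL = "http://openai-proxy:8000/v1"   # internal container URL from the compose file
MASTER_KEY = "sk-123456789qwerty"          # placeholder master key

resp = requests.get(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {MASTER_KEY}"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # the configured anthropic-* models should be listed here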

Steps to Reproduce:
Here's my docker-compose.yml:

version: '3.9'

services:
  webui:
    image: ghcr.io/ollama-webui/ollama-webui:main
    environment:
      - "OLLAMA_API_BASE_URL=http://host.docker.internal:11434/api"
      - "OPENAI_API_BASE_URL=http://openai-proxy:8000/v1"
      - "OPENAI_API_KEY=sk-123456789qwerty"
    ports:
      - 3000:8080
      - 11434:11434
    volumes:
      - ./ollama-webui/data:/app/backend/data
    restart: unless-stopped

  openai-proxy:
    image: ghcr.io/berriai/litellm:main-latest
    environment:
      - "MASTER_KEY=sk-123456789qwerty"
      - "OPENAI_API_KEY=${OPENAI_API_KEY}"
      - "MISTRAL_API_KEY=${MISTRAL_API_KEY}"
      - "ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}"
    ports:
      - 4000:8000
    volumes:
      - ./litellm/config.yaml:/app/config.yaml
    command: [ "--config", "/app/config.yaml", "--port", "8000" ]
    restart: unless-stopped

Here's my litellm config:

model_list:
  - model_name: anthropic-claude-instant-1.2 
    litellm_params:
      model: anthropic/claude-instant-1.2
      api_key:"PLACEHOLDER_API_KEY" # this is correctly set
  - model_name: anthropic-claude-v2.1
    litellm_params: 
      model: "anthropic/claude-2.1"
      api_key: "PLACEHOLDER_API_KEY" # this is correctly set

litellm_settings: # module level litellm settings - https://github.com/BerriAI/litellm/blob/main/litellm/__init__.py
  drop_params: True
  set_verbose: True

general_settings: 
  master_key: sk-123456789qwerty

Expected Behavior:
The model list loads via the litellm proxy and includes the anthropic models.

Actual Behavior:
Error when trying to list all the models

Environment

  • Operating System: macOS 14.3.1

Reproduction Details

Confirmation:

  • I have read and followed all the instructions provided in the README.md.
  • I have reviewed the troubleshooting.md document.
  • I have included the browser console logs.
  • I have included the Docker container logs.

Logs and Screenshots

Browser Console Logs:
None

Docker Container Logs:

openai-proxy-1  | INFO:     172.19.0.2:42334 - "GET /v1/models HTTP/1.1" 200 OK
webui-1         | http://openai-proxy:8000/v1/models sk-123456789qwerty
webui-1         | Error loading request body into a dictionary: Expecting value: line 1 column 1 (char 0)
webui-1         | INFO:     192.168.65.1:22295 - "GET /openai/api/models HTTP/1.1" 200 OK

Screenshots (if applicable):
Screenshot: https://github.com/open-webui/open-webui/assets/72765053/9dd7e759-4ccf-4f8b-809a-439e57e408e8

Installation Method

Docker installation - shared docker compose file above.

Note

If the bug report is incomplete or does not follow the provided instructions, it may not be addressed. Please ensure that you have followed the steps outlined in the README.md and troubleshooting.md documents, and provide all necessary information for us to reproduce and address the issue. Thank you!


@justinh-rahb commented on GitHub (Feb 27, 2024):

Try changing OPENAI_API_BASE_URL=http://openai-proxy:8000/v1 to OPENAI_API_BASE_URL=http://host.docker.internal:4000/v1. I've had some issues where the LiteLLM proxy doesn't like to listen from the internal container network.
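
In compose terms, that suggestion amounts to changing a single environment line on the webui service; a sketch, assuming the 4000:8000 port mapping from the compose file above:

  webui:
    environment:
      # Reach the LiteLLM proxy through the host-mapped port instead of the
      # internal container network.
      - "OPENAI_API_BASE_URL=http://host.docker.internal:4000/v1"
      - "OPENAI_API_KEY=sk-123456789qwerty"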


@shekhars-li commented on GitHub (Feb 27, 2024):

Wow that was quick! Thank you so much @justinh-rahb! Really appreciate it. This fixed the issue, although I am not sure why. It seems like litellm could receive requests and send responses fine via the openai-proxy URL?


@justinh-rahb commented on GitHub (Feb 27, 2024):

Your guess is as good as mine, I didn't feel like debugging it so I just went with what worked at the time 😬 I've since removed the LiteLLM container from my WebUI stack because it's now built into the project and can be configured from the Settings > Models interface.


@shekhars-li commented on GitHub (Feb 27, 2024):

Thanks a lot for your help! :)

Reference: github-starred/open-webui#50927