[GH-ISSUE #562] litellm config file #12121

Closed
opened 2026-04-19 18:55:29 -05:00 by GiteaMirror · 1 comment

Originally created by @mafrasiabi on GitHub (Jan 24, 2024).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/562

@justinh-rahb Could you please share the litellm config file? I've been struggling to make it work, but no matter what I do, I can't see the models in the model list.

@justinh-rahb commented on GitHub (Jan 25, 2024):

@mafrasiabi I got everything I know about configuring this from their docs:
https://docs.litellm.ai/docs/proxy/configs

Assuming you're following my previously posted configs, this should work:

```yml
model_list:
  # model_name is the name exposed to clients via /v1/models;
  # model is the provider-prefixed identifier LiteLLM routes to.
  # "os.environ/<VAR>" tells LiteLLM to read the key from that environment variable.
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: openai/gpt-3.5-turbo-1106
      api_base: https://api.openai.com/v1
      api_key: "os.environ/OPENAI_API_KEY"
  - model_name: gpt-4-turbo
    litellm_params:
      model: openai/gpt-4-1106-preview
      api_base: https://api.openai.com/v1
      api_key: "os.environ/OPENAI_API_KEY"
  - model_name: mistral-small
    litellm_params:
      model: mistral/mistral-small
      api_base: https://api.mistral.ai/v1
      api_key: "os.environ/MISTRAL_API_KEY"
  - model_name: mistral-medium
    litellm_params:
      model: mistral/mistral-medium
      api_base: https://api.mistral.ai/v1
      api_key: "os.environ/MISTRAL_API_KEY"
  - model_name: claude-2.1
    litellm_params:
      model: claude-2.1
      api_key: "os.environ/ANTHROPIC_API_KEY"
  - model_name: claude-instant-1.2
    litellm_params:
      model: claude-instant-1.2
      api_key: "os.environ/ANTHROPIC_API_KEY"
  - model_name: cohere-command
    litellm_params:
      model: command
      api_key: "os.environ/COHERE_API_KEY"
  - model_name: cohere-command-light
    litellm_params:
      model: command-light
      api_key: "os.environ/COHERE_API_KEY"

general_settings:
  # master_key protects the proxy; clients must send it as a Bearer token.
  master_key: "os.environ/MASTER_KEY"
```
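
For what it's worth, here's a minimal sketch of how the proxy can be started with this config. The env var names match the `os.environ/` entries above; the file path, port, and key values are just placeholders:

```bash
# Export the keys referenced by the config (names must match the os.environ/ entries).
export OPENAI_API_KEY=sk-...
export MISTRAL_API_KEY=...
export ANTHROPIC_API_KEY=...
export COHERE_API_KEY=...
export MASTER_KEY=sk-litellm-master

# Start the LiteLLM proxy against the config file (path and port are placeholders).
litellm --config ./litellm-config.yaml --port 8000
```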

If you omit the `general_settings: master_key:` section, you can test LiteLLM with a simple cURL command from the ollama-webui container:

```bash
curl http://litellm:8000/v1/models
```

This will verify that the containers are communicating successfully. If not, try using this instead:
`http://host.docker.internal:publish_port/v1/models`
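
For example, assuming the LiteLLM container is published on host port 8000 (substitute your own published port for `publish_port`):

```bash
# From inside the ollama-webui container, via the Docker host gateway.
# Replace 8000 with whatever host port you publish for LiteLLM.
curl http://host.docker.internal:8000/v1/models
```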

Do note that the `master_key` is required for Ollama WebUI to use an OpenAI endpoint; with no key, Ollama WebUI will not query the `/v1/models` endpoint. It took me a while to figure that one out, and it should probably be added to the README.md if it's intentional, @tjbck. Otherwise I can open an issue if this is unexpected behaviour.
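
Once `master_key` is set, the proxy expects it as a Bearer token, and Ollama WebUI should be pointed at the proxy with that key as its OpenAI API key. A rough sketch, assuming the env var names below are what your ollama-webui deployment reads (values are placeholders):

```bash
# Authenticated check: LiteLLM rejects unauthenticated requests once master_key is set.
curl -H "Authorization: Bearer $MASTER_KEY" http://litellm:8000/v1/models

# Pointing Ollama WebUI at the proxy (image name and env var names are assumptions):
#   docker run ... \
#     -e OPENAI_API_BASE_URL=http://litellm:8000/v1 \
#     -e OPENAI_API_KEY=$MASTER_KEY \
#     <ollama-webui image>
```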
