issue: Cannot use OpenAI-compatible server API in Playground Completions mode #5458

Closed
opened 2025-11-11 16:21:34 -06:00 by GiteaMirror · 5 comments
Owner

Originally created by @secminhr on GitHub (Jun 6, 2025).

Check Existing Issues

  • I have searched the existing issues and discussions.
  • I am using the latest version of Open WebUI.

Installation Method

Git Clone

Open WebUI Version

v0.6.13 (dev branch, commit 0b84b22)

Ollama Version (if applicable)

No response

Operating System

macOS Sequoia

Browser (if applicable)

No response

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
  • Start with the initial platform/version/OS and dependencies used,
  • Specify exact install/launch/configure commands,
  • List URLs visited, user input (incl. example values/emails/passwords if needed),
  • Describe all options and toggles enabled or changed,
  • Include any files or environmental changes,
  • Identify the expected and actual result at each stage,
  • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

In Playground Completions mode, when the selected model comes from an OpenAI-compatible server, clicking the Run button should send the input prompt to that server's API and return its response.

Actual Behavior

When the selected model is from an OpenAI-compatible server, the Run button sends the request to http://localhost:{backend_port}/api/chat/completions instead of the model's server URL, so only built-in models work.

Steps to Reproduce

  1. Navigate to the repo directory.
  2. Add WEBUI_AUTH=false to .env to use single-user mode.
  3. Start frontend with npm run dev.
  4. Navigate to backend and start backend with ./dev.sh.
  5. Open the webpage, go to Playground and switch to Completions mode.
  6. Select a model from an OpenAI-compatible server on the local machine.
  7. Type a prompt into the text area.
  8. Click the Run button; nothing happens.

The Python environment running the backend was created via conda, following the instructions in the documentation.

Logs & Screenshots

  • Browser log (right after clicking Run button)
POST http://localhost:8080/api/chat/completions 400 (Bad Request)
window.fetch @ fetcher.js?v=3f93747d:66
chatCompletion @ index.ts:341
textCompletionHandler @ Completions.svelte:43
submitHandler @ Completions.svelte:100
click_handler @ Completions.svelte:168
  • Backend log (right after clicking Run button)
2025-06-06 11:56:09.031 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:61712 - "POST /api/chat/completions HTTP/1.1" 400 - {}

Additional Information

Modified .env for your reference:

# Ollama URL for the backend to connect
# The path '/ollama' will be redirected to the specified backend URL
OLLAMA_BASE_URL='http://localhost:11434'

OPENAI_API_BASE_URL=''
OPENAI_API_KEY=''

# AUTOMATIC1111_BASE_URL="http://localhost:7860"

# DO NOT TRACK
SCARF_NO_ANALYTICS=true
DO_NOT_TRACK=true
ANONYMIZED_TELEMETRY=false

WEBUI_AUTH=false

A fix I've applied on my local machine: add a model.direct check in src/lib/components/playground/Completions.svelte, and update the URL and token arguments passed to chatCompletion accordingly.
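The local fix described above might look roughly like the following sketch. This is hypothetical: the `connection` object and its `url`/`key` fields are assumed names based on the issue discussion, not verified against the Open WebUI source; only `model.direct` and `chatCompletion` are mentioned in the thread.

```typescript
// Hypothetical sketch: route a Playground completion request either to
// the model's own OpenAI-compatible server (Direct models) or to the
// local backend (built-in/External models). Field names are assumptions.

interface Connection {
  url: string;
  key: string;
}

interface Model {
  id: string;
  direct?: boolean;
  connection?: Connection;
}

const WEBUI_BASE_URL = "http://localhost:8080";

function resolveCompletionTarget(
  model: Model,
  localToken: string
): { url: string; token: string } {
  if (model.direct && model.connection) {
    // Direct model: send the request straight to the OpenAI-compatible
    // server it was registered with in Settings/Connections.
    return { url: model.connection.url, token: model.connection.key };
  }
  // Built-in / External model: go through the local backend API,
  // which is what the current code always does.
  return { url: `${WEBUI_BASE_URL}/api`, token: localToken };
}
```

With this check in place, `chatCompletion` would be called with the Direct model's own URL and key instead of unconditionally posting to `/api/chat/completions`.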

GiteaMirror added the bug label 2025-11-11 16:21:34 -06:00

@tjbck commented on GitHub (Jun 6, 2025):

@ayanahye


@ayanahye commented on GitHub (Jun 8, 2025):

Hello, I wanted to clarify: why are you setting OPENAI_API_BASE_URL to an empty string? If you set it to the port your OpenAI-compatible provider is listening on, it should work. Please let me know, thank you!


@secminhr commented on GitHub (Jun 9, 2025):

I added the OpenAI-compatible server via Settings/Connections. I didn't change OPENAI_API_BASE_URL because normal chat works without that additional environment variable, so I assumed Playground/Completions would too.

After experimenting with OPENAI_API_BASE_URL, I found that if the URL is provided via OPENAI_API_BASE_URL, the model is marked External in the model selector; if the URL is provided via Settings/Connections, it is marked Direct. It seems that Playground/Completions only supports External models, while normal chat supports both.
I tested this with two servers on my local machine (serving different models): one added in .env, the other via Settings/Connections.

Is it intended that only External models are supported? If so, it would be difficult for non-admin users to use Playground/Completions with their own OpenAI-compatible servers.
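The External vs. Direct distinction described above can be sketched as follows. This is an illustration of the observed behavior, not the actual Open WebUI code; the `direct` field is the one mentioned earlier in the thread, and everything else is assumed.

```typescript
// Hypothetical classification mirroring the behavior observed in the
// issue: admin-configured OPENAI_API_BASE_URL models appear as
// "External" (proxied by the backend), while per-user models added via
// Settings/Connections carry a `direct` flag and appear as "Direct".
type ConnectionKind = "external" | "direct";

function connectionKind(model: { direct?: boolean }): ConnectionKind {
  // Direct models are not proxied by the backend, so the Playground's
  // unconditional POST to /api/chat/completions fails with a 400.
  return model.direct ? "direct" : "external";
}
```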


@ayanahye commented on GitHub (Jun 10, 2025):

Hello, thank you for your response. Yes, you are right. Currently, Open WebUI supports External Connections, but Direct connections are not fully supported yet. We are updating the Docs with more information. Thank you!


@secminhr commented on GitHub (Jun 11, 2025):

Got it. Thank you.


Reference: github-starred/open-webui#5458