issue: Ollama proxy for POST /api/show endpoint returns Status 422 with correct parameters #5699

Closed
opened 2025-11-11 16:30:13 -06:00 by GiteaMirror · 9 comments
Owner

Originally created by @gmacario on GitHub (Jul 4, 2025).

Check Existing Issues

  • I have searched the existing issues and discussions.
  • I am using the latest version of Open WebUI.

Installation Method

Docker

Open WebUI Version

v0.6.15

Ollama Version (if applicable)

v0.9.3

Operating System

Ubuntu 24.04

Browser (if applicable)

N.A.

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
  • Start with the initial platform/version/OS and dependencies used,
  • Specify exact install/launch/configure commands,
  • List URLs visited, user input (incl. example values/emails/passwords if needed),
  • Describe all options and toggles enabled or changed,
  • Include any files or environmental changes,
  • Identify the expected and actual result at each stage,
  • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

According to the Ollama API documentation at https://ollama.readthedocs.io/en/api/#show-model-information,
the endpoint POST /api/show expects the following parameters:

  • model: name of the model to show
  • verbose: (optional) if set to true, returns full data for verbose response fields

This is confirmed by invoking the Ollama API directly.
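For illustration, the documented request shape can be built in a few lines of Python. This is a minimal stdlib-only sketch (my addition, not part of the original report); the host URL and model name are the example values used later in this report:

```python
import json
import urllib.request

# Example values from this report; adjust to your own Ollama host.
OLLAMA_SHOW_URL = "http://10.1.204.21:11434/api/show"

def build_show_request(model: str, verbose: bool = False) -> urllib.request.Request:
    """Build a POST /api/show request with the documented parameters."""
    payload = {"model": model}
    if verbose:
        payload["verbose"] = True
    return urllib.request.Request(
        OLLAMA_SHOW_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_show_request("llama3.2")
# urllib.request.urlopen(req) would then return the model details as JSON
```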

According to the section "Ollama API Proxy Support" documented at https://docs.openwebui.com/getting-started/api-endpoints/#-ollama-api-proxy-support, this API endpoint should also be available through Open WebUI at http://<openwebui_host>:<openwebui_port>/ollama/api/show

Actual Behavior

Instead of returning Status 200, the Ollama API POST /api/show, invoked with the correct parameters but proxied through Open WebUI, returns Status 422 with the following error:

{
  "detail": [
    {
      "type": "model_attributes_type",
      "loc": [
        "body"
      ],
      "msg": "Input should be a valid dictionary or object to extract fields from",
      "input": "{\"model\": \"llama3.2\"}"
    }
  ]
}

This behaviour breaks many Ollama clients that use the Open WebUI proxy, most notably the
Copilot Chat extension for VS Code (https://github.com/microsoft/vscode-copilot-chat).

Steps to Reproduce

Log in to the host where

  • Open WebUI is installed in a Docker container and listens to http://localhost:3000
  • Ollama can be accessed at http://10.1.204.21:11434

then inspect an installed model (example: llama3.2) by calling either Ollama directly

curl -X POST -d '{"model": "llama3.2"}' http://10.1.204.21:11434/api/show > model.out
cat model.out && echo

or through the Ollama API Proxy provided by Open WebUI (for security reasons I partially obfuscated my API Key):

curl -X POST -H "Authorization: Bearer sk-5********************************a09d" -d '{"model": "llama3.2"}' http://localhost:3000/ollama/api/show > model.out
cat model.out && echo

Logs & Screenshots

Results invoking Ollama API directly with the same parameters --> OK (NOTE: I trimmed the contents of model.out for clarity)

gmacario@hw2482:~$ curl -X POST -d '{"model": "llama3.2"}' http://10.1.204.21:11434/api/show > model.out
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 49703    0 49682  100    21   517k    224 --:--:-- --:--:-- --:--:--  516k
gmacario@hw2482:~$ cat model.out && echo
{"license":"LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\nLlama 3.2 Version Release Date: September 25, 2024\n\n“Agreement” means (...)
gmacario@hw2482:~$

Results invoking Ollama API through Open WebUI --> ERROR

gmacario@hw2482:~$ curl -X POST -H "Authorization: Bearer sk-5********************************a09d" -d '{"model": "llama3.2"}' http://localhost:3000/ollama/api/show > model.out
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   193  100   172  100    21   7979    974 --:--:-- --:--:-- --:--:--  9190
gmacario@hw2482:~$ cat model.out && echo
{"detail":[{"type":"model_attributes_type","loc":["body"],"msg":"Input should be a valid dictionary or object to extract fields from","input":"{\"model\": \"llama3.2\"}"}]}
gmacario@hw2482:~$

Excerpt from docker logs -f open-webui

2025-07-04 09:31:45.823 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 172.20.0.1:33548 - "POST /ollama/api/show HTTP/1.1" 422 - {}

Additional Information

No response

GiteaMirror added the bug label 2025-11-11 16:30:13 -06:00

@gmacario commented on GitHub (Jul 4, 2025):

Ensuring that the Content-Type of the request is "application/json":

curl -v -X POST -H "Content-Type: application/json" -H "Authorization: Bearer sk-5********************************a09d" -d '{"model": "llama3.2:latest"}' http://localhost:3000/ollama/api/show > model.out
cat model.out | jq .

Result:

gmacario@hw2482:~$ curl -v -X POST -H "Content-Type: application/json" -H "Authorization: Bearer sk-5********************************a09d" -d '{"model": "llama3.2:latest"}' http://localhost:3000/ollama/api/show > model.out
Note: Unnecessary use of -X or --request, POST is already inferred.
* Host localhost:3000 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0*   Trying [::1]:3000...
* Connected to localhost (::1) port 3000
> POST /ollama/api/show HTTP/1.1
> Host: localhost:3000
> User-Agent: curl/8.5.0
> Accept: */*
> Content-Type: application/json
> Authorization: Bearer sk-5********************************a09d
> Content-Length: 28
> 
} [28 bytes data]
< HTTP/1.1 422 Unprocessable Entity
< date: Fri, 04 Jul 2025 09:46:45 GMT
< server: uvicorn
< content-length: 112
< content-type: application/json
< x-process-time: 0
< 
{ [112 bytes data]
100   140  100   112  100    28   5285   1321 --:--:-- --:--:-- --:--:--  6666
* Connection #0 to host localhost left intact
gmacario@hw2482:~$ cat model.out | jq .
{
  "detail": [
    {
      "type": "missing",
      "loc": [
        "body",
        "name"
      ],
      "msg": "Field required",
      "input": {
        "model": "llama3.2:latest"
      }
    }
  ]
}
gmacario@hw2482:~$
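Taken together, the two 422 responses point at two separate problems: without Content-Type: application/json the body apparently reaches validation as a plain string (hence "Input should be a valid dictionary or object", with the raw string echoed in "input"), and even as parsed JSON the proxy's request model requires a field called name rather than model. The following stdlib-only function is a rough mimic of that behaviour for illustration only; it is an editorial stand-in, not the actual Open WebUI/Pydantic code:

```python
import json

REQUIRED_FIELD = "name"  # what the proxy's request model evidently requires

def mimic_show_validation(raw_body: str, content_type: str):
    """Roughly mimic the two 422 errors seen above; returns None on success."""
    if content_type != "application/json":
        # Without the JSON content type the body is not parsed, so
        # validation sees a plain string instead of a dictionary.
        return {"type": "model_attributes_type", "loc": ["body"],
                "msg": "Input should be a valid dictionary or object to extract fields from",
                "input": raw_body}
    body = json.loads(raw_body)
    if REQUIRED_FIELD not in body:
        # Parsed fine, but the expected field name is missing.
        return {"type": "missing", "loc": ["body", REQUIRED_FIELD],
                "msg": "Field required", "input": body}
    return None  # request accepted
```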

@gmacario commented on GitHub (Jul 4, 2025):

It looks like the request is handled correctly if the parameter model is instead named name:

curl -v -X POST -H "Content-Type: application/json" -H "Authorization: Bearer sk-5********************************a09d" -d '{"name": "llama3.2:latest"}' http://localhost:3000/ollama/api/show > model.out
cat model.out | jq .

Result:

gmacario@hw2482:~$ curl -v -X POST -H "Content-Type: application/json" -H "Authorization: Bearer sk-5********************************a09d" -d '{"name": "llama3.2:latest"}' http://localhost:3000/ollama/api/show > model.out
Note: Unnecessary use of -X or --request, POST is already inferred.
* Host localhost:3000 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0*   Trying [::1]:3000...
* Connected to localhost (::1) port 3000
> POST /ollama/api/show HTTP/1.1
> Host: localhost:3000
> User-Agent: curl/8.5.0
> Accept: */*
> Content-Type: application/json
> Authorization: Bearer sk-5********************************a09d
> Content-Length: 27
> 
} [27 bytes data]
< HTTP/1.1 200 OK
< date: Fri, 04 Jul 2025 09:48:25 GMT
< server: uvicorn
< content-length: 49270
< content-type: application/json
< vary: Accept-Encoding
< x-process-time: 0
< 
{ [14480 bytes data]
100 49297  100 49270  100    27   273k    153 --:--:-- --:--:-- --:--:--  275k
* Connection #0 to host localhost left intact
gmacario@hw2482:~$ cat model.out | jq .
{
  "license": "LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\nLlama 3.2 Version Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions for use, reproduction, distribution \nand modification of the Llama Materials set forth herein.\n\n“Documentation”
...
  "capabilities": [
    "completion",
    "tools"
  ],
  "modified_at": "2025-06-29T09:42:39.550443328Z"
}
gmacario@hw2482:~$ 

Excerpt from docker logs -f open-webui:

2025-07-04 10:04:28.600 | INFO     | open_webui.routers.ollama:get_all_models:334 - get_all_models() - {}
2025-07-04 10:04:28.756 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 172.20.0.1:58574 - "POST /ollama/api/show HTTP/1.1" 200 - {}

@rgaricano commented on GitHub (Jul 4, 2025):

Confirmed: the Open WebUI /ollama/api/show endpoint responds to
{"name": "llama3.2:latest"}
but not to
{"model": "llama3.2:latest"}

It should be adapted to match the Ollama API.
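One possible adaptation is to accept both spellings and normalize the payload before validating or forwarding it. The helper below is a hypothetical sketch of that idea (my addition, not actual Open WebUI code):

```python
def normalize_show_payload(body: dict) -> dict:
    """Accept both the current 'model' key and the legacy 'name' key.

    Hypothetical helper: fills in whichever of the two keys is missing
    so that either client spelling reaches the handler intact.
    """
    normalized = dict(body)  # avoid mutating the caller's dict
    if "name" not in normalized and "model" in normalized:
        normalized["name"] = normalized["model"]
    if "model" not in normalized and "name" in normalized:
        normalized["model"] = normalized["name"]
    return normalized
```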


@rgaricano commented on GitHub (Jul 4, 2025):

@gmacario: you can see a Swagger UI with all endpoints at http://yourOpenWebUI/docs
(in dev mode)


@rgaricano commented on GitHub (Jul 4, 2025):

The same applies to other Ollama endpoints, such as unload, delete, or pull.

I'll try testing a change from name to model in
https://github.com/open-webui/open-webui/blob/59ba21bdf8eb791a412db869a13ff76c6135b651/backend/open_webui/routers/ollama.py#L636-L637


@gmacario commented on GitHub (Jul 4, 2025):

The same applies to other Ollama endpoints, such as unload, delete, or pull.

I'll try testing a change from name to model in
https://github.com/open-webui/open-webui/blob/59ba21bdf8eb791a412db869a13ff76c6135b651/backend/open_webui/routers/ollama.py#L636-L637

I was trying the same in https://github.com/gmacario/open-webui/blob/gmacario-fix-api-show/backend/open_webui/routers/ollama.py but I am just learning how to set up the development environment :-)


@gmacario commented on GitHub (Jul 4, 2025):

The same applies to other Ollama endpoints, such as unload, delete, or pull.

It looks like sometime around November 2024 Ollama renamed the request parameter from name to model for the following API endpoints:

  • POST /api/create
  • POST /api/show
  • DELETE /api/delete
  • POST /api/pull
  • POST /api/push

At least this is what I have inferred from https://github.com/ollama/ollama/pull/7731/files
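Given that rename, a client talking to an older-style server (or to a proxy that still expects the old spelling) could fall back on a 422. This is a hedged workaround sketch of my own, where send is a caller-supplied function (hypothetical) that POSTs the given JSON payload to /api/show and returns (status_code, response_body):

```python
def show_model_with_fallback(send, model: str):
    """Try the post-rename 'model' key first; on a 422 retry with
    the pre-rename 'name' key. Returns whatever `send` returns."""
    status, resp = send({"model": model})
    if status == 422:
        # Server/proxy rejected the new spelling; retry with the old one.
        status, resp = send({"name": model})
    return status, resp
```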


@rgaricano commented on GitHub (Jul 4, 2025):

How do you run Open WebUI (cloned, Docker, pip)?
(This is best discussed on the Discord channel https://discord.gg/3jZam3YK )

but I am just learning how to set up the development environment :-)

I do it the easy way: clone and build:

npm install
NODE_OPTIONS=--max_old_space_size=12000  npm run build
cd backend
pip install -r requirements.txt

then start Open WebUI with backend/start.sh
(I run it as a service).

If I change something in src I rebuild (NODE_OPTIONS=--max_old_space_size=12000 npm run build);
if the changes are in the backend I just restart Open WebUI.

I follow the code on GitHub and edit my local copy with nano.


@gmacario commented on GitHub (Jul 5, 2025):

Fix via https://github.com/open-webui/open-webui/pull/15527

Reference: github-starred/open-webui#5699