[GH-ISSUE #16076] issue: llama4:latest tool calling results in JSON being sent back to the user #56441

Closed
opened 2026-05-05 19:26:00 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @vrurg on GitHub (Jul 27, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/16076

Check Existing Issues

  • I have searched the existing issues and discussions.
  • I am using the latest version of Open WebUI.

Installation Method

Docker

Open WebUI Version

0.6.18

Ollama Version (if applicable)

0.9.6

Operating System

macOS Sequoia 15.5

Browser (if applicable)

Firefox 141

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
  • Start with the initial platform/version/OS and dependencies used,
  • Specify exact install/launch/configure commands,
  • List URLs visited, user input (incl. example values/emails/passwords if needed),
  • Describe all options and toggles enabled or changed,
  • Include any files or environmental changes,
  • Identify the expected and actual result at each stage,
  • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

The tool should be called the same way as it happens for other tool-supporting models.

Actual Behavior

I'm developing an MCP/OpenAPI server. The first tool I use as a playground to study the technology is a web fetch. The tool works well with models like qwen3, llama3.3, llama3.1 – but fails with llama4. What happens is that, given a simple request like "Give me a summary of the following page: https://tinyurl.com/3tumnxb6", the chat responds with what appears to be the JSON body of the tool call, like the following:

{"type": "function", "name": "web_fetch", "parameters": {"url": "https://tinyurl.com/..."}}

Model's "Function Calling" parameter is set to "Native". With "Default" the model just fails to discover the tool.

A curl test against Ollama ended with a successful outcome, at least to my naive eye:

curl http://localhost:11434/api/chat -d '{
  "model": "llama4",
  "messages": [
    { "role": "user", "content": "Call the web_fetch tool on https://example.com" }
  ],
  "stream": false,
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "web_fetch",
        "description": "Fetch content from a URL",
        "parameters": {
          "type": "object",
          "properties": {
            "url": { "type": "string", "description": "The URL to fetch" }
          },
          "required": ["url"]
        }
      }
    }
  ]
}'
{"model":"llama4","created_at":"2025-07-27T22:35:10.450477Z","message":{"role":"assistant","content":"","tool_calls":[{"function":{"name":"web_fetch","arguments":{"url":"https://example.com"}}}]},"done_reason":"stop","done":true,"total_duration":18042806333,"load_duration":13455181125,"prompt_eval_count":660,"prompt_eval_duration":3870152209,"eval_count":20,"eval_duration":658503125}

Steps to Reproduce

  1. Open WebUI installation is done by pulling the Docker image from the Hub.
  2. Connected to the local Ollama process with http://host.docker.internal:11434 URL.
  3. The MCP/OpenAPI service is registered similarly, with http://host.docker.internal:8000 URL in the Admin Panel Settings, Tools page.
  4. The registered service is then enabled in the model's settings. The "Function Calling" parameter is set to "Native".
  5. Tests with qwen3, llama3.3, and llama3.1 are successful. The models receive the final page content and provide a correct summary.
  6. Test with llama4 – the above-mentioned JSON is returned. An eye-check of the chat parameters confirms that the tool is enabled.

Logs & Screenshots

Docker logs show no irregularities:

2025-07-27 23:47:31.489 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 172.18.0.1:63704 - "POST /api/v1/chats/bda58a69-0d73-4dbc-9844-d78dd9e22ed2 HTTP/1.1" 200 - {}
2025-07-27 23:47:31.499 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 172.18.0.1:63704 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 - {}
2025-07-27 23:47:31.507 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 172.18.0.1:63704 - "GET /api/v1/folders/ HTTP/1.1" 200 - {}
2025-07-27 23:47:50.376 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 172.18.0.1:63710 - "POST /api/chat/completions HTTP/1.1" 200 - {}
2025-07-27 23:47:50.414 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 172.18.0.1:63710 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 - {}
2025-07-27 23:47:50.420 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 172.18.0.1:63710 - "GET /api/v1/folders/ HTTP/1.1" 200 - {}
2025-07-27 23:47:51.517 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 172.18.0.1:63710 - "POST /api/chat/completed HTTP/1.1" 200 - {}
2025-07-27 23:47:51.526 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 172.18.0.1:63710 - "POST /api/v1/chats/bda58a69-0d73-4dbc-9844-d78dd9e22ed2 HTTP/1.1" 200 - {}
2025-07-27 23:47:51.532 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 172.18.0.1:63710 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 - {}
2025-07-27 23:47:51.541 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 172.18.0.1:63710 - "GET /api/v1/folders/ HTTP/1.1" 200 - {}
2025-07-27 23:47:56.458 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 172.18.0.1:63710 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 - {}
2025-07-27 23:47:56.464 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 172.18.0.1:63710 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 - {}
2025-07-27 23:47:56.468 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 172.18.0.1:63710 - "GET /api/v1/folders/ HTTP/1.1" 200 - {}
2025-07-27 23:47:56.472 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 172.18.0.1:63710 - "GET /api/v1/folders/ HTTP/1.1" 200 - {}
2025-07-27 23:47:58.758 | INFO     | open_webui.models.chats:count_chats_by_tag_name_and_user_id:871 - Count of chats for tag 'technology': 6 - {}
2025-07-27 23:47:58.761 | INFO     | open_webui.models.chats:count_chats_by_tag_name_and_user_id:871 - Count of chats for tag 'web_content': 1 - {}
2025-07-27 23:47:58.763 | INFO     | open_webui.models.chats:count_chats_by_tag_name_and_user_id:871 - Count of chats for tag 'information_retrieval': 2 - {}
2025-07-27 23:47:58.780 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 172.18.0.1:63710 - "GET /api/v1/chats/bda58a69-0d73-4dbc-9844-d78dd9e22ed2 HTTP/1.1" 200 - {}
2025-07-27 23:47:58.787 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 172.18.0.1:63710 - "GET /api/v1/chats/all/tags HTTP/1.1" 200 - {}
2025-07-27 23:48:08.826 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 172.18.0.1:62474 - "GET /_app/version.json HTTP/1.1" 304 - {}

Additional Information

No response

GiteaMirror added the bug label 2026-05-05 19:26:00 -05:00
Author
Owner

@tjbck commented on GitHub (Jul 28, 2025):

I presume this is a model template issue and must be addressed on Ollama repo.


Reference: github-starred/open-webui#56441