[GH-ISSUE #16834] issue: Intermittent Follow-up Prompts #56731

Closed
opened 2026-05-05 20:01:12 -05:00 by GiteaMirror · 1 comment

Originally created by @cma2t3r on GitHub (Aug 22, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/16834

Check Existing Issues

  • I have searched the existing issues and discussions.
  • I am using the latest version of Open WebUI.

Installation Method

Docker

Open WebUI Version

v0.6.22 - v0.6.25

Ollama Version (if applicable)

No response

Operating System

Ubuntu 22.04

Browser (if applicable)

Chrome, Brave, Edge

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
      - Start with the initial platform/version/OS and dependencies used,
      - Specify exact install/launch/configure commands,
      - List URLs visited, user input (incl. example values/emails/passwords if needed),
      - Describe all options and toggles enabled or changed,
      - Include any files or environmental changes,
      - Identify the expected and actual result at each stage,
      - Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

Follow-up prompts are consistently appended to the chat so one can be chosen.

Actual Behavior

Follow-up prompts only rarely get appended to the chat.

Steps to Reproduce

I am uncertain how to recreate the issue; disabling all Ollama endpoints under Admin Settings > Connections may be a factor.
I was hoping a newer version would clear up the intermittent problem.
I am testing LM Studio as an OpenAI-compatible connection, and the logs show the follow-up prompts are generated, but they rarely appear in the chat session regardless of browser (Chrome, Brave, Edge).
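
To isolate the server side, here is a minimal reproduction sketch that replays the follow-up generation request (logged below) directly against LM Studio's OpenAI-compatible endpoint, bypassing Open WebUI entirely. The base URL and the stand-in task prompt are assumptions: LM Studio's default local port is 1234, and the real task prompt is truncated in the logs.

# Hypothetical reproduction sketch: replay the logged follow-up request
# directly against LM Studio. BASE_URL assumes LM Studio's default local
# server port (1234); the prompt below is a stand-in, since the real task
# prompt is truncated in the logs.
import requests

BASE_URL = "http://localhost:1234"

payload = {
    "model": "qwen/qwen3-4b-2507",
    "messages": [
        {
            "role": "user",
            "content": "### Task:\nSuggest 3-5 relevant follow-up questions for this chat.",
        }
    ],
    "stream": False,
}

resp = requests.post(f"{BASE_URL}/v1/chat/completions", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])

If this consistently returns a JSON array of questions, the problem sits between Open WebUI receiving that response and rendering it in the chat.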

Logs & Screenshots

Screenshot: https://github.com/user-attachments/assets/7c5ffe05-6fa3-4d14-aed3-be3602301737

LM Studio log:

2025-08-22 12:14:14 [DEBUG]
Received request: POST to /v1/chat/completions with body {
  "model": "qwen/qwen3-4b-2507",
  "messages": [
    {
      "role": "user",
      "content": "### Task:\nSuggest 3-5 relevant follow-up questions... <Truncated in logs> ...re! 👋 How can I assist you today?\n</chat_history>"
    }
  ],
  "stream": false
}
2025-08-22 12:14:14 [INFO]
[LM STUDIO SERVER] Running chat completion on conversation with 1 messages.

___ useless log data removed from here ___

2025-08-22 12:14:17 [INFO]
[qwen/qwen3-4b-2507] Generated prediction: {
  "id": "chatcmpl-ddzmv5q8eya48y6ln116jg",
  "object": "chat.completion",
  "created": 1755879254,
  "model": "qwen/qwen3-4b-2507",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "[\n \"What are some common mistakes people make when starting a new project?\",\n \"Can you give me examples of how to set realistic goals?\",\n \"How can I stay motivated when progress feels slow?\",\n \"What tools or resources do you recommend for project planning?\"\n]",
        "reasoning_content": "",
        "tool_calls": []
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 262,
    "completion_tokens": 57,
    "total_tokens": 319
  },
  "stats": {},
  "system_fingerprint": "qwen/qwen3-4b-2507"
}
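
Note the shape of this response: the follow-up questions come back as a JSON array serialized inside the assistant message's content string, so the client has to parse that string before anything can be rendered. Below is a minimal sketch of that step, an illustration of one plausible failure mode, not Open WebUI's actual code.

# Illustration only -- not Open WebUI's implementation. The follow-ups
# arrive as a JSON array serialized into the "content" string and must
# be parsed out before they can be shown.
import json

# Abbreviated version of the logged response body above.
response = {
    "choices": [
        {
            "message": {
                "content": "[\n \"What are some common mistakes people make when starting a new project?\",\n \"Can you give me examples of how to set realistic goals?\"\n]"
            }
        }
    ]
}

content = response["choices"][0]["message"]["content"]
try:
    follow_ups = json.loads(content)  # expects a bare JSON array of strings
except json.JSONDecodeError:
    # Any extra prose, code fences, or reasoning text around the array
    # breaks strict parsing, and the follow-ups would be dropped even
    # though they were generated.
    follow_ups = []

for question in follow_ups:
    print("-", question)

Here the logged content is a clean array, so a strict parse would succeed; intermittent extra text from the model is only one hypothesis for why generated prompts rarely reach the chat.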

Browser console log:

Failed to load resource: the server responded with a status of 500 ()

index.ts:189
{ detail: "WebUI could not connect to Ollama" }

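The "could not connect to Ollama" detail suggests the follow-up call was routed through an Ollama code path even though the model is served by LM Studio over the OpenAI API. As a quick diagnostic (a generic connectivity check, not part of the original report), you can probe the configured Ollama URL directly; /api/version is a standard Ollama endpoint and 11434 its default port.

# Hypothetical diagnostic: is any Ollama server reachable at the URL
# configured in Open WebUI? Adjust OLLAMA_URL to your setup.
import requests

OLLAMA_URL = "http://localhost:11434"

try:
    r = requests.get(f"{OLLAMA_URL}/api/version", timeout=5)
    r.raise_for_status()
    print("Ollama reachable:", r.json())
except requests.RequestException as exc:
    # This is the situation the console log reflects: the backend cannot
    # reach Ollama and surfaces a 500 with the "could not connect" detail.
    print("Ollama not reachable:", exc)
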
Additional Information

Screenshot: https://github.com/user-attachments/assets/d614dfc2-2b4b-43d1-8c80-d0aa15beb62f

All other generators work as expected.
Follow Up is enabled.

GiteaMirror added the bug label 2026-05-05 20:01:12 -05:00

@cma2t3r commented on GitHub (Aug 22, 2025):

Screenshot: https://github.com/user-attachments/assets/e3b6f8ed-3731-4f22-b63a-65526c02cc2b

Enabling an Ollama endpoint under Admin Settings > Connections gets rid of the JS/AJAX error in the browser console, but follow-up prompts are still not appended to the chat. The follow-up prompts aren't always generated either; it took multiple retries before one appeared in the LM Studio log.

LM Studio log:

2025-08-22 13:16:51 [DEBUG]
Received request: POST to /v1/chat/completions with body {
  "model": "qwen/qwen3-4b-2507",
  "messages": [
    {
      "role": "user",
      "content": "### Task:\nSuggest 3-5 relevant follow-up questions... <Truncated in logs> ... make this as helpful as possible!\n</chat_history>"
    }
  ],
  "stream": false
}
2025-08-22 13:16:51 [INFO]
[LM STUDIO SERVER] Running chat completion on conversation with 1 messages.

___ useless log data removed from here ___

2025-08-22 13:16:54 [INFO]
[qwen/qwen3-4b-2507] Generated prediction: {
  "id": "chatcmpl-zjkhylkn80bgnvun84g0bs",
  "object": "chat.completion",
  "created": 1755883011,
  "model": "qwen/qwen3-4b-2507",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "[\n \"What are some common challenges people face when starting a new project?\",\n \"Can you give examples of successful projects that followed a similar approach?\",\n \"How can I get started with setting clear goals for my project?\",\n \"What tools or resources would you recommend for tracking progress?\"\n]",
        "reasoning_content": "",
        "tool_calls": []
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 270,
    "completion_tokens": 61,
    "total_tokens": 331
  },
  "stats": {},
  "system_fingerprint": "qwen/qwen3-4b-2507"
}
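
Since generation itself appears intermittent here (it took multiple retries), a simple consistency check can quantify how often the model actually returns a parseable array. This is a hypothetical sketch with the same assumed LM Studio URL and stand-in prompt as the earlier one.

# Hypothetical consistency check: call LM Studio N times and count how
# often the reply parses as a JSON array. URL and prompt are assumptions.
import json
import requests

BASE_URL = "http://localhost:1234"
payload = {
    "model": "qwen/qwen3-4b-2507",
    "messages": [
        {
            "role": "user",
            "content": "### Task:\nSuggest 3-5 relevant follow-up questions for this chat.",
        }
    ],
    "stream": False,
}

ok, N = 0, 10
for _ in range(N):
    r = requests.post(f"{BASE_URL}/v1/chat/completions", json=payload, timeout=60)
    content = r.json()["choices"][0]["message"]["content"]
    try:
        if isinstance(json.loads(content), list):
            ok += 1
    except json.JSONDecodeError:
        pass
print(f"{ok}/{N} responses were a parseable JSON array")

A low hit rate would point at the model or task prompt; a high hit rate would point back at Open WebUI's handling of the response.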
