mirror of
https://github.com/open-webui/open-webui.git
synced 2026-05-07 11:28:35 -05:00
No answer from chat GPT / cut off answer when I change the value of num_report #2371
Originally created by @Naesue on GitHub (Oct 13, 2024).
Bug Report
Installation Method
I used git clone, set up the Python environment with pyenv, and launched the app with Docker.
Environment
Open WebUI Version: v0.3.32
Operating System: macOS 15.0 Sequoia
Browser (if applicable): Chrome 129.0.6668.101 (Official Build) (arm64); also Safari 18.0.1
Expected Behavior:
I expect to be able to change the max tokens (num_report) parameter when interacting with chatgpt-4o-latest or o1-preview, and to get longer answers from them.
Actual Behavior:
If I change num_report to even just 129, chatgpt-4o-latest answers but the reply quickly gets cut off, and o1-preview does not generate an answer at all. With 256, the chatgpt-4o-latest answer was longer but still got cut off, and o1-preview remained empty (as if it were still loading).
If I change num_report to 127, one below the default of 128, I hit the same issue.
This happens in both Safari and Chrome.
I tried chatgpt-4o-latest and o1-preview individually, and both models at the same time; the result was the same.
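The "cut off" behavior described above matches what OpenAI's chat completions API does when a reply hits its token cap: the text is truncated and the response reports finish_reason 'length' instead of 'stop'. A minimal sketch of that semantics (the whitespace "tokenizer" and the function name are illustrative, not Open WebUI's actual code):

```python
# Illustrative sketch of max-token truncation semantics.
# The whitespace tokenizer and names here are hypothetical.

def complete(answer_tokens, max_tokens):
    """Return (text, finish_reason) under a token budget."""
    if len(answer_tokens) > max_tokens:
        # Budget exhausted before the answer finished: cut off.
        return " ".join(answer_tokens[:max_tokens]), "length"
    # Answer fit inside the budget: normal stop.
    return " ".join(answer_tokens), "stop"

answer = "the quick brown fox jumps over the lazy dog".split()

text, reason = complete(answer, max_tokens=4)
print(reason)  # cap below the answer length -> "length" (cut off)

text, reason = complete(answer, max_tokens=128)
print(reason)  # generous cap -> "stop" (finished naturally)
```

A cap of 129 or 256 tokens is small for a long chat answer, which is consistent with replies that start but get cut.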
Description
Bug Summary:
I can't change the value of num_report, even by one in either direction.
Everything works fine if the value stays at the default.
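One possible explanation for the empty o1-preview answers, worth checking against OpenAI's API reference: o1-series models reject the legacy max_tokens field and expect max_completion_tokens instead, and they also spend part of that budget on hidden reasoning tokens, so a small cap can leave no room for visible output. A hedged sketch of a payload builder handling both cases (the function and the model-name check are assumptions for illustration, not Open WebUI's code):

```python
# Hypothetical payload builder for OpenAI's /v1/chat/completions.
# Per OpenAI's API reference, o1-series models take
# max_completion_tokens, while older chat models take max_tokens.

def build_payload(model: str, prompt: str, cap: int) -> dict:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    if model.startswith("o1"):
        # o1 also spends *reasoning* tokens from this budget, so a
        # cap like 129 or 256 may leave an empty visible answer.
        payload["max_completion_tokens"] = cap
    else:
        payload["max_tokens"] = cap
    return payload

print(sorted(build_payload("o1-preview", "hi", 256)))
print(sorted(build_payload("chatgpt-4o-latest", "hi", 256)))
```

If Open WebUI sends max_tokens to both models, that would fit the symptoms: truncated chatgpt-4o-latest replies and no visible o1-preview output.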
Reproduction Details
Steps to Reproduce:
I am on a MacBook (M3); I had the exact same issue on my Mac mini (M2).
I launched Docker via Docker Desktop, then went to the Open WebUI folder and started the server with "open-webui serve" in the terminal. Restarting my computer did not help.
Logs and Screenshots
Browser Console Logs:
Logs in Chrome:
submitPrompt ab6aa3e0-4560-460a-9ce1-ccdb8e5df9b1
UserMessage.svelte:85 UserMessage mounted
ResponseMessage.svelte:328 ResponseMessage mounted   [console repeat count: 8]
MultiResponseMessages.svelte:88 multiresponse:initHandler
ResponseMessage.svelte:328 ResponseMessage mounted   [console repeat count: 8]
Chat.svelte:803 modelId chatgpt-4o-latest
Chat.svelte:803 modelId o1-preview
MultiResponseMessages.svelte:138 {0: {…}, 1: {…}} {0: 0, 1: 0}
ResponseMessage.svelte:328 ResponseMessage mounted   [console repeat count: 2]
MultiResponseMessages.svelte:158 <div class="flex w-full message-51344793-471b-42b9-8192-f7dcf8c78828 svelte-icqdsw" id="message-51344793-471b-42b9-8192-f7dcf8c78828">…flex
+layout.svelte:82 usage {models: Array(1)}
+layout.svelte:82 usage {models: Array(2)}
+layout.svelte:82 usage {models: Array(3)}   [repeated ×14]
Chat.svelte:1424 {id: 'chatcmpl-AHugZAEoj9DestHloQDhi91xKF6fH', object: 'chat.completion', created: 1728832839, model: 'o1-preview-2024-09-12', choices: Array(1), …}
+layout.svelte:82 usage {models: Array(3)}   [repeated ×20]
+layout.svelte:82 usage {models: Array(2)}   [repeated ×6]
+layout.svelte:77 user-count {count: 1}
+layout.svelte:82 usage {models: Array(2)}   [repeated ×6]
Chat.svelte:1424 {id: 'chatcmpl-AHugZeSqoktRpdoi6e4QDrsFKoOyO', object: 'chat.completion', created: 1728832839, model: 'chatgpt-4o-latest', choices: Array(1), …}
Docker Container Logs:
No error messages appear in the Docker container logs.
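The two Chat.svelte:1424 lines above show that both requests did return full chat.completion objects, so the useful diagnostics are inside those objects: choices[0].finish_reason tells a truncated reply ('length') apart from a normal stop ('stop'), and for o1 models the usage block shows how much of the budget went to reasoning tokens. A sketch of that check on a response shaped like the logged ones (the concrete field values below are made up for illustration):

```python
# Diagnose a chat.completion response shaped like the logged ones.
# The sample values below are made-up illustrations.

def diagnose(response: dict) -> str:
    choice = response["choices"][0]
    if choice["finish_reason"] == "length":
        return "cut off: hit the token cap"
    if not choice["message"]["content"]:
        return "empty answer: budget likely spent on reasoning tokens"
    return "completed normally"

sample = {
    "object": "chat.completion",
    "model": "o1-preview-2024-09-12",
    "choices": [{"finish_reason": "length",
                 "message": {"role": "assistant", "content": ""}}],
    "usage": {"completion_tokens": 256,
              "completion_tokens_details": {"reasoning_tokens": 256}},
}
print(diagnose(sample))  # -> "cut off: hit the token cap"
```

Expanding the logged objects in the browser console and reading these two fields would confirm or rule out the token-cap explanation.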