No answer from chat GPT / cut off answer when I change the value of num_report #2371

Closed
opened 2025-11-11 15:05:54 -06:00 by GiteaMirror · 0 comments
Owner

Originally created by @Naesue on GitHub (Oct 13, 2024).

Bug Report

Installation Method

I used git clone, set up the Python environment with pyenv, and launched the app with Docker.

Environment

  • Open WebUI Version: v0.3.32

  • Operating System: macOS 15.0 Sequoia

  • Browser (if applicable): Chrome 129.0.6668.101 (Official Build) (arm64); also Safari 18.0.1

Confirmation:

  • I have read and followed all the instructions provided in the README.md.
  • I am on the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided the exact steps to reproduce the bug in the "Steps to Reproduce" section below.

Expected Behavior:

I expect to be able to change the max tokens (num_report) parameter when interacting with chatgpt-4o-latest or o1-preview, and to get longer answers from them.

Actual Behavior:

If I change num_report to even just 129, chatgpt-4o-latest answers but the response is quickly cut off, and o1-preview does not generate an answer at all. With 256, the chatgpt-4o-latest answer was longer but still got cut off, and o1-preview remained empty (as if it were still loading).
If I change num_report to 127, one below the default, I encounter the same issue.

I had this issue in both Safari and Chrome.
I tried chatgpt-4o-latest and o1-preview individually, and then both models at the same time; the result was the same.

Description

Bug Summary:
I can't change the value of num_report, even by 1 below or 1 above the default.
Everything works fine if the value remains at the default.

Reproduction Details

Steps to Reproduce:
I am on a MacBook M3. I also had the exact same issue on my Mac mini M2.

I launched Docker via Docker Desktop, then went to the Open WebUI folder and started the server with "open-webui serve" in the terminal. Restarting my computer did not help.

Logs and Screenshots

(Screenshots: SCR-20241013-pebu, SCR-20241013-pelg, SCR-20241013-penp, SCR-20241013-plfo)

Browser Console Logs:

Logs in Chrome:
submitPrompt ab6aa3e0-4560-460a-9ce1-ccdb8e5df9b1
UserMessage.svelte:85 UserMessage mounted
ResponseMessage.svelte:328 ResponseMessage mounted (×8)
MultiResponseMessages.svelte:88 multiresponse:initHandler
ResponseMessage.svelte:328 ResponseMessage mounted (×8)
Chat.svelte:803 modelId chatgpt-4o-latest
Chat.svelte:803 modelId o1-preview
MultiResponseMessages.svelte:138 {0: {…}, 1: {…}} {0: 0, 1: 0}
ResponseMessage.svelte:328 ResponseMessage mounted (×2)
MultiResponseMessages.svelte:158 <div class="flex w-full message-51344793-471b-42b9-8192-f7dcf8c78828 svelte-icqdsw" id="message-51344793-471b-42b9-8192-f7dcf8c78828">…</div> flex
+layout.svelte:82 usage {models: Array(1)}
+layout.svelte:82 usage {models: Array(2)}
+layout.svelte:82 usage {models: Array(3)} (repeated ×14)
Chat.svelte:1424 {id: 'chatcmpl-AHugZAEoj9DestHloQDhi91xKF6fH', object: 'chat.completion', created: 1728832839, model: 'o1-preview-2024-09-12', choices: Array(1), …}
+layout.svelte:82 usage {models: Array(3)} (repeated ×20)
+layout.svelte:82 usage {models: Array(2)} (repeated ×6)
+layout.svelte:77 user-count {count: 1}
+layout.svelte:82 usage {models: Array(2)} (repeated ×6)
Chat.svelte:1424 {id: 'chatcmpl-AHugZeSqoktRpdoi6e4QDrsFKoOyO', object: 'chat.completion', created: 1728832839, model: 'chatgpt-4o-latest', choices: Array(1), …}

Docker Container Logs:
No error messages in the Docker container logs.

Reference: github-starred/open-webui#2371