issue: Custom Models (from base Ollama model) ignores think (Ollama) advanced parameter (and possibly more) #5541

Closed
opened 2025-11-11 16:23:53 -06:00 by GiteaMirror · 6 comments
Owner

Originally created by @silentoplayz on GitHub (Jun 14, 2025).

Originally assigned to: @tjbck on GitHub.

Check Existing Issues

  • I have searched the existing issues and discussions.
  • I am using the latest version of Open WebUI.

Installation Method

Docker

Open WebUI Version

v0.6.14

Ollama Version (if applicable)

v0.9.0

Operating System

Edition: Windows 11 Pro | Version: 24H2 | OS Build: 26100.4351 | Windows Feature Experience Pack: 1000.26100.107.0

Browser (if applicable)

LibreWolf v135.0.1-1 (Firefox)

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
      • Start with the initial platform/version/OS and dependencies used,
      • Specify exact install/launch/configure commands,
      • List URLs visited, user input (incl. example values/emails/passwords if needed),
      • Describe all options and toggles enabled or changed,
      • Include any files or environmental changes,
      • Identify the expected and actual result at each stage,
      • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

When the think (Ollama) advanced parameter is toggled off for a custom model derived from an Ollama base model, the model's responses should not contain thinking/thought tags. The parameter should effectively disable the internal thinking process of the model using Ollama's model parameter.

Actual Behavior

Despite think (Ollama) being toggled off in the advanced parameters for a custom model (created from an Ollama base model), the model continues to generate thinking/thought tags within its responses. This indicates that the think (Ollama) parameter is being ignored or is not functioning correctly for custom models when set to disabled. Other Ollama advanced parameters for models should be examined and tested thoroughly.
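For context, Ollama (≥ v0.9.0) accepts a top-level think field on its /api/chat endpoint, which is what this toggle is expected to control. A minimal sketch of the request body the UI should ultimately send when the toggle is off — the helper function below is illustrative, not Open WebUI's actual code:

```python
# Sketch of an Ollama /api/chat request body with thinking disabled.
# The field names follow Ollama's documented chat API; build_chat_payload
# is a hypothetical helper for illustration only.

def build_chat_payload(model, messages, think=None):
    """Build an /api/chat body; omit `think` entirely when unset."""
    payload = {"model": model, "messages": messages, "stream": True}
    if think is not None:
        payload["think"] = think  # top-level field, not inside "options"
    return payload

payload = build_chat_payload(
    "qwen3:8b",
    [{"role": "user", "content": "Explain quantum entanglement step by step."}],
    think=False,
)
# payload now carries "think": False alongside model/messages/stream
```

If the custom-model code path never places `think: false` into this body, Ollama falls back to its default and the model keeps emitting thought tags — which matches the behavior reported here.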

Steps to Reproduce

  1. Ensure you have working installations of Open WebUI (e.g., using the main Docker image) and Ollama.
  2. Verify Ollama is running and accessible (default: http://localhost:11434).
  3. Pull a suitable reasoning/thinking base model, for example: ollama pull qwen3:8b.
  4. Open Open WebUI in your browser (e.g., http://localhost:8080).
  5. Navigate to the "Workspace" section in the sidebar.
  6. Click "Create" and then select "Custom Model".
  7. In the "Create Custom Model" dialog:
  • Set "Base Model" to the Ollama model you pulled (e.g., qwen3:8b).
  • Give it a distinct "Model Name" (e.g., MyQwenCustom).
  • Do not modify any parameters in the "Advanced Parameters" section for the custom model at this point.
  8. Click "Save Model".
  9. Start a new chat session.
  10. From the model dropdown at the top, select your newly created custom model (MyQwenCustom).
  11. Open the "Advanced Parameters" sidebar by clicking the Controls icon at the top right of the chat.
  12. Locate the think (Ollama) parameter and toggle it off.
  13. Send a query that typically triggers a thought process (e.g., "Explain the concept of quantum entanglement step by step.").
  14. Observe the model's response: it still contains thinking/thought tags, despite the think (Ollama) advanced parameter being toggled off.

For comparison (demonstrates expected behavior with base model):

  1. Start another new chat session.
  2. Select the base model directly (e.g., qwen3:8b) from the model dropdown.
  3. Open the "Advanced Parameters" sidebar and toggle think (Ollama) off.
  4. Send the exact same query as in step 13 above.
  5. Observe the model's response. It should not contain thinking/thought tags, confirming the parameter works correctly for base models.
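Since the toggle works for base models but not custom models, the symptom is consistent with the custom-model code path dropping the parameter before the request reaches Ollama — for instance, if options were filtered through an allow-list that predates the think field. A hypothetical sketch of that failure mode (names are illustrative, not Open WebUI's actual code):

```python
# Hypothetical illustration of how a newer parameter can be silently
# dropped on one code path: an allow-list filter that was never updated
# to include "think". Illustrative only — not Open WebUI's actual code.

OLD_ALLOWED = {"temperature", "top_p", "num_ctx"}  # hypothetical allow-list

def filter_params(params, allowed):
    """Keep only recognized parameters; unknown keys are discarded."""
    return {k: v for k, v in params.items() if k in allowed}

requested = {"temperature": 0.7, "think": False}  # what the chat Controls set
sent = filter_params(requested, OLD_ALLOWED)      # what actually reaches Ollama
# "think" is gone from `sent`, so Ollama uses its default (thinking on)
```

Under this sketch the base-model path would simply pass the parameter through unfiltered, which is why toggling it there behaves as expected.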

Logs & Screenshots

To be clear, I have not modified the custom model's or the base model's (Qwen 3 8B) advanced parameters.
Custom model (from base model):
![Image](https://github.com/user-attachments/assets/ddf37931-64a5-4193-8b4a-acb4ac90352c)
Base model (I had to zoom out by 40% for this screenshot):
![Image](https://github.com/user-attachments/assets/177881fa-04fc-4461-8dd2-d8f2990629c7)
When using only the base model, the think (Ollama) advanced parameter works as expected:
![Image](https://github.com/user-attachments/assets/efecfb42-097d-4620-a73c-9efe181a4c70)
![Image](https://github.com/user-attachments/assets/6f909f9b-ccf1-402b-b06c-8a1c647e0b11)

Additional Information

Initially, I suspected this issue might be related to knowledge collections, but further testing revealed that knowledge collections or files have no bearing on the problem. The issue consistently occurs with custom models derived from Ollama base models, regardless of whether a knowledge collection is attached or not.

This behavior is specific to custom models; the think (Ollama) parameter functions as expected (i.e., it successfully disables think tags) when used directly with the base Ollama models. I am unsure if other advanced parameters might be similarly affected for custom models.

GiteaMirror added the bug label 2025-11-11 16:23:53 -06:00

@rgaricano commented on GitHub (Jun 14, 2025):

Could you try adding `model_config = ConfigDict(extra="allow")` at line 100 of https://github.com/open-webui/open-webui/blob/63256136ef8322210c01c2bb322097d1ccfb8c6f/backend/open_webui/models/knowledge.py ?
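For readers unfamiliar with the suggestion: by default, Pydantic v2 models silently discard fields that are not declared on the model, while extra="allow" keeps them. A minimal self-contained demo (not Open WebUI's actual KnowledgeModel; field names are illustrative):

```python
# Demo of Pydantic v2's `extra` behavior: the default drops undeclared
# fields, extra="allow" retains them. Models here are illustrative only.
from pydantic import BaseModel, ConfigDict

class Strict(BaseModel):
    name: str  # default extra="ignore": unknown fields are discarded

class Lenient(BaseModel):
    model_config = ConfigDict(extra="allow")
    name: str  # unknown fields are kept as attributes

data = {"name": "demo", "think": False}
strict_obj = Strict(**data)    # "think" is silently dropped
lenient_obj = Lenient(**data)  # "think" survives as an attribute
```

The hypothesis behind the suggestion was that a strict model somewhere in the pipeline was stripping the think parameter; as the next comment shows, that change alone did not resolve the issue.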


@silentoplayz commented on GitHub (Jun 14, 2025):

> model_config = ConfigDict(extra="allow")

That didn't appear to make a difference.
![Image](https://github.com/user-attachments/assets/a573e6d8-57ea-4d04-8e04-31a17035136a)


@silentoplayz commented on GitHub (Jun 14, 2025):

Update!

There is a very real possibility that the Merge Responses button in multi-response scenarios also ignores the think (Ollama) advanced parameter, as shown in the screenshots attached below. (Tested on two different Open WebUI instances on my PC)
![Image](https://github.com/user-attachments/assets/12f14de6-10c7-4d8f-864d-eb81ac483156)
![Image](https://github.com/user-attachments/assets/bd98d828-af1e-4874-b7d0-6361e42556d2)

Please let me know if I should open up a separate, new issue for this.


@tjbck commented on GitHub (Jun 16, 2025):

Can't seem to reproduce on my end; I was able to correctly create a custom model with the think param enabled.

![Image](https://github.com/user-attachments/assets/150a9a8b-eedf-47da-a4e5-c75b0d515aac)


@tjbck commented on GitHub (Jun 16, 2025):

Should be addressed with ab877e1d7ea77f6a5c5db63bda72041a92d96f7a!


@silentoplayz commented on GitHub (Jun 16, 2025):

> Should be addressed with ab877e1!

This comment solves it! Thank you for the speedy fix! https://github.com/open-webui/open-webui/issues/14975#issuecomment-2973265418 is still relevant though, as this is a separate issue on its own.

Reference: github-starred/open-webui#5541