[PR #10360] [MERGED] fix: Update Ollama option handling in payload.py's convert_payload_openai_to_ollama #38137

Closed
opened 2026-04-25 11:17:14 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/open-webui/open-webui/pull/10360
Author: @ferret99gt
Created: 2/19/2025
Status: Merged
Merged: 2/20/2025
Merged by: @tjbck

Base: dev ← Head: ollama-option-conversion-fixes


📝 Commits (7)

  • a560f78 Remove mapping of presence_penalty to new_topix_penalty
  • e6919c3 Remove mapping of frequency_penalty to repeat_penalty
  • aea8977 Remove mapping of max_completion_tokens
  • adde373 Remove parameters that map directly, as they are part of options
  • fea169a Core fix for num_predict not working.
  • 57b01cf Fix for system prompt setting
  • 8125b04 Remove empty ollama_options

📊 Changes

1 file changed (+10 additions, -25 deletions)

View changed files

📝 backend/open_webui/utils/payload.py (+10 -25)

📄 Description

Pull Request Checklist

Note to first-time contributors: Please open a discussion post in Discussions and describe your changes before submitting a pull request.

Before submitting, make sure you've checked the following:

  • Target branch: Please verify that the pull request targets the dev branch.
  • Description: Provide a concise description of the changes made in this pull request.
  • Changelog: Ensure a changelog entry following the format of Keep a Changelog is added at the bottom of the PR description.
  • Documentation: Have you updated relevant documentation (Open WebUI Docs) or other documentation sources?
  • Dependencies: Are there any new dependencies? Have you updated the dependency versions in the documentation?
  • Testing: Have you written and run sufficient tests for validating the changes?
  • Code review: Have you performed a self-review of your code, addressing any coding standard issues and ensuring adherence to the project's coding standards?
  • Prefix: To clearly categorize this pull request, prefix the pull request title, using one of the following:
    • BREAKING CHANGE: Significant changes that may affect compatibility
    • build: Changes that affect the build system or external dependencies
    • ci: Changes to our continuous integration processes or workflows
    • chore: Refactor, cleanup, or other non-functional code changes
    • docs: Documentation update or addition
    • feat: Introduces a new feature or enhancement to the codebase
    • fix: Bug fix or error correction
    • i18n: Internationalization or localization changes
    • perf: Performance improvement
    • refactor: Code restructuring for better maintainability, readability, or scalability
    • style: Changes that do not affect the meaning of the code (white-space, formatting, missing semi-colons, etc.)
    • test: Adding missing tests or correcting existing tests
    • WIP: Work in progress, a temporary label for incomplete or ongoing work

Changelog Entry

Description

  • This is part 1 of 2 of recreating PR #10015 more atomically.
  • The goal is to update payload.py's convert_payload_openai_to_ollama to better handle options for Ollama, particularly num_predict, which currently does not work if the user sets it via personal or chat-specific advanced parameters.
  • For most of these changes, the issue was that the parameters/options were being looked for in openai_payload, but they were actually present in openai_payload['options'], so they were never found.
  • Each commit is atomic, with notes; please let me know if this was too granular.
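To make the core issue concrete, here is a hypothetical example of the payload shape described above (the keys and values are illustrative, not taken from the actual Open WebUI source): advanced parameters set in the UI arrive nested under "options", not at the top level where the old code looked.

```python
# Hypothetical payload shape, as described in this PR: user-set advanced
# parameters live under "options", not at the top level.
openai_payload = {
    "model": "llama3",
    "messages": [{"role": "user", "content": "Hi"}],
    "options": {"max_tokens": 128, "system": "You are terse."},
}

# The old code looked here and found nothing:
assert "max_tokens" not in openai_payload  # top level: absent
# ...while the value actually lives one level down:
assert openai_payload["options"]["max_tokens"] == 128
```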

Removed

  • Removed the mapping of presence_penalty to new_topic_penalty. new_topic_penalty is not a valid Ollama parameter, and Ollama supports presence_penalty natively. presence_penalty is being added to Open WebUI in PR #10016.
  • Removed the mapping of frequency_penalty to repeat_penalty. repeat_penalty is a valid Ollama parameter, but Ollama also supports frequency_penalty natively. repeat_penalty is being added to Open WebUI in PR #10016, allowing Ollama users to choose which method they prefer.
  • Removed the mapping of max_completion_tokens to num_predict. The UI never sends max_completion_tokens. It is only used for OpenAI models o1 and o3, and openai.py and tasks.py populate it from max_tokens.
  • Removed the mapping of temperature, seed, and top_p from the payload to options. These options are already present in the options dictionary and are not in the base payload.
  • Removed an empty ollama_options dictionary that was not used.
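The removals above all follow from the same observation: parameters such as temperature, seed, and top_p already arrive inside the options dictionary, so per-key lookups in the base payload were dead code. A minimal sketch of the resulting pass-through (function name and payload shapes are assumptions for illustration, not the actual source):

```python
def passthrough_options(openai_payload: dict) -> dict:
    # Sketch: since temperature, seed, and top_p already live under
    # "options", copying the options dict through wholesale suffices;
    # no per-key mapping from the base payload is needed.
    ollama_payload = {"model": openai_payload.get("model")}
    if openai_payload.get("options"):
        ollama_payload["options"] = openai_payload["options"]
    return ollama_payload

payload = {"model": "llama3",
           "options": {"temperature": 0.7, "seed": 42, "top_p": 0.9}}
print(passthrough_options(payload))
# → {'model': 'llama3', 'options': {'temperature': 0.7, 'seed': 42, 'top_p': 0.9}}
```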

Fixed

  • Fixed the mapping of max_tokens to num_predict. The issue was that max_tokens was being looked for in openai_payload but was present in openai_payload['options'] instead. After remapping, we delete max_tokens to stop Ollama from throwing a warning about invalid options.
  • Fixed passing the "system" prompt to Ollama. The system prompt is present in openai_payload['options'], but Ollama does not accept it as an option; it must be part of the payload. After copying it to the payload, we delete it from the options to prevent Ollama from throwing a warning about invalid options.
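The two fixes above can be sketched together as follows. This is a hedged illustration of the behavior described in this PR, not the exact Open WebUI implementation; the function name and payload shapes are assumptions.

```python
def convert_fixed(openai_payload: dict) -> dict:
    """Sketch of the two fixes described above (assumed payload shapes)."""
    ollama_payload = {"model": openai_payload.get("model")}
    options = dict(openai_payload.get("options", {}))

    # Fix 1: max_tokens lives in options, not the base payload. Remap it
    # to num_predict and delete the original key so Ollama does not warn
    # about an invalid option.
    if "max_tokens" in options:
        options["num_predict"] = options.pop("max_tokens")

    # Fix 2: "system" is not a valid Ollama option; it must be part of
    # the payload itself. Copy it up, then delete it from options.
    if "system" in options:
        ollama_payload["system"] = options.pop("system")

    if options:
        ollama_payload["options"] = options
    return ollama_payload

payload = {"model": "llama3",
           "options": {"max_tokens": 128, "system": "You are terse."}}
print(convert_fixed(payload))
# → {'model': 'llama3', 'system': 'You are terse.',
#    'options': {'num_predict': 128}}
```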

Additional Information

  • Originally submitted as PR #10015 but broken up to be more atomic.
  • Originally opened as a discussion as #9770.

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-25 11:17:14 -05:00

Reference: github-starred/open-webui#38137