[PR #537] [MERGED] Add max_tokens workaround for gpt-4-vision-preview model #20365

Closed
opened 2026-04-20 02:55:21 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/open-webui/open-webui/pull/537
Author: @CreatorGhost
Created: 1/19/2024
Status: Merged
Merged: 1/22/2024
Merged by: @tjbck

Base: main ← Head: fix-gpt-4-vision


📝 Commits (4)

  • 8662437 Add workaround for gpt-4-vision-preview model
  • 60afd6e Add workaround for gpt-4-vision-preview model that support 4k tokens
  • b26e0fb refac
  • 83181b7 fix: add max_token only when field not present

📊 Changes

1 file changed (+29 additions, -20 deletions)


📝 backend/apps/openai/main.py (+29 -20)

📄 Description

This PR adds a workaround for the gpt-4-vision-preview model in the OpenAI API backend.

Due to an issue with this model, the 'max_tokens' parameter must be set explicitly in the request body. The changes check whether the requested model is 'gpt-4-vision-preview' and, if the field is not already present, set 'max_tokens' to 4000.
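The logic described above can be sketched as a small helper. This is an illustrative sketch, not the exact diff from `backend/apps/openai/main.py`; the function name `apply_vision_workaround` is hypothetical.

```python
def apply_vision_workaround(payload: dict) -> dict:
    """Inject max_tokens for gpt-4-vision-preview if the caller omitted it.

    Hypothetical helper illustrating the PR's workaround; the real change
    lives inline in backend/apps/openai/main.py.
    """
    if payload.get("model") == "gpt-4-vision-preview":
        # Without an explicit max_tokens this model truncates its output,
        # so default to 4000 (per the PR description). setdefault leaves
        # any caller-supplied value untouched, matching commit 83181b7
        # ("add max_token only when field not present").
        payload.setdefault("max_tokens", 4000)
    return payload
```

Using `setdefault` rather than an unconditional assignment preserves a user-specified `max_tokens`, which is the fix the final commit in this PR introduced.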

This workaround is temporary and should be removed once the issue with the model is fixed by OpenAI.

A screenshot is attached as a reference showing that the gpt-4-vision-preview model works with this workaround applied. It can be found in the related pull request or issue.


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-20 02:55:21 -05:00

Reference: github-starred/open-webui#20365