[GH-ISSUE #13180] feat: Support gpt-image-1 from OpenAI new Image gen model #55500
Originally created by @sFritsch09 on GitHub (Apr 23, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/13180
Check Existing Issues
Problem Description
To support OpenAI's GPT-Image-1 model for image generation, Open WebUI needs to pass parameters such as the image dimensions and a quality setting, both of which the Images API supports for this model.
Below is a general outline of how an image generation API can be called with quality and size parameters.
Example: Calling GPT-Image-1 with Quality and Size Parameters
Notes:
Customize Image Output
You can configure the following output options:
Size: Image dimensions (e.g., 1024x1024, 1024x1536)
Quality: Rendering quality (e.g. low, medium, high)
Format: File output format (e.g. png, jpeg, webp)
Compression: Compression level (0-100%) for JPEG and WebP formats
Background: Transparent or opaque
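Taken together, the options above map onto parameters of OpenAI's Images API (per the model docs linked below). A minimal sketch of assembling such a request; the parameter names follow the current OpenAI documentation, and the actual API call is left commented out since it requires a valid key:

```python
from typing import Optional

def build_image_request(prompt: str,
                        size: str = "1024x1024",
                        quality: str = "high",
                        fmt: str = "png",
                        compression: Optional[int] = None,
                        background: str = "auto") -> dict:
    """Assemble keyword arguments for an images.generate call."""
    params = {
        "model": "gpt-image-1",
        "prompt": prompt,
        "size": size,                  # 1024x1024, 1536x1024, 1024x1536, auto
        "quality": quality,            # low, medium, high, auto
        "output_format": fmt,          # png, jpeg, webp
        "background": background,      # transparent, opaque, auto
    }
    # Compression only applies to lossy/compressible formats.
    if compression is not None and fmt in ("jpeg", "webp"):
        params["output_compression"] = compression  # 0-100
    return params

if __name__ == "__main__":
    params = build_image_request("a watercolor dinosaur", quality="low")
    print(params["model"])
    # With the official SDK (and OPENAI_API_KEY set in the environment):
    # from openai import OpenAI
    # client = OpenAI()
    # resp = client.images.generate(**params)
    # image_b64 = resp.data[0].b64_json
```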
Model Docs:
https://platform.openai.com/docs/guides/image-generation?image-generation-model=gpt-image-1
https://platform.openai.com/docs/models/gpt-image-1
Describe the solution you'd like
Support gpt-image-1
Alternatives Considered
No response
Additional Context
No response
@spammenotinoz commented on GitHub (Apr 24, 2025):
I uploaded a pipe; it seems to do the job for both image creation and editing.
https://openwebui.com/f/spammenot/gpt_image_1
But please review the code, as I can't code.
Edit: 0.3.0 updated to non-blocking
@coskunm commented on GitHub (Apr 24, 2025):
Is it working? I tried it, but it isn't generating an image. Also, I didn't understand the proxy variables. Could you share an example of the Valves screen?
@spammenotinoz commented on GitHub (Apr 24, 2025):
Like 4.1, you do need to be a verified organisation to use this model.
Here is a sample screenshot of my settings; proxy etc. left as default.

Yes it works
@kristaller486 commented on GitHub (Apr 24, 2025):
Unfortunately, it doesn't work if you need to edit the generated image.
@spammenotinoz commented on GitHub (Apr 24, 2025):
As above, it works for me. If you have an example, I can check it.
Note that only png, webp, or jpg files smaller than 25 MB are currently supported by OpenAI.
Otherwise, I wonder if it works for me because I hard-coded client-side image compression to 1024x1024.
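The client-side compression mentioned above can be approximated with a small helper that computes a target size fitting inside 1024x1024 while preserving aspect ratio (a sketch; the actual pipe may do this differently):

```python
def fit_within(width: int, height: int, max_side: int = 1024) -> tuple:
    """Scale (width, height) down so neither side exceeds max_side,
    preserving aspect ratio. Images already small enough are unchanged."""
    longest = max(width, height)
    if longest <= max_side:
        return width, height
    scale = max_side / longest
    return max(1, round(width * scale)), max(1, round(height * scale))

# With Pillow (assumed installed) the resize itself would be:
# from PIL import Image
# img = Image.open("input.png")
# img.thumbnail((1024, 1024))   # in-place, keeps aspect ratio
# img.save("resized.webp")
```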
@danieldilly commented on GitHub (Apr 24, 2025):
After adding this pipeline function Open WebUI doesn't load for me anymore. I just get a black background. How can I fix this? This was the first function I ever tried using.
@danieldilly commented on GitHub (Apr 24, 2025):
I removed the container and re-added it and it still doesn't work.
Can I not use Open WebUI ever again now or what?
Update: I had to remove the volume and I lost all my chats, prompts, and everything, but at least it's working again.
Update: Tried the function again, same result. After enabling it I just get a black screen when loading Open WebUI. It totally breaks it for me. Thanks for trying though.
@barnabehvrd commented on GitHub (Apr 24, 2025):
I do not have the same issue as @danieldilly with v0.3.1. Have you tried with this version? However, I have another issue:
Whenever I ask for an image generation, the plugin will also edit the image it has just generated, resulting in ~$0.02 per generation for a low-quality 1024x1024 image, due to several API calls with a lot of input tokens from the editing.
cf : https://files.voltis.cloud/S7Tqu883bBc1UWqBCYcl8eG6q37yQKmf.mp4
I'm also happy to share the little dinosaur with you :

Finally, @spammenotinoz could you create a GitHub repo so we could move the issue to it, instead of talking in this issue ?
@spammenotinoz commented on GitHub (Apr 25, 2025):
Uploaded version 0.3.2, logic change. Edit function is only invoked when latest user message (prompt) contains an image.
Removed proxy support (poor implementation, was impacting reliability).
Long weekend here, so I will think about adding a repo, but as mentioned above I can't code. The issue is that, unlike ChatGPT, to edit an image you need to call a different endpoint; you're not actually conversing with an LLM.
i.e. I am not coding to detect whether the user wants to edit an existing image, just making the assumption that if there is an image in the prompt, invoke the EDIT function/endpoint.
The limitation is that if you want to edit an image returned by the model, you need to paste it back in.
I can't code, this was really just a quick implementation, but overall has been working for me.
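The generate-vs-edit assumption described above can be sketched as a small dispatcher over OpenAI-style chat messages (helper names are hypothetical, not the posted pipe code):

```python
def latest_user_message(messages: list) -> dict:
    """Return the most recent message with role 'user', or None."""
    for msg in reversed(messages):
        if msg.get("role") == "user":
            return msg
    return None

def has_image(message: dict) -> bool:
    """True if the message content contains an image part
    (list-style multimodal content with an 'image_url' entry)."""
    content = message.get("content")
    if not isinstance(content, list):
        return False
    return any(part.get("type") == "image_url" for part in content)

def choose_endpoint(messages: list) -> str:
    """Pick the Images API endpoint: edit when the latest user
    prompt carries an image, generate otherwise."""
    msg = latest_user_message(messages)
    if msg is not None and has_image(msg):
        return "images.edit"
    return "images.generate"
```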
@JiangNanGenius commented on GitHub (Apr 26, 2025):
Thank you for your helpful code! I have a suggestion: could you add an option to modify the base URL? This would be especially useful for users outside the US, as it allows them to specify different API servers or proxy addresses. Hope you can consider this—thank you!
@JiangNanGenius commented on GitHub (Apr 26, 2025):
"""
title: OpenAI Image Generator (GPTo\GPT-image-1)
description: Quick Pipe to enable image creation and editing with gpt-image-1
author: MorningBean.ai
author_url: https://morningbean.ai
funding_url: FREE
version: 0.3.2
license: MIT
requirements: typing, pydantic, openai
environment_variables:
disclaimer: This pipe is provided as is without any guarantees.
Please ensure that it meets your requirements.
0.3.2 Logic fix to only invoke editing when latest user message (prompt) contains an image.
0.3.0 BugFix move to Non-Blocking
"""
import json
import random
import base64
import asyncio
import re
import tempfile
import os
from typing import List, AsyncGenerator, Callable, Awaitable
from pydantic import BaseModel, Field
from openai import OpenAI
class Pipe:
    class Valves(BaseModel):
        OPENAI_API_KEYS: str = Field(
            default="", description="OpenAI API Keys, comma-separated"
        )
        IMAGE_NUM: int = Field(default=2, description="Number of images (1-10)")
        IMAGE_SIZE: str = Field(
            default="1024x1024",
            description="Image size: 1024x1024, 1536x1024, 1024x1536, auto",
        )
        IMAGE_QUALITY: str = Field(
            default="auto", description="Image quality: high, medium, low, auto"
        )
        MODERATION: str = Field(
            default="auto", description="Moderation strictness: auto (default) or low"
        )
        BASE_URL: str = Field(
            default="https://api.openai.com", description="Custom base URL for OpenAI"
        )
@JiangNanGenius commented on GitHub (Apr 26, 2025):
I tried changing it to add base URL settings; tested successfully.
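One wrinkle with a configurable base URL: the OpenAI Python SDK expects the URL to include the /v1 path segment (its default is https://api.openai.com/v1), while the valve above defaults to the bare host. A small normalizer (a sketch, not the posted code) avoids that mismatch:

```python
def normalize_base_url(url: str) -> str:
    """Ensure the base URL ends with /v1, as the OpenAI SDK expects.
    Trailing slashes are stripped first so '/v1/' and '/v1' both work."""
    url = url.rstrip("/")
    if not url.endswith("/v1"):
        url += "/v1"
    return url

# client = OpenAI(api_key=key, base_url=normalize_base_url(valves.BASE_URL))
```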
@GaussianGuaicai commented on GitHub (Apr 26, 2025):
The biggest downside is too many tokens when you have Title Generation or Autocompletion enabled, because image data is treated as text tokens too; you always get the millions-of-tokens warning.
@spammenotinoz commented on GitHub (Apr 26, 2025):
Ahh, I use Title Generation and don't seem to have the issue. My titles use nano and cost about 2c per day.
I don't use autocompletion, but surely this bug would also impact standard chats with images, as these features are also based on the user prompt, not the response?
Will look into this after the holidays.
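One way to avoid the token blow-up described above is to strip inline image data from messages before they are forwarded to auxiliary tasks like title generation (a sketch of the idea; Open WebUI's actual internals may handle this differently):

```python
def strip_image_parts(messages: list) -> list:
    """Return a copy of the messages with image_url parts removed, so
    base64 image payloads are not counted as text tokens downstream."""
    cleaned = []
    for msg in messages:
        content = msg.get("content")
        if isinstance(content, list):
            # Keep only non-image content parts; original dicts untouched.
            kept = [p for p in content if p.get("type") != "image_url"]
            msg = {**msg, "content": kept}
        cleaned.append(msg)
    return cleaned
```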
@mazierovictor commented on GitHub (Apr 27, 2025):
How do I integrate this function into an assistant?
@MichaelMKenny commented on GitHub (Apr 28, 2025):
I've updated @spammenotinoz's pipe to fix it generating multiple images for each image provided when editing. It also no longer re-calls the OpenAI image-gen API when a failure happens. Here's my gist:
https://gist.github.com/MichaelMKenny/c6f07ce661165d1a84ef7b41ad08216b
Enjoy! And thank you @spammenotinoz for making the original code :)
@auggie246 commented on GitHub (Apr 28, 2025):
When I upload an image via Open WebUI, the `body` that's getting passed into the pipe does not contain any images.
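For debugging cases like this, it helps to know the shape the pipe expects: an uploaded image typically arrives as a data: URL inside a list-style multimodal content entry. A hedged helper for pulling those out of the body (the field names are assumptions based on the common OpenAI-style message format, not verified against a specific Open WebUI version):

```python
import base64
import re

# Matches data URLs for the image formats OpenAI currently accepts.
DATA_URL_RE = re.compile(r"^data:image/(png|jpeg|webp);base64,(.+)$", re.S)

def extract_images(body: dict) -> list:
    """Decode every base64 data-URL image found in the body's messages."""
    images = []
    for msg in body.get("messages", []):
        content = msg.get("content")
        if not isinstance(content, list):
            continue  # plain-text message, nothing to extract
        for part in content:
            if part.get("type") != "image_url":
                continue
            url = part.get("image_url", {}).get("url", "")
            m = DATA_URL_RE.match(url)
            if m:
                images.append(base64.b64decode(m.group(2)))
    return images
```

If this returns an empty list for a request that had an upload, the image was dropped before the pipe was called rather than inside it.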