mirror of
https://github.com/open-webui/open-webui.git
synced 2026-05-07 19:38:46 -05:00
[GH-ISSUE #11931] feat: Models that support image generation, such as grok-2-image-1212 #16405
Originally created by @lovepiece on GitHub (Mar 21, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/11931
Check Existing Issues
Problem Description
The Grok API now includes a model for image generation, but it seems the grok-2-image-1212 model is not supported by Open WebUI yet.
The model is documented on the website: https://docs.x.ai/docs/guides/image-generations#parameters
Could you add support for Grok's image generation model?
Desired Solution you'd like
The xAI docs linked above include a Python usage example. Would it be possible to display the generated image in the chat dialog box after generation?
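A minimal sketch of such a call, assuming xAI's OpenAI-compatible `/v1/images/generations` endpoint as described in the linked docs (the helper names here are illustrative, not part of any library, and the base URL and payload shape follow those docs):

```python
# Hedged sketch: calling xAI's OpenAI-compatible image generation endpoint.
import json
import urllib.request

XAI_BASE = "https://api.x.ai/v1"  # base URL from the xAI docs


def build_image_request(prompt: str, model: str = "grok-2-image", n: int = 1) -> dict:
    # Note: this model rejects the `size` parameter, so it is deliberately omitted.
    return {"model": model, "prompt": prompt, "n": n}


def generate_images(prompt: str, api_key: str) -> dict:
    req = urllib.request.Request(
        f"{XAI_BASE}/images/generations",
        data=json.dumps(build_image_request(prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        # Response shape (per the docs): {"data": [{"url": "..."}], ...}
        return json.load(resp)
```

Displaying the result in the chat would then be a matter of rendering the returned URL(s).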
Alternatives Considered
Solution
Additional Context
No response
@tjbck commented on GitHub (Mar 21, 2025):
They're supported.
@tan-yong-sheng commented on GitHub (Mar 22, 2025):
Hi @tjbck, are there any guides for that? I couldn't find any docs on LiteLLM about this, thanks.
@Classic298 commented on GitHub (Mar 22, 2025):
@tan-yong-sheng Set up the model in LiteLLM, choose the OpenAI endpoint in the image generation settings, point it at your LiteLLM proxy server, and select the image generation model. Done. If LiteLLM has no docs on Grok's specific model, perhaps that's because they don't support it yet.
Open WebUI itself supports any image model as long as it's accessible through one of the supported image generation endpoints. If LiteLLM doesn't support Grok yet, wait a few days or a week; they are usually quick to implement popular models. But that would be a LiteLLM issue, not an Open WebUI one.
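For reference, the LiteLLM proxy config for this setup might look like the sketch below (assuming LiteLLM's `xai/` provider prefix is available; check LiteLLM's docs for the current model name):

```yaml
model_list:
  - model_name: grok-2-image          # name exposed to Open WebUI
    litellm_params:
      model: xai/grok-2-image         # LiteLLM's xAI provider prefix (assumed)
      api_key: os.environ/XAI_API_KEY # read the key from the environment
```

Open WebUI's image generation settings would then point at the proxy's OpenAI-compatible URL (e.g. http://localhost:4000/v1).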
@tan-yong-sheng commented on GitHub (Mar 22, 2025):
Thanks for answering, understood. By the way, there was a typo in my previous message: it should be Open WebUI instead of LiteLLM.
@t0saki commented on GitHub (Mar 22, 2025):
Using grok-2-image via xAI's API directly in OWUI results in the message
The size parameter is not supported at the moment. Leave it empty., but OWUI does not allow leaving the resolution blank. I've created a tool that uses Cloudflare Workers to filter unsupported parameters out of the requests: xAI-Image-Gen-API-Refine
@recklessop commented on GitHub (Apr 17, 2025):
I am seeing the same problem. But why do I want to run yet another thing? Why can't Open WebUI just support not sending a size?
@t0saki commented on GitHub (Apr 17, 2025):
I believe it would be a small fix to let Open WebUI support omitting some parameters, but writing a script and deploying it to Cloudflare was faster for me. It cost me less than an hour, whereas modifying OWUI, compiling it, and getting a PR reviewed would take much longer.
@wzmzw commented on GitHub (Apr 26, 2025):
Hi, after I added the Grok API in OWUI, I can select the grok-2-image-1212 model, but after entering a prompt it shows "OpenAI: Network Problem". Have you solved this problem?
@tan-yong-sheng commented on GitHub (Apr 26, 2025):
Hi, no, in the end I didn't use LiteLLM to serve the image generation endpoint.
I just stuck with this solution instead: https://github.com/open-webui/open-webui/issues/11931#issuecomment-2745417418
@lovepiece commented on GitHub (Apr 27, 2025):
I wrote a Python script that lets Open WebUI use grok-2-image: it listens for requests, modifies them, and forwards them in the format grok-2-image expects.
In fact, I asked an AI to help me write it. It currently works without problems under normal circumstances. You only need to enter the address and port where the Python script is running as the address and port in Open WebUI's image generation settings.
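The script itself was not included in the thread; the sketch below is a hedged reconstruction of the kind of filtering proxy being described, not the poster's actual code. It listens locally, strips parameters the xAI image endpoint rejects (e.g. `size`, per the error message quoted earlier), and forwards the request upstream. The upstream URL, port, and dropped keys are assumptions.

```python
# Rough sketch of a local proxy that strips unsupported parameters
# before forwarding image generation requests to xAI.
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "https://api.x.ai"  # assumed upstream; path is passed through
DROP_KEYS = ("size",)          # parameters xAI's image endpoint rejects


def strip_unsupported(body: bytes, drop=DROP_KEYS) -> bytes:
    """Remove unsupported keys from a JSON request body."""
    payload = json.loads(body)
    for key in drop:
        payload.pop(key, None)
    return json.dumps(payload).encode("utf-8")


class FilterProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = strip_unsupported(self.rfile.read(length))
        upstream = urllib.request.Request(
            UPSTREAM + self.path,  # e.g. /v1/images/generations
            data=body,
            headers={
                # Pass through the API key Open WebUI was configured with.
                "Authorization": self.headers.get("Authorization", ""),
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(upstream) as resp:
            data = resp.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(data)


def main(port: int = 9988) -> None:
    # Point Open WebUI's image generation OpenAI URL at http://<host>:9988/v1
    HTTPServer(("0.0.0.0", port), FilterProxy).serve_forever()
```

With something like this running, Open WebUI's image generation endpoint is set to the proxy's address instead of xAI's.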
@wzmzw commented on GitHub (Apr 27, 2025):
Hello, which option does "the address and port of Open WebUI's AI drawing" refer to? I tried running it on port 9988 and it still shows "OpenAI: Network Problem". There is no problem with my network environment.
@lovepiece commented on GitHub (Apr 27, 2025):
Set it in the OpenAI API field of the image generation settings, for example: http://192.168.10.11:9988/v1. This address is the server where the Python script is running; fill in your own Grok API key as the secret key.
@wzmzw commented on GitHub (Apr 27, 2025):
I have tried changing this option, including the local address and http://localhost:9988/v1 as you said, and the problem still occurs. So how did you set up your Grok API connection? Is it "all endpoints and all models", or just images?
@lovepiece commented on GitHub (Apr 28, 2025):
Please make sure that your Open WebUI instance can reach the address where the Python script is running. If Open WebUI is deployed with Docker, confirm that the container's network can reach that address. My API connection uses all the models.
@wzmzw commented on GitHub (Apr 28, 2025):
Thanks for your help. I also got multiple-image generation working; the following is based on a modification of your code.
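The modified code was not included in the comment; one plausible sketch of the multi-image handling (an assumption, not the poster's code) is to request several images via the endpoint's `n` parameter and then collect every entry from the response's `data` list:

```python
# Hedged sketch: collecting all image URLs from a multi-image response.
def extract_image_urls(response: dict) -> list:
    """Collect every generated image URL from an images/generations response.

    With n > 1 in the request, the `data` list holds one entry per image.
    """
    return [item["url"] for item in response.get("data", []) if "url" in item]
```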