[GH-ISSUE #1459] feat: Image generation api call is missing cfg scale parameter #12504

Closed
opened 2026-04-19 19:26:15 -05:00 by GiteaMirror · 7 comments
Owner

Originally created by @simper on GitHub (Apr 8, 2024).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/1459

Is your feature request related to a problem? Please describe.
Integration with AUTOMATIC1111 and ComfyUI is good, but the settings configuration UI lacks the key parameter "CFG Scale", which impacts the image generation result significantly: with the default Stable Diffusion value of 7, some SDXL Lightning models' output is barely usable.

Describe the solution you'd like
Add a "CFG Scale" parameter to the image generation configuration page and pass it through to the API calls properly.

Describe alternatives you've considered
There is no good way to change the default value on the AUTOMATIC1111 Stable Diffusion side.

Additional context
Further customization of the image generation UI could be considered for the future roadmap, for example a real-time input dialog for the CFG scale value before clicking the generate button in chat view.
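As a sketch of what the request implies: the AUTOMATIC1111 web API's `/sdapi/v1/txt2img` endpoint already accepts `cfg_scale` (and `sampler_name`), so exposing the setting is mainly a matter of plumbing the value into the request body. The helper below is hypothetical, assuming a local A1111 instance; it is not Open WebUI's actual code:

```python
import json
import urllib.request

def build_txt2img_payload(prompt, cfg_scale=7.0, sampler_name="Euler a",
                          width=512, height=512, n=1):
    """Assemble a txt2img request body; cfg_scale is the value this
    issue asks to expose in the settings UI (Lightning/Turbo models
    typically want a low CFG, around 1-2)."""
    return {
        "prompt": prompt,
        "batch_size": n,
        "cfg_scale": cfg_scale,
        "sampler_name": sampler_name,
        "width": width,
        "height": height,
    }

def generate(base_url, payload):
    """POST the payload to an AUTOMATIC1111 instance (sketch, no error
    handling; base_url is e.g. http://127.0.0.1:7860)."""
    req = urllib.request.Request(
        f"{base_url}/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

This would also cover the sampler request in the comments below, since the same payload carries `sampler_name`.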


@silentoplayz commented on GitHub (Apr 8, 2024):

Being able to set the sampler used would be really beneficial for image generation as well!

https://stable-diffusion-art.com/samplers/


@Xerophayze commented on GitHub (Apr 22, 2024):

I'm going to one-up this as well. I've got this integrated and I love that I can generate images inline. But I primarily use Lightning models, which is great because they render quickly, but I have no control over the sampler or the CFG setting, so the images tend to look low-res and/or come out messed up. Is this something that can just be adjusted within the code itself?


@Xerophayze commented on GitHub (Apr 22, 2024):

Hey one more thing, is there a way to add a switch like /imagine to engage the image generation without having to have a prompt there in the first place?


@spammenotinoz commented on GitHub (May 14, 2024):

> I'm going to one-up this as well. I've got this integrated and I love that I can generate images inline. But I primarily use Lightning models, which is great because they render quickly, but I have no control over the sampler or the CFG setting, so the images tend to look low-res and/or come out messed up. Is this something that can just be adjusted within the code itself?

Yes, update `backend/apps/images/main.py`.
Make sure you find the correct section; I would quote line numbers, but my code is modified to support API authentication, so my numbering is out of whack.
You can of course use environment variables, but I have just hard-coded the values for now.
Model and size are already environment variables; I added CFG, negative prompt, and save, but you can add anything you want here.


    else:
        if form_data.model:
            set_model_handler(form_data.model)

        data = {
            "prompt": form_data.prompt,
            "batch_size": form_data.n,
            "cfg_scale": 4,  # hard-coded; Lightning models want a low CFG
            "negative_prompt": "text, logo, ugly, soft, blurry, out of focus, low quality, garish, distorted, disfigured",
            "save_images": "true",
            "width": width,
            "height": height,
        }
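If hard-coding is undesirable, the same extra fields can be read from environment variables with fallbacks to the values above. A minimal sketch; the `IMAGES_*` variable names here are made up for illustration, not existing Open WebUI settings:

```python
import os

def image_extra_params():
    """Read optional image-generation overrides from the environment,
    falling back to the hard-coded values from the snippet above
    (variable names are illustrative)."""
    return {
        "cfg_scale": float(os.getenv("IMAGES_CFG_SCALE", "4")),
        "negative_prompt": os.getenv(
            "IMAGES_NEGATIVE_PROMPT",
            "text, logo, ugly, soft, blurry, out of focus, low quality",
        ),
        "save_images": os.getenv("IMAGES_SAVE", "true").lower() == "true",
    }
```

The returned dict can then be merged into `data` with `data.update(image_extra_params())`.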

@silentoplayz commented on GitHub (Jun 3, 2024):

Related #1958


@silentoplayz commented on GitHub (Jun 3, 2024):

Related #1714


@silentoplayz commented on GitHub (Sep 15, 2024):

Related - https://github.com/open-webui/open-webui/pull/5256

Reference: github-starred/open-webui#12504