[GH-ISSUE #4328] ComfyUI Integration #52235

Closed
opened 2026-05-05 13:23:09 -05:00 by GiteaMirror · 30 comments

Originally created by @pkeffect on GitHub (Aug 3, 2024).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/4328

Bug Report

Description

The ComfyUI integration in Open WebUI appears to have broken with the latest Flux inclusion. Any model chosen, including but not limited to Flux models, produces this error:

Bug Summary:
Something went wrong :/ 1 validation error for ImageGenerationPayload flux_fp8_clip Input should be a valid boolean, unable to interpret input [type=bool_parsing, input_value='', input_type=str] For further information visit https://errors.pydantic.dev/2.8/v/bool_parsing

Steps to Reproduce:
Make sure your ComfyUI endpoint is set correctly. Choose any model and try to generate an image.

Expected Behavior:
Should be generating an image whether a Flux model is used or not.

Actual Behavior:
The same error is thrown no matter which model is chosen, as noted above.

Environment

  • Open WebUI Version: v0.3.11

  • Ollama (if applicable): v0.3.3

  • Operating System: [e.g., Windows 10, macOS Big Sur, Ubuntu 20.04]

  • Browser (if applicable): [e.g., Chrome 100.0, Firefox 98.0]

Reproduction Details

Confirmation:

  • [x] I have read and followed all the instructions provided in the README.md.
  • [x] I am on the latest version of both Open WebUI and Ollama.
  • [ ] I have included the browser console logs.
  • [x] I have included the Docker container logs.

Logs and Screenshots

Browser Console Logs:
[Include relevant browser console logs, if applicable]

Docker Container Logs:
[Include relevant Docker container logs, if applicable]
INFO: 172.30.0.1:58950 - "POST /images/api/v1/generations HTTP/1.1" 400 Bad Request
Screenshots (if applicable):
[Attach any relevant screenshots to help illustrate the issue]

Installation Method

[Describe the method you used to install the project, e.g., manual installation, Docker, package manager, etc.]

Additional Information

[Include any additional details that may help in understanding and reproducing the issue. This could include specific configurations, error messages, or anything else relevant to the bug.]

Note

If the bug report is incomplete or does not follow the provided instructions, it may not be addressed. Please ensure that you have followed the steps outlined in the README.md and troubleshooting.md documents, and provide all necessary information for us to reproduce and address the issue. Thank you!


@neotherack commented on GitHub (Aug 3, 2024):

Having the same thing, just updated.
I'm running non-docker version with CUDA enabled.

It was working on previous versions.

![image](https://github.com/user-attachments/assets/24c3caa4-2610-40b6-be16-d2be8a2a02f4)


@silentoplayz commented on GitHub (Aug 3, 2024):

I also get the `Something went wrong :/ 1 validation error for ImageGenerationPayload flux_fp8_clip Input should be a valid boolean, unable to interpret input [type=bool_parsing, input_value='', input_type=str] For further information visit https://errors.pydantic.dev/2.8/v/bool_parsing` error, even when the safetensors file selected within Open WebUI v0.3.11 is an SD model.

ComfyUI integration has been tested in Open WebUI v0.3.10 as well and it works great with a Stable Diffusion safetensors file selected.


@tjbck commented on GitHub (Aug 3, 2024):

@JohnTheNerd


@JohnTheNerd commented on GitHub (Aug 3, 2024):

interesting - I personally use flux (and I'm running the same code from my PR right now). I'll take a look when I get to my computer.


@JohnTheNerd commented on GitHub (Aug 3, 2024):

I see the issue that causes what's happening to OP. I accidentally defined `COMFYUI_FLUX_FP8_CLIP` as a string instead of a boolean in `config.py`, which upsets Pydantic when the variable is not set and is therefore an empty string. I'll create a PR to fix it, but a workaround until the real fix arrives is to simply set `COMFYUI_FLUX_FP8_CLIP` to "true" or "false", regardless of whether you use Flux. I am unable to test fp16 Flux as I lack the hardware; I can, however, make sure other models are not affected and that the request is at least sent with or without the lower-precision environment variable set.

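The mismatch described above can be sketched in pure Python (this is not the actual Open WebUI code; `env_bool` is a hypothetical helper): an unset environment variable arrives as an empty string, which a strict boolean field rejects, so it has to be coerced before it reaches the Pydantic payload model.

```python
import os

def env_bool(name: str, default: bool = False) -> bool:
    """Coerce an environment variable to a boolean.

    An unset or empty variable falls back to the default instead of
    being passed through as '' (which a strict bool field rejects
    with a bool_parsing error).
    """
    raw = os.environ.get(name, "").strip()
    if raw == "":
        return default
    return raw.lower() in ("1", "true", "yes", "on")

# Unset variable: returns the default rather than an empty string.
os.environ.pop("COMFYUI_FLUX_FP8_CLIP", None)
assert env_bool("COMFYUI_FLUX_FP8_CLIP") is False

os.environ["COMFYUI_FLUX_FP8_CLIP"] = "true"
assert env_bool("COMFYUI_FLUX_FP8_CLIP") is True
```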
However, I have no idea why Flux itself isn't working - it works just fine on my setup. I tested with this environment:

      - 'DEFAULT_MODELS=llama-3.1-70b'
      - 'DEFAULT_USER_ROLE=user'
      - 'OPENAI_API_BASE_URLS=http://10.1.3.14:5000/v1'
      - 'OPENAI_API_KEYS=redacted'
      - 'ENABLE_OLLAMA_API=false'
      - 'PDF_EXTRACT_IMAGES=true'
      - 'RAG_EMBEDDING_ENGINE=openai'
      - 'RAG_OPENAI_API_BASE_URL=https://litellm.internal.johnthenerd.com/v1'
      - 'RAG_EMBEDDING_MODEL=mxbai-embed-large'
      - 'ENABLE_RAG_WEB_SEARCH=true'
      - 'RAG_WEB_SEARCH_ENGINE=duckduckgo'
      - 'WHISPER_MODEL_AUTO_UPDATE=true'
      - 'ENABLE_IMAGE_GENERATION=true'
      - 'IMAGE_GENERATION_ENGINE=comfyui'
      - 'COMFYUI_BASE_URL=http://10.1.3.13:8188'
      - 'IMAGE_SIZE=1024x1024'
      - 'IMAGE_GENERATION_MODEL=flux1-dev.sft'
      - 'COMFYUI_SCHEDULER=sgm_uniform'
      - 'COMFYUI_CFG_SCALE=5.5'
      - 'COMFYUI_SD3=false'
      - 'COMFYUI_FLUX=true'
      - 'COMFYUI_FLUX_WEIGHT_DTYPE=fp8_e4m3fn'
      - 'COMFYUI_FLUX_FP8_CLIP=true'
      - 'IMAGE_STEPS=20'
      - 'WEBUI_AUTH=true'
      - 'WEBUI_AUTH_TRUSTED_EMAIL_HEADER=X-authentik-email'
      - 'WEBUI_AUTH_TRUSTED_NAME_HEADER=X-authentik-name'
      - 'AUDIO_TTS_OPENAI_API_BASE_URL=http://open-webui_tts:8000/v1'
      - 'AUDIO_TTS_ENGINE=openai'
      - 'AUDIO_STT_ENGINE=openai'
      - 'AUDIO_STT_OPENAI_API_BASE_URL=http://10.1.3.16:8000/v1'
      - 'AUDIO_STT_MODEL=Systran/faster-whisper-large-v3'

@silentoplayz commented on GitHub (Aug 3, 2024):

Thank you very much for sharing the environment variables used for your Open WebUI instance, @JohnTheNerd.
I have set `COMFYUI_SD3` to `false` and added these 3 environment variables to my .env file.

COMFYUI_FLUX="true"
COMFYUI_FLUX_WEIGHT_DTYPE="fp8_e4m3fn"
COMFYUI_FLUX_FP8_CLIP="true"

I have also updated the `COMFYUI_SAMPLER` environment variable in my .env file from `dpmpp_2m` to `euler`, because that's what the example workflows use for the new Flux model.

After doing this, restarting my Open WebUI instance, and confirming my ComfyUI connection works within Open WebUI, I now get this error when trying to generate an image with this setup:

Something went wrong :/ 'NoneType' object is not subscriptable

![image](https://github.com/user-attachments/assets/b85f11d7-00fe-494e-9f08-6f18c5114428)

![image](https://github.com/user-attachments/assets/0f2f2f8e-77fa-494e-a026-80288bce8385)

I get this error within the running ComfyUI terminal console as well:

Failed to validate prompt for output 9:
* VAELoader 10:
  - Value not in list: vae_name: 'ae.sft' not in ['FLUX1\\ae.safetensors', 'ae.safetensors']
Output will be ignored
invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}

@JohnTheNerd commented on GitHub (Aug 3, 2024):

I'm very confused - where did you get the safetensors file for Flux? The only way I know of running it involves putting the sft file in the unet folder of ComfyUI, and that's what [the example workflow](https://comfyanonymous.github.io/ComfyUI_examples/flux/) seems to use.

I looked at [the official HF repo](https://huggingface.co/black-forest-labs/FLUX.1-dev) when it first released, and I only saw the sft file. It does seem that they added safetensors files shortly after my PR was merged, but I have no idea how to use that. @silentoplayz can you link to the workflow you used with the safetensors?

BTW I have a fix pushed at ghcr.io/johnthenerd/open-webui:comfyui-fix - but it's the x86 version without any CUDA or Ollama (there are some GitHub Actions issues I'm having; it seems to not like pushing the package anymore, so I just built and pushed from my laptop). Could you test it and let me know? If you're uncomfortable running the image I built, you can [clone my fork](https://github.com/JohnTheNerd/open-webui/) and check out the `comfyui-fix` branch, which has the fix. It's [a very small change](https://github.com/open-webui/open-webui/compare/dev..JohnTheNerd:open-webui:comfyui-fix?expand=1).


@silentoplayz commented on GitHub (Aug 3, 2024):

> I'm very confused - where did you get the safetensors file for Flux? The only way I know of running it involves putting the sft file in the unet folder of ComfyUI, and that's what the example workflow seems to use.

`.sft` = `.safetensors`, abbreviated. I've simply renamed the model file to the `.safetensors` extension I'm used to seeing. The model works fine this way in the latest version of ComfyUI, along with the example workflows provided. I have just tested the `.sft` extension for the model file and it does not work, throwing the same error within Open WebUI as well.


@JohnTheNerd commented on GitHub (Aug 3, 2024):

The fixed Docker image should only fix the case where the environment variable is not set.

The example workflow I linked seems to expect the model file to be in the unet directory (I'm really unsure why, and it's quite annoying to work with, so I'd love to fix that). The VAE is also expected to be called ae.sft in the vae directory, alongside the text encoders as mentioned in the same page.


@silentoplayz commented on GitHub (Aug 3, 2024):

> The fixed Docker image should only fix the case where the environment variable is not set.
>
> The example workflow I linked seems to expect the model file to be in the unet directory (I'm really unsure why, and it's quite annoying to work with, so I'd love to fix that). The VAE is also expected to be called ae.sft in the vae directory, alongside the text encoders as mentioned in the same page.

Admittedly, while trying to get this all figured out on your Docker build, I decided to switch everything back to the .sft extension. This includes the model itself in both the checkpoints and unet folders, and the VAE. Generating an image works successfully now within Open WebUI via ComfyUI integration with the Flux model!

![image](https://github.com/user-attachments/assets/3c61e47e-a721-40cc-91f1-d2b62abca393)
![image](https://github.com/user-attachments/assets/5317a811-9e5e-4f7a-ac20-11c7a8710d4e)


@JohnTheNerd commented on GitHub (Aug 3, 2024):

Great! In that case, I will first create a PR that closes this issue, then create a PR in the documentation indicating the exact filenames that must be used.

Do you know if I can somehow get it to load from the checkpoints directory instead of unet? It's otherwise annoying as it cannot be found through the UI anymore, and must be loaded in manually via the environment.


@silentoplayz commented on GitHub (Aug 3, 2024):

> Do you know if I can somehow get it to load from the checkpoints directory instead of unet? It's otherwise annoying as it cannot be found through the UI anymore, and must be loaded in manually via the environment.

Instructions for creating a symbolic link:

mklink <link_name> <target_path>

In the given example:

  • Link name: X:\ComfyUI-Zluda\ComfyUI-Zluda\models\unet\flux1-schnell-fp8.sft
  • Target path: X:\ComfyUI-Zluda\ComfyUI-Zluda\models\checkpoints\flux1-schnell-fp8.sft

Command to create the symbolic link:

mklink X:\ComfyUI-Zluda\ComfyUI-Zluda\models\unet\flux1-schnell-fp8.sft X:\ComfyUI-Zluda\ComfyUI-Zluda\models\checkpoints\flux1-schnell-fp8.sft

Note:

  • The target path must be an existing file or directory.
  • The link name must not already exist.
  • The user creating the link must have appropriate permissions to access the target path.

Or you could just have a copy of the model in the checkpoints directory like I have done.
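For reference, the same linking can be done cross-platform from Python with `os.symlink`. The paths below are illustrative stand-ins under a temp directory, not a real ComfyUI install:

```python
import os
import tempfile

# Illustrative layout: the model file lives in models/checkpoints/ and
# is exposed in models/unet/ via a symlink, so both loaders can find it.
root = tempfile.mkdtemp()
checkpoints = os.path.join(root, "models", "checkpoints")
unet = os.path.join(root, "models", "unet")
os.makedirs(checkpoints)
os.makedirs(unet)

target = os.path.join(checkpoints, "flux1-schnell-fp8.sft")
open(target, "wb").close()  # empty stand-in for the real model file

link = os.path.join(unet, "flux1-schnell-fp8.sft")
os.symlink(target, link)  # on Windows this may require elevated privileges

assert os.path.islink(link) and os.path.exists(link)
```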


@JohnTheNerd commented on GitHub (Aug 3, 2024):

Sounds good - I will document it accordingly.


@pkeffect commented on GitHub (Aug 3, 2024):

Thank you both.


@JohnTheNerd commented on GitHub (Aug 3, 2024):

Documentation changes are at https://github.com/open-webui/docs/pull/170 - do you have any other suggestions? @silentoplayz


@silentoplayz commented on GitHub (Aug 3, 2024):

> Documentation changes are at [open-webui/docs#170](https://github.com/open-webui/docs/pull/170) - do you have any other suggestions? @silentoplayz

LGTM!


@ther3zz commented on GitHub (Aug 4, 2024):

@JohnTheNerd Do the files specifically have to be directly in the unet and vae directories?
For example, will it still work if they're in models/checkpoints/FLUX1/flux1-schnell.sft?

I seem to be getting the same error that @silentoplayz was receiving, but I've implemented your [change from here](https://github.com/open-webui/open-webui/compare/dev..JohnTheNerd:open-webui:comfyui-fix?expand=1) and also ran
`ln -s ComfyUI/models/unet/FLUX1/flux1-schnell.sft ComfyUI/models/checkpoints/FLUX1/flux1-schnell.sft`

I think the only difference is that I have the models in their own FLUX1 directory within the checkpoints/unet/vae dirs.

Error from comfyui side:

got prompt
Failed to validate prompt for output 9:
* DualCLIPLoader 11:
  - Value not in list: clip_name1: 'clip_l.safetensors' not in ['FLUX1/clip_l.safetensors', 'FLUX1/t5xxl_fp16.safetensors']
  - Value not in list: clip_name2: 't5xxl_fp16.safetensors' not in ['FLUX1/clip_l.safetensors', 'FLUX1/t5xxl_fp16.safetensors']
* VAELoader 10:
  - Value not in list: vae_name: 'ae.sft' not in ['FLUX1/ae.sft']
Output will be ignored
invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}


@JohnTheNerd commented on GitHub (Aug 4, 2024):

@ther3zz yes, the exact paths given in the config PR above are exactly where the files must reside; otherwise you will get errors.


@ther3zz commented on GitHub (Aug 4, 2024):

> @ther3zz yes, the exact paths given in the config PR above are exactly where the files must reside; otherwise you will get errors.

thanks for confirming, did softlinks there and it works now!!


@tjbck commented on GitHub (Aug 4, 2024):

Fix merged to dev, testing wanted here!


@spammenotinoz commented on GitHub (Aug 4, 2024):


> • The target path must be an existing file or directory.
> • The link name must not already exist.
> • The user creating the link must have appropriate permissions to access the target path.
>
> Or you could just have a copy of the model in the checkpoints directory like I have done.

THANK YOU, this was the missing step. I had FLUX working in ComfyUI but Open WebUI could not find the models. The symbolic link fixed that.

Without swearing, shikes this model is better than Midjourney!!


@brunocerq commented on GitHub (Aug 6, 2024):

Using ComfyUI with SD3, I get the same error in Open WebUI v0.3.11.


@donghyeon commented on GitHub (Aug 8, 2024):

I believe the PR related to this issue should align with the official ComfyUI guidelines.

I followed the official examples provided by ComfyUI: https://comfyanonymous.github.io/ComfyUI_examples/flux/. According to their guide, the files should be organized as follows:

ComfyUI
└── models
    ├── checkpoints
    │   ├── flux1-dev-fp8.safetensors
    │   └── flux1-schnell-fp8.safetensors
    ├── clip
    │   ├── clip_l.safetensors
    │   ├── t5xxl_fp8_e4m3fn.safetensors
    │   └── t5xxl_fp16.safetensors
    ├── unet
    │   ├── flux1-dev.safetensors
    │   └── flux1-schnell.safetensors
    └── vae
        └── ae.safetensors

However, the current version of Open WebUI requires the following structure:

ComfyUI
└── models
    ├── checkpoints
    │   ├── flux1-dev-fp8.safetensors
    │   └── flux1-schnell-fp8.safetensors
    ├── clip
    │   ├── clip_l.safetensors
    │   ├── t5xxl_fp8_e4m3fn.safetensors
    │   └── t5xxl_fp16.safetensors
    ├── unet
    │   ├── **flux1-dev-fp8.safetensors**
    │   └── **flux1-schnell-fp8.safetensors**
    └── vae
        └── **ae.sft**

In the models/unet directory, the same checkpoint files from models/checkpoints must be included, and the .safetensors extension needs to be changed to .sft in the models/vae directory. This inconsistency can be confusing for users.

Currently, I was able to generate high-quality images by rearranging the files as shown above and starting Open WebUI with the environment variable COMFYUI_FLUX=True. If you are using Docker to start the Open WebUI server, the command should be:
docker run -d -p 3000:8080 -e COMFYUI_FLUX=True --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main


@brunocerq commented on GitHub (Aug 8, 2024):

Agree with @donghyeon. I too managed to get it working by changing the file extensions from .safetensors to .sft and editing the .env file.
I think the current approach, although it works, is not optimal: users must change some file extensions, which is just confusing. It would be better to use the standard structure and extensions given by Flux.
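The workaround both comments describe can be scripted without destroying ComfyUI's standard layout by symlinking instead of renaming. This is only a sketch: it assumes the directory structure from the ComfyUI Flux examples page (`models/checkpoints`, `models/unet`, `models/vae/ae.safetensors`) and the file names Open WebUI reportedly expects:

```python
from pathlib import Path


def link_flux_files(models_dir: str) -> None:
    """Expose Flux files under the names Open WebUI expects, via symlinks.

    Sketch of the workaround described above: the fp8 checkpoints must
    also appear under models/unet, and the VAE must be reachable as
    ae.sft, without renaming the originals used by ComfyUI itself.
    """
    models = Path(models_dir)
    unet = models / "unet"
    unet.mkdir(parents=True, exist_ok=True)

    # Mirror the fp8 checkpoint files into models/unet.
    for ckpt in (models / "checkpoints").glob("flux1-*-fp8.safetensors"):
        target = unet / ckpt.name
        if not target.exists():
            target.symlink_to(ckpt)

    # Open WebUI looks for ae.sft rather than ae.safetensors.
    vae = models / "vae" / "ae.safetensors"
    sft = models / "vae" / "ae.sft"
    if vae.exists() and not sft.exists():
        sft.symlink_to(vae)
```

Symlinks avoid duplicating multi-gigabyte model files and leave the standard ComfyUI layout untouched.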


@tjbck commented on GitHub (Aug 8, 2024):

PR welcome!


@Taluen79 commented on GitHub (Aug 8, 2024):

Why not just let the Open WebUI admin input a ComfyUI workflow JSON instead of all these env variables scattered all over the place? A ComfyUI workflow can be saved in an API format, which can then be loaded and reused when making calls to a remote ComfyUI endpoint.

This would enable anyone hosting Open WebUI to use the different workflows they have developed, instead of everyone using the exact same vanilla workflow.
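The suggestion is feasible because a workflow exported via ComfyUI's "Save (API Format)" option is plain JSON keyed by node id, and the server queues work via an HTTP `POST /prompt`. A minimal sketch, with the caveat that the node id `"6"` holding the positive prompt is an assumption (it differs per workflow, so it would itself have to be an admin setting):

```python
import json
import urllib.request


def patch_prompt(workflow: dict, prompt_text: str, node_id: str = "6") -> dict:
    """Return a copy of an API-format workflow with its text node replaced.

    node_id "6" is a per-workflow assumption; an admin would need to
    identify which node carries the positive prompt in their export.
    """
    patched = json.loads(json.dumps(workflow))  # deep copy via JSON round-trip
    patched[node_id]["inputs"]["text"] = prompt_text
    return patched


def queue_workflow(workflow: dict,
                   endpoint: str = "http://127.0.0.1:8188/prompt") -> dict:
    """POST the patched workflow to ComfyUI's /prompt endpoint."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps({"prompt": workflow}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Because the workflow stays opaque JSON, any graph (including ones using LoRAs or custom nodes) would work without the server knowing its internals.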


@JohnTheNerd commented on GitHub (Aug 9, 2024):

Completely agree with @Taluen79, and I would like to implement that. However, I don't know anything about the front-end side of Open WebUI and would prefer not to learn it at this time, which is why all of my image-generation-related changes have only been configurable via the environment. Notably, this would also enable the use of LoRAs, which is not possible in the current implementation.

Any ideas on how I could do this without touching the front-end code? Or would anyone be able to implement the front-end if I wrote the back-end piece?

Another point of consideration: unless we keep and maintain the existing implementation, custom workflows are a breaking change for all current users. Not sure how to handle this.


@JohnTheNerd commented on GitHub (Aug 9, 2024):

One thing that comes to mind is yet another environment variable that lets the user point at a local JSON file containing the workflow. However, since each workflow is unique, all other environment variables and configuration options related to image generation would need to be ignored whenever a custom workflow is provided.
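A minimal sketch of that idea, assuming a hypothetical `COMFYUI_WORKFLOW` variable name: when the variable is set, the file it points at wins outright, and the built-in template (with its related settings) is ignored, as the comment proposes:

```python
import json
import os


def load_workflow(default_workflow: dict,
                  env_var: str = "COMFYUI_WORKFLOW") -> dict:
    """Return the admin's custom workflow if the env var points at one.

    COMFYUI_WORKFLOW is a hypothetical variable name used for
    illustration. When set, it takes full precedence: a custom
    workflow is opaque to the server, so the other image-generation
    options cannot meaningfully be applied to it.
    """
    path = os.environ.get(env_var, "").strip()
    if not path:
        return default_workflow
    with open(path, encoding="utf-8") as f:
        return json.load(f)
```

An empty or unset variable falls back to the current behavior, which would keep the change non-breaking for existing users.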


@Gee1111 commented on GitHub (Sep 3, 2024):

I'm using the GGUF version and got this error: "Something went wrong :/ Expecting value: line 1 column 1 (char 0)"
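"Expecting value: line 1 column 1 (char 0)" is the message Python's `json` module raises when asked to parse an empty or non-JSON body (for example, an HTML error page returned by ComfyUI), so the failure here is in parsing the server's reply, not in image generation itself. For anyone debugging a similar setup, a defensive parse along these lines surfaces the real server response instead:

```python
import json


def parse_comfyui_response(status: int, body: bytes) -> dict:
    """Parse a ComfyUI HTTP reply, surfacing non-JSON bodies readably.

    json.loads("") raises "Expecting value: line 1 column 1 (char 0)",
    which is exactly the error quoted above; including the raw body in
    the exception makes the underlying server error visible.
    """
    text = body.decode("utf-8", errors="replace")
    try:
        return json.loads(text)
    except json.JSONDecodeError as e:
        raise RuntimeError(
            f"ComfyUI returned non-JSON (HTTP {status}): {text[:200]!r}"
        ) from e
```

In practice this usually means the endpoint URL is wrong, a proxy intercepted the request, or ComfyUI itself errored before producing JSON.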


@cinprens commented on GitHub (Oct 8, 2024):

![Screenshot 2024-10-08 222223](https://github.com/user-attachments/assets/85626904-6e04-4930-9af3-3ae44e9ba30a)
I don't understand why it gives me this error. ComfyUI itself works properly, but when I try to integrate it with Open WebUI, it doesn't work.

Reference: github-starred/open-webui#52235