[GH-ISSUE #20183] issue: unable to disable chat/completions for image generation models (using litellm) #34643

Closed
opened 2026-04-25 08:43:35 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @dhohengassner on GitHub (Dec 26, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/20183

Check Existing Issues

  • I have searched for any existing and/or related issues.
  • I have searched for any existing and/or related discussions.
  • I have also searched in the CLOSED issues AND CLOSED discussions and found no related items (your issue might already be addressed on the development branch!).
  • I am using the latest version of Open WebUI.

Installation Method

Git Clone

Open WebUI Version

ghcr.io/open-webui/open-webui:0.6.43

Ollama Version (if applicable)

No response

Operating System

AL2023

Browser (if applicable)

No response

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
  • Start with the initial platform/version/OS and dependencies used,
  • Specify exact install/launch/configure commands,
  • List URLs visited, user input (incl. example values/emails/passwords if needed),
  • Describe all options and toggles enabled or changed,
  • Include any files or environmental changes,
  • Identify the expected and actual result at each stage,
  • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

LiteLLM image generation models (Azure AI) should be usable without chat completion. https://docs.litellm.ai/docs/providers/azure_ai_img
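For illustration, a minimal sketch of the expected flow against a LiteLLM proxy using the OpenAI-compatible client (the proxy URL and API key are placeholders; the model alias `azure_ai/flux-1.1-pro` is taken from the logs below):

```python
# Minimal sketch: only the image generation endpoint of a LiteLLM proxy is
# called via the OpenAI-compatible client; no chat completion is involved.
# The base_url, api_key, prompt, and size below are placeholders/assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000/v1",  # assumed LiteLLM proxy address
    api_key="sk-litellm-placeholder",
)

image = client.images.generate(
    model="azure_ai/flux-1.1-pro",
    prompt="a lighthouse at dusk",
    n=1,
    size="1024x1024",
)
print(image.data[0])
```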

Actual Behavior

We currently use Black Forest Labs models through LiteLLM, which only support the image generation API:
https://github.com/BerriAI/litellm/blob/78b5d40664aa21b1b493d889cce575d5d30f257b/model_prices_and_context_window.json#L176

The image generation itself works fine, but afterwards a call is made to the OpenAI chat completions endpoint:

httpx.HTTPStatusError: Client error '404 Not Found' for url 'https://xxx.openai.azure.com/chat/completions'
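The 404 can be reproduced outside Open WebUI by sending any chat completion to the same model alias (sketch only, with the same placeholder proxy settings as above):

```python
# Sketch: any chat completion against the image-only model alias fails with
# 404, since the Flux deployment exposes no /chat/completions route.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000/v1", api_key="sk-litellm-placeholder")

try:
    client.chat.completions.create(
        model="azure_ai/flux-1.1-pro",
        messages=[{"role": "user", "content": "hello"}],
    )
except Exception as exc:  # expected: NotFoundError / 404 from the Azure backend
    print(exc)
```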

I tried to disable all chat completion options in the user and admin settings, as described in:
https://github.com/open-webui/open-webui/issues/11522
https://github.com/open-webui/open-webui/issues/7703

I still get this failure after image generation with these models. Any help or hint is appreciated.

Steps to Reproduce

Use litellm with image generation models: https://docs.litellm.ai/docs/providers/azure_ai_img
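To isolate the provider behavior from Open WebUI, the same pair of calls can be made with the litellm SDK directly (a sketch; the api_base and api_key are placeholders for the Azure AI deployment, mirroring the endpoint in the logs):

```python
# Sketch: image generation succeeds, but a chat completion against the same
# image-only model raises a 404 / NotFoundError. Endpoint and key are placeholders.
import litellm

litellm.image_generation(
    model="azure_ai/flux-1.1-pro",
    prompt="a lighthouse at dusk",
    api_base="https://xxx.openai.azure.com",  # placeholder endpoint
    api_key="azure-ai-key-placeholder",
)

# This is the call that fails with 404 / litellm.NotFoundError.
litellm.completion(
    model="azure_ai/flux-1.1-pro",
    messages=[{"role": "user", "content": "hello"}],
    api_base="https://xxx.openai.azure.com",
    api_key="azure-ai-key-placeholder",
)
```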

Logs & Screenshots

[Screenshot: https://github.com/user-attachments/assets/3c05eb44-dc3e-41f7-8ae5-b86bd559209f]

{
"message": "litellm.proxy.proxy_server._handle_llm_api_exception(): Exception occured - litellm.NotFoundError: NotFoundError: Azure_aiException - {"error":{"code":"404","message": "Resource not found"}}. Received Model Group=azure_ai/flux-1.1-pro\nAvailable Model Group Fallbacks=None",
"level": "ERROR",
"timestamp": "2025-12-26T11:27:22.443420",
"stacktrace": "Traceback (most recent call last):\n File "/usr/lib/python3.13/site-packages/litellm/llms/custom_httpx/llm_http_handler.py", line 156, in _make_common_async_call\n response = await async_httpx_client.post(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n ...<10 lines>...\n )\n ^\n File "/usr/lib/python3.13/site-packages/litellm/litellm_core_utils/logging_utils.py", line 190, in async_wrapper\n result = await func(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/usr/lib/python3.13/site-packages/litellm/llms/custom_httpx/http_handler.py", line 450, in post\n raise e\n File "/usr/lib/python3.13/site-packages/litellm/llms/custom_httpx/http_handler.py", line 406, in post\n response.raise_for_status()\n ~~~~~~~~~~~~~~~~~~~~~~~~~^^\n File "/usr/lib/python3.13/site-packages/httpx/_models.py", line 829, in raise_for_status\n raise HTTPStatusError(message, request=request, response=self)\nhttpx.HTTPStatusError: Client error '404 Not Found' for url 'https://xxx.openai.azure.com/chat/completions'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File "/usr/lib/python3.13/site-packages/litellm/main.py", line 604, in acompletion\n response = await init_response\n ^^^^^^^^^^^^^^^^^^^\n File "/usr/lib/python3.13/site-packages/litellm/llms/custom_httpx/llm_http_handler.py", line 653, in acompletion_stream_function\n completion_stream, _response_headers = await self.make_async_call_stream_helper(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n ...<15 lines>...\n )\n ^\n File "/usr/lib/python3.13/site-packages/litellm/llms/custom_httpx/llm_http_handler.py", line 712, in make_async_call_stream_helper\n response = await self._make_common_async_call(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n ...<10 lines>...\n )\n ^\n File "/usr/lib/python3.13/site-packages/litellm/llms/custom_httpx/llm_http_handler.py", line 181, in _make_common_async_call\n raise self._handle_error(e=e, provider_config=provider_config)\n ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/usr/lib/python3.13/site-packages/litellm/llms/custom_httpx/llm_http_handler.py", line 3601, in _handle_error\n raise provider_config.get_error_class(\n ...<3 lines>...\n )\nlitellm.llms.openai.common_utils.OpenAIError: {"error":{"code":"404","message": "Resource not found"}}\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File "/usr/lib/python3.13/site-packages/litellm/proxy/proxy_server.py", line 4915, in chat_completion\n result = await base_llm_response_processor.base_process_llm_request(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n ...<16 lines>...\n )\n ^\n File "/usr/lib/python3.13/site-packages/litellm/proxy/common_request_processing.py", line 539, in base_process_llm_request\n responses = await llm_responses\n ^^^^^^^^^^^^^^^^^^^\n File "/usr/lib/python3.13/site-packages/litellm/router.py", line 1293, in acompletion\n raise e\n File "/usr/lib/python3.13/site-packages/litellm/router.py", line 1269, in acompletion\n response = await self.async_function_with_fallbacks(**kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/usr/lib/python3.13/site-packages/litellm/router.py", line 4297, in async_function_with_fallbacks\n return await self.async_function_with_fallbacks_common_utils(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n ...<8 lines>...\n )\n ^\n File 
"/usr/lib/python3.13/site-packages/litellm/router.py", line 4255, in async_function_with_fallbacks_common_utils\n raise original_exception\n File "/usr/lib/python3.13/site-packages/litellm/router.py", line 4289, in async_function_with_fallbacks\n response = await self.async_function_with_retries(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/usr/lib/python3.13/site-packages/litellm/router.py", line 4411, in async_function_with_retries\n self.should_retry_this_error(\n ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^\n error=e,\n ^^^^^^^^\n ...<4 lines>...\n content_policy_fallbacks=content_policy_fallbacks,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n )\n ^\n File "/usr/lib/python3.13/site-packages/litellm/router.py", line 4587, in should_retry_this_error\n raise error\n File "/usr/lib/python3.13/site-packages/litellm/router.py", line 4385, in async_function_with_retries\n response = await self.make_call(original_function, *args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/usr/lib/python3.13/site-packages/litellm/router.py", line 4505, in make_call\n response = await response\n ^^^^^^^^^^^^^^\n File "/usr/lib/python3.13/site-packages/litellm/router.py", line 1575, in _acompletion\n raise e\n File "/usr/lib/python3.13/site-packages/litellm/router.py", line 1527, in _acompletion\n response = await _response\n ^^^^^^^^^^^^^^^\n File "/usr/lib/python3.13/site-packages/litellm/utils.py", line 1643, in wrapper_async\n raise e\n File "/usr/lib/python3.13/site-packages/litellm/utils.py", line 1489, in wrapper_async\n result = await original_function(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/usr/lib/python3.13/site-packages/litellm/main.py", line 623, in acompletion\n raise exception_type(\n ~~~~~~~~~~~~~~^\n model=model,\n ^^^^^^^^^^^^\n ...<3 lines>...\n extra_kwargs=kwargs,\n ^^^^^^^^^^^^^^^^^^^^\n )\n ^\n File "/usr/lib/python3.13/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2329, in exception_type\n raise e\n File "/usr/lib/python3.13/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 492, in exception_type\n raise NotFoundError(\n ...<5 lines>...\n )\nlitellm.exceptions.NotFoundError: litellm.NotFoundError: NotFoundError: Azure_aiException - {"error":{"code":"404","message": "Resource not found"}}. Received Model Group=azure_ai/flux-1.1-pro\nAvailable Model Group Fallbacks=None"
}

Additional Information

I am aware that the error comes from LiteLLM, but it is expected there: the cause is that Flux models do not support chat endpoints.

GiteaMirror added the bug label 2026-04-25 08:43:35 -05:00

@owui-terminator[bot] commented on GitHub (Dec 26, 2025):

🔍 Similar Issues Found

I found some existing issues that might be related to this one. Please check if any of these are duplicates or contain helpful solutions:

  1. #18995 issue: image generation and edition doesn’t work on temporary chats
    by futureshield • Nov 06, 2025 • bug

  2. #20091 issue: image is regarded as binary in temp chat
    by funnycups • Dec 22, 2025 • bug

  3. #19393 issue: shared chats with images - images won't show
    by Classic298 • Nov 23, 2025 • bug

  4. #20095 issue: temporary chat causes image attachments to appear as text
    by mudkipdev • Dec 22, 2025 • bug

  5. #20059 issue: Chat response is not working
    by navilg • Dec 20, 2025 • bug

5 more related issues:

  6. #19987 issue: There is a lack of visual consistency between the home page and the chat interface.
    by i-iooi-i • Dec 16, 2025 • bug

  7. #19861 issue:
    by QuitHub • Dec 10, 2025 • bug

  8. #16953 issue: When switching to another chat while the image is being generated, the original chat cannot display the generated image properly.
    by haochiu • Aug 27, 2025 • bug

  9. #19085 issue: Chat UI loads forever instead of showing error
    by TamKej • Nov 10, 2025 • bug

  10. #19187 issue: Image generation menu gone.
    by calebrio02 • Nov 14, 2025 • bug


💡 Tips:

  • If this is a duplicate, please consider closing this issue and adding any additional details to the existing one
  • If you found a solution in any of these issues, please share it here to help others

This comment was generated automatically by a bot. Please react with a 👍 if this comment was helpful, or a 👎 if it was not.


@Classic298 commented on GitHub (Dec 26, 2025):

How did you configure it? As a chat model? Don't configure it as a chat model; add it only in the image settings and use a different model. More steps to reproduce are needed here.


Reference: github-starred/open-webui#34643