[GH-ISSUE #20183] issue: unable to disable chat/completions for image generation models (using litellm) #34643
Originally created by @dhohengassner on GitHub (Dec 26, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/20183
Check Existing Issues
Installation Method
Git Clone
Open WebUI Version
ghcr.io/open-webui/open-webui:0.6.43
Ollama Version (if applicable)
No response
Operating System
AL2023
Browser (if applicable)
No response
Confirmation
Expected Behavior
LiteLLM image generation models (Azure AI) can be used without any call to the chat completions endpoint. https://docs.litellm.ai/docs/providers/azure_ai_img
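For illustration, here is a minimal sketch of the expected flow through a LiteLLM proxy, using the OpenAI Python client; the proxy URL, API key, and the model alias flux-1.1-pro are placeholders, not values from this report:

```python
# Hypothetical view of the expected behavior: for an image-generation-only
# model, only the images endpoint is ever called.
from openai import OpenAI

# Illustrative LiteLLM proxy address and key.
client = OpenAI(base_url="http://localhost:4000/v1", api_key="sk-litellm-key")

# One call to the image generation endpoint; no follow-up request to
# /chat/completions should be made for this model.
image = client.images.generate(model="flux-1.1-pro", prompt="a red fox at dawn")
print(image.data[0].url)
```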
Actual Behavior
We currently use Black Forest Labs models through LiteLLM, which only support the image generation API call:
78b5d40664/model_prices_and_context_window.json (L176)
The image generation itself works fine, but afterwards Open WebUI tries to call the chat completions endpoint:
httpx.HTTPStatusError: Client error '404 Not Found' for url 'https://xxx.openai.azure.com/chat/completions'
I tried to disable all chat completion options in the user and admin sections, as described in:
https://github.com/open-webui/open-webui/issues/11522
https://github.com/open-webui/open-webui/issues/7703
Still, image generation with these models fails afterwards. Any help or hint is appreciated.
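For context, the failing follow-up request reduces to something like the sketch below (proxy URL, key, and model name are illustrative); the 404 surfaces through raise_for_status(), matching the httpx error in the logs further down:

```python
# Hypothetical reproduction of the failing follow-up call: after the image
# has been generated, a chat completion is attempted against the same model,
# but the Flux deployment exposes no /chat/completions route.
import httpx

resp = httpx.post(
    "http://localhost:4000/v1/chat/completions",  # illustrative proxy URL
    headers={"Authorization": "Bearer sk-litellm-key"},  # placeholder key
    json={
        "model": "flux-1.1-pro",
        "messages": [{"role": "user", "content": "hi"}],
    },
)
resp.raise_for_status()  # httpx.HTTPStatusError: 404 Not Found, as in the logs
```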
Steps to Reproduce
Use LiteLLM with image generation models: https://docs.litellm.ai/docs/providers/azure_ai_img
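A direct reproduction outside Open WebUI might look like the following sketch, using the litellm SDK instead of the proxy (model name, endpoint, and key are placeholders):

```python
# Hypothetical minimal reproduction with the litellm SDK.
import litellm

# The image generation call succeeds for the Flux deployment...
img = litellm.image_generation(
    model="azure_ai/flux-1.1-pro",
    prompt="test image",
    api_base="https://xxx.openai.azure.com",  # placeholder endpoint
    api_key="...",
)

# ...but a chat completion against the same model fails, because the
# deployment has no chat route; litellm raises NotFoundError, as in the logs.
litellm.completion(
    model="azure_ai/flux-1.1-pro",
    messages=[{"role": "user", "content": "hi"}],
    api_base="https://xxx.openai.azure.com",
    api_key="...",
)
```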
Logs & Screenshots
{
"message": "litellm.proxy.proxy_server._handle_llm_api_exception(): Exception occured - litellm.NotFoundError: NotFoundError: Azure_aiException - {"error":{"code":"404","message": "Resource not found"}}. Received Model Group=azure_ai/flux-1.1-pro\nAvailable Model Group Fallbacks=None",
"level": "ERROR",
"timestamp": "2025-12-26T11:27:22.443420",
"stacktrace": "Traceback (most recent call last):\n File "/usr/lib/python3.13/site-packages/litellm/llms/custom_httpx/llm_http_handler.py", line 156, in _make_common_async_call\n response = await async_httpx_client.post(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n ...<10 lines>...\n )\n ^\n File "/usr/lib/python3.13/site-packages/litellm/litellm_core_utils/logging_utils.py", line 190, in async_wrapper\n result = await func(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/usr/lib/python3.13/site-packages/litellm/llms/custom_httpx/http_handler.py", line 450, in post\n raise e\n File "/usr/lib/python3.13/site-packages/litellm/llms/custom_httpx/http_handler.py", line 406, in post\n response.raise_for_status()\n ~~~~~~~~~~~~~~~~~~~~~~~~~^^\n File "/usr/lib/python3.13/site-packages/httpx/_models.py", line 829, in raise_for_status\n raise HTTPStatusError(message, request=request, response=self)\nhttpx.HTTPStatusError: Client error '404 Not Found' for url 'https://xxx.openai.azure.com/chat/completions'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File "/usr/lib/python3.13/site-packages/litellm/main.py", line 604, in acompletion\n response = await init_response\n ^^^^^^^^^^^^^^^^^^^\n File "/usr/lib/python3.13/site-packages/litellm/llms/custom_httpx/llm_http_handler.py", line 653, in acompletion_stream_function\n completion_stream, _response_headers = await self.make_async_call_stream_helper(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n ...<15 lines>...\n )\n ^\n File "/usr/lib/python3.13/site-packages/litellm/llms/custom_httpx/llm_http_handler.py", line 712, in make_async_call_stream_helper\n response = await self._make_common_async_call(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n ...<10 lines>...\n )\n ^\n File "/usr/lib/python3.13/site-packages/litellm/llms/custom_httpx/llm_http_handler.py", line 181, in _make_common_async_call\n raise self._handle_error(e=e, provider_config=provider_config)\n ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/usr/lib/python3.13/site-packages/litellm/llms/custom_httpx/llm_http_handler.py", line 3601, in _handle_error\n raise provider_config.get_error_class(\n ...<3 lines>...\n )\nlitellm.llms.openai.common_utils.OpenAIError: {"error":{"code":"404","message": "Resource not found"}}\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File "/usr/lib/python3.13/site-packages/litellm/proxy/proxy_server.py", line 4915, in chat_completion\n result = await base_llm_response_processor.base_process_llm_request(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n ...<16 lines>...\n )\n ^\n File "/usr/lib/python3.13/site-packages/litellm/proxy/common_request_processing.py", line 539, in base_process_llm_request\n responses = await llm_responses\n ^^^^^^^^^^^^^^^^^^^\n File "/usr/lib/python3.13/site-packages/litellm/router.py", line 1293, in acompletion\n raise e\n File "/usr/lib/python3.13/site-packages/litellm/router.py", line 1269, in acompletion\n response = await self.async_function_with_fallbacks(**kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/usr/lib/python3.13/site-packages/litellm/router.py", line 4297, in async_function_with_fallbacks\n return await self.async_function_with_fallbacks_common_utils(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n ...<8 lines>...\n )\n ^\n File 
"/usr/lib/python3.13/site-packages/litellm/router.py", line 4255, in async_function_with_fallbacks_common_utils\n raise original_exception\n File "/usr/lib/python3.13/site-packages/litellm/router.py", line 4289, in async_function_with_fallbacks\n response = await self.async_function_with_retries(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/usr/lib/python3.13/site-packages/litellm/router.py", line 4411, in async_function_with_retries\n self.should_retry_this_error(\n ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^\n error=e,\n ^^^^^^^^\n ...<4 lines>...\n content_policy_fallbacks=content_policy_fallbacks,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n )\n ^\n File "/usr/lib/python3.13/site-packages/litellm/router.py", line 4587, in should_retry_this_error\n raise error\n File "/usr/lib/python3.13/site-packages/litellm/router.py", line 4385, in async_function_with_retries\n response = await self.make_call(original_function, *args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/usr/lib/python3.13/site-packages/litellm/router.py", line 4505, in make_call\n response = await response\n ^^^^^^^^^^^^^^\n File "/usr/lib/python3.13/site-packages/litellm/router.py", line 1575, in _acompletion\n raise e\n File "/usr/lib/python3.13/site-packages/litellm/router.py", line 1527, in _acompletion\n response = await _response\n ^^^^^^^^^^^^^^^\n File "/usr/lib/python3.13/site-packages/litellm/utils.py", line 1643, in wrapper_async\n raise e\n File "/usr/lib/python3.13/site-packages/litellm/utils.py", line 1489, in wrapper_async\n result = await original_function(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/usr/lib/python3.13/site-packages/litellm/main.py", line 623, in acompletion\n raise exception_type(\n ~~~~~~~~~~~~~~^\n model=model,\n ^^^^^^^^^^^^\n ...<3 lines>...\n extra_kwargs=kwargs,\n ^^^^^^^^^^^^^^^^^^^^\n )\n ^\n File "/usr/lib/python3.13/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2329, in exception_type\n raise e\n File "/usr/lib/python3.13/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 492, in exception_type\n raise NotFoundError(\n ...<5 lines>...\n )\nlitellm.exceptions.NotFoundError: litellm.NotFoundError: NotFoundError: Azure_aiException - {"error":{"code":"404","message": "Resource not found"}}. Received Model Group=azure_ai/flux-1.1-pro\nAvailable Model Group Fallbacks=None"
}
Additional Information
I am aware that the errors come from LiteLLM, but they are expected: the cause is that Flux models do not support chat endpoints.
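For reference, the linked model_prices_and_context_window.json entry is what tells LiteLLM the model is image-only; its rough shape is sketched below as a Python literal (field values are illustrative, see the linked file for the real entry):

```python
# Illustrative shape of a LiteLLM model-metadata entry for a Flux model;
# the "mode" field marks it as image-only, i.e. no chat completions support.
flux_entry = {
    "azure_ai/flux-1.1-pro": {
        "litellm_provider": "azure_ai",
        "mode": "image_generation",
        # pricing/context fields omitted; see the linked JSON for real values
    }
}
```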
@owui-terminator[bot] commented on GitHub (Dec 26, 2025):
🔍 Similar Issues Found
I found some existing issues that might be related to this one. Please check if any of these are duplicates or contain helpful solutions:
#18995 issue: image generation and edition doesn’t work on temporary chats
by futureshield • Nov 06, 2025 • bug
#20091 issue: image is regarded as binary in temp chat
by funnycups • Dec 22, 2025 • bug
#19393 issue: shared chats with images - images won't show
by Classic298 • Nov 23, 2025 • bug
#20095 issue: temporary chat causes image attachments to appear as text
by mudkipdev • Dec 22, 2025 • bug
#20059 issue: Chat response is not working
by navilg • Dec 20, 2025 • bug
#19987 issue: There is a lack of visual consistency between the home page and the chat interface.
by i-iooi-i • Dec 16, 2025 • bug
#19861 issue:
by QuitHub • Dec 10, 2025 • bug
#16953 issue: When switching to another chat while the image is being generated, the original chat cannot display the generated image properly.
by haochiu • Aug 27, 2025 • bug
#19085 issue: Chat UI loads forever instead of showing error
by TamKej • Nov 10, 2025 • bug
#19187 issue: Image generation menu gone.
by calebrio02 • Nov 14, 2025 • bug
This comment was generated automatically by a bot. Please react with a 👍 if this comment was helpful, or a 👎 if it was not.
@Classic298 commented on GitHub (Dec 26, 2025):
How did you configure it? As a chat model? Don't configure it as a chat model. Add it only in the image settings and use a different model for chat. More steps to reproduce are needed here.
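For illustration, the split @Classic298 describes might look like this from the client side (model names, proxy URL, and key are placeholders):

```python
# Hypothetical sketch of the suggested setup: a chat-capable model handles
# conversation, while the Flux model is only ever used via the images
# endpoint, mirroring "add it only in the image settings" in Open WebUI.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000/v1", api_key="sk-litellm-key")

# Chat traffic goes to a model that actually implements /chat/completions...
chat = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model registered on the proxy
    messages=[{"role": "user", "content": "Write an image prompt for a red fox."}],
)

# ...and flux-1.1-pro only receives image generation requests.
image = client.images.generate(
    model="flux-1.1-pro",
    prompt=chat.choices[0].message.content,
)
print(image.data[0].url)
```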