[GH-ISSUE #11248] Emoji Phone Call #7410

Closed
opened 2026-04-12 19:29:46 -05:00 by GiteaMirror · 2 comments

Originally created by @DraculaVladimir on GitHub (Jun 30, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11248

What is the issue?

The emoji in the phone call is no longer working.
Also, if possible, it would be nice to be able to specify a separate model to generate the emojis. Face generation can be expensive via API; using a local 1B model on Ollama would be better than using gpt-4o for emoji generation.

Open WebUI: v0.6.15
Installation: via Docker, `open-webui/open-webui:dev-cuda`

Relevant log output

```shell
│                            └ <function request_response.<locals>.app.<locals>.app at 0x7e1c0c96f1a0>
          └ <function wrap_app_handling_exceptions at 0x7e1dac514540>
  File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
    await app(scope, receive, sender)
          │   │      │        └ <function wrap_app_handling_exceptions.<locals>.wrapped_app.<locals>.sender at 0x7e1c0c96fb00>
          │   │      └ <function BaseHTTPMiddleware.__call__.<locals>.call_next.<locals>.receive_or_disconnect at 0x7e1c0c96f7e0>
          │   └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('172.17.0.5', 8080), 'c...
          └ <function request_response.<locals>.app.<locals>.app at 0x7e1c0c96f1a0>
  File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 73, in app
    response = await f(request)
                     │ └ <starlette.requests.Request object at 0x7e1c0c99a290>
                     └ <function get_request_handler.<locals>.app at 0x7e1c2ef74900>
  File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 301, in app
    raw_response = await run_endpoint_function(
                         └ <function run_endpoint_function at 0x7e1dac5179c0>
  File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 212, in run_endpoint_function
    return await dependant.call(**values)
                 │         │      └ {'user': UserModel(id='34856ae5-67e3-4261-a41b-36228e68afbd', name='mateus', email='mateusgoncalvesdelima@gmail.com', role='a...
                 │         └ <function generate_emoji at 0x7e1c7cace160>
                 └ Dependant(path_params=[], query_params=[], header_params=[], cookie_params=[], body_params=[ModelField(field_info=Body(Pydant...

  File "/app/backend/open_webui/routers/tasks.py", line 713, in generate_emoji
    return await generate_chat_completion(request, form_data=payload, user=user)
                 │                        │                  │             └ UserModel(id='34856ae5-67e3-4261-a41b-36228e68afbd', name='mateus', email='mateusgoncalvesdelima@gmail.com', role='admin', pr...
                 │                        │                  └ {'model': 'OpenAI.gpt-4.1-nano', 'messages': [{'role': 'user', 'content': "Your task is to reflect the speaker's likely facia...
                 │                        └ <starlette.requests.Request object at 0x7e1c0c99a290>
                 └ <function generate_chat_completion at 0x7e1c7ca4a5c0>

  File "/app/backend/open_webui/utils/chat.py", line 278, in generate_chat_completion
    return await generate_openai_chat_completion(
                 └ <function generate_chat_completion at 0x7e1c7cbf8ea0>

> File "/app/backend/open_webui/routers/openai.py", line 878, in generate_chat_completion
    r.raise_for_status()
    │ └ <function ClientResponse.raise_for_status at 0x7e1dad04b4c0>
    └ <ClientResponse(https://api.openai.com/v1/chat/completions) [400 Bad Request]>
      <CIMultiDictProxy('Date': 'Mon, 30 Jun 2025 20...

  File "/usr/local/lib/python3.11/site-packages/aiohttp/client_reqrep.py", line 1161, in raise_for_status
    raise ClientResponseError(
          └ <class 'aiohttp.client_exceptions.ClientResponseError'>

aiohttp.client_exceptions.ClientResponseError: 400, message='Bad Request', url='https://api.openai.com/v1/chat/completions'
2025-06-30 20:21:40.360 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 172.17.0.1:42818 - "POST /api/v1/tasks/emoji/completions HTTP/1.1" 400 - {}
2025-06-30 20:22:13.267 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 172.17.0.1:37792 - "GET /_app/version.json HTTP/1.1" 200 - {}
```
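Because `raise_for_status()` discards the response body, the log never shows *why* OpenAI rejected the request. A quick way to surface the error detail is to replay the same payload by hand and print the body even on a 400. This is only a debugging sketch: the model name is copied from the log (note the `OpenAI.` connection prefix, which the upstream API would not recognize), the prompt string is a stand-in for the truncated one in the log, and `OPENAI_API_KEY` is an assumed environment variable:

```python
import json
import os
import urllib.error
import urllib.request


def build_emoji_payload(model: str, content: str) -> dict:
    """Recreate the chat-completions payload shown in the traceback."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }


def replay(payload: dict, api_key: str) -> str:
    """POST the payload to OpenAI and return the raw body, even on an HTTP error."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    try:
        with urllib.request.urlopen(req) as r:
            return r.read().decode()
    except urllib.error.HTTPError as e:
        # Unlike raise_for_status(), keep the body: it usually names the bad field.
        return e.read().decode()


if __name__ == "__main__" and "OPENAI_API_KEY" in os.environ:
    payload = build_emoji_payload(
        "OpenAI.gpt-4.1-nano",  # model id as it appears in the log, prefix included
        "placeholder for the emoji task prompt",  # stand-in, not the real prompt
    )
    print(replay(payload, os.environ["OPENAI_API_KEY"]))
```

If the printed body complains about an unknown model, the `OpenAI.`-prefixed model id would be the thing to look at in the Open WebUI task configuration.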

OS

Docker

GPU

Nvidia

CPU

AMD

Ollama version

0.9.3

GiteaMirror added the bug label 2026-04-12 19:29:46 -05:00

@rick-github commented on GitHub (Jun 30, 2025):

Not an ollama issue. The issue tracker for OpenWebUI is [here](https://github.com/open-webui/open-webui/issues).


@DraculaVladimir commented on GitHub (Jun 30, 2025):

fuck, my bad. too many tabs open


Reference: github-starred/ollama#7410