[GH-ISSUE #5349] iOS and Mac devices: chat stuck on "Thinking" lines instead of a response #13950

Closed
opened 2026-04-19 20:28:41 -05:00 by GiteaMirror · 6 comments
Owner

Originally created by @CSAnetGmbH on GitHub (Sep 11, 2024).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/5349

Bug Report

Installation Method

Docker Windows

Environment

  • Open WebUI Version: 0.3.21

  • Ollama (if applicable): 3.10

  • Operating System: iOS

  • Browser (if applicable): Safari

Confirmation:

  • [x] I have read and followed all the instructions provided in the README.md.
  • [x] I am on the latest version of both Open WebUI and Ollama.
  • [ ] I have included the browser console logs.
  • [ ] I have included the Docker container logs.
  • [ ] I have provided the exact steps to reproduce the bug in the "Steps to Reproduce" section below.

Expected Behavior:

When I chat with an LLM, no response is generated; I only see the "Thinking" lines.
Attachment0: https://github.com/user-attachments/assets/39d99330-0323-4d87-b9e8-60d73e2c15ef

Author
Owner

@peuportier commented on GitHub (Sep 11, 2024):

Please try pulling a model like Llama3 or Gemma:2b to check if the issue persists.

On your Mac, open a terminal and type:

ollama pull llama3

Once the model has finished downloading, go to your OWUI interface, select the "Llama3" model, and try initiating a chat. If you receive a response, the problem might be with the original model you’re using.

Additionally, please share the configuration of your Mac, as it will help us troubleshoot further. Based on the current information, it's challenging for the team to pinpoint the exact cause of the issue.

Thank you!
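The pull-and-test suggestion above can be sketched from a terminal. This is a hedged sketch, assuming Ollama listens on its default port 11434 and that `ollama pull llama3` has completed; the payload fields (`model`, `messages`, `stream`) follow the Ollama `/api/chat` request format:

```shell
# Build a minimal non-streaming chat request for the Ollama API (sketch).
PAYLOAD='{"model":"llama3","messages":[{"role":"user","content":"Hello"}],"stream":false}'
echo "$PAYLOAD"
# Uncomment to send it once the model has been pulled:
# curl -s http://localhost:11434/api/chat -d "$PAYLOAD"
```

If the curl line returns a JSON message but the web UI still shows only "Thinking" lines, the problem is likely in the browser/UI path rather than the model itself.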

Author
Owner

@CSAnetGmbH commented on GitHub (Sep 11, 2024):

Thanks for your response, but the Open WebUI instance runs in Docker on Windows. Only the client is a Mac!

I have tried it with different models, on an iPhone, a MacBook, and an iPad,
and with different browsers (Mozilla, Chrome, and Safari).

Here is my log from the Windows 11 Docker host with Ollama:
Loading WEBUI_SECRET_KEY from file, not provided as an environment variable.
Generating WEBUI_SECRET_KEY
Loading WEBUI_SECRET_KEY from .webui_secret_key
CUDA is enabled, appending LD_LIBRARY_PATH to include torch/cudnn & cublas libraries.
/app/backend/open_webui
/app/backend
/app
Running migrations
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [open_webui.env] 'DEFAULT_LOCALE' loaded from the latest database entry
INFO [open_webui.env] 'DEFAULT_PROMPT_SUGGESTIONS' loaded from the latest database entry
WARNI [open_webui.env]

WARNING: CORS_ALLOW_ORIGIN IS SET TO '*' - NOT RECOMMENDED FOR PRODUCTION DEPLOYMENTS.

INFO [open_webui.env] Embedding model set: sentence-transformers/all-MiniLM-L6-v2
INFO [open_webui.apps.audio.main] whisper_device_type: cuda
WARNI [langchain_community.utils.user_agent] USER_AGENT environment variable not set, consider setting it to identify your requests.
/usr/local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: clean_up_tokenization_spaces was not set. It will be set to True by default. This behavior will be depracted in transformers v4.45, and will be then set to False by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
warnings.warn(
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
INFO [open_webui.apps.openai.main] get_all_models()
INFO [open_webui.apps.ollama.main] get_all_models()


[Open WebUI ASCII art banner]

v0.3.21 - building the best open-source AI user interface.

https://github.com/open-webui/open-webui

Running migrations
INFO: 172.17.0.1:57020 - "GET /static/splash.png HTTP/1.1" 200 OK
INFO: 172.17.0.1:57020 - "GET /manifest.json HTTP/1.1" 200 OK
INFO: 172.17.0.1:57020 - "GET /api/config HTTP/1.1" 200 OK
INFO: 172.17.0.1:57026 - "GET /static/favicon.png HTTP/1.1" 200 OK
INFO: 172.17.0.1:57026 - "GET /api/v1/auths/ HTTP/1.1" 401 Unauthorized
INFO: ('172.17.0.1', 57030) - "WebSocket /ws/socket.io/?EIO=4&transport=websocket" [accepted]
INFO: connection open
INFO [open_webui.apps.webui.models.auths] authenticate_user:
INFO: 172.17.0.1:57026 - "POST /api/v1/auths/signin HTTP/1.1" 200 OK
user-join XRYew-4ZiPGcYnCoAAAB {'auth': {'token': 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6ImQ4YWQyMzY2LTE0YzktNGNkNC1hY2Q3LTMzOGRmYzY0NWExZSJ9.2vIQHkC-P2CXsyIx7HaLyGhTllnOXIIZ0sIJEanMNFY'}}
user Ralf Abhau(d8ad2366-14c9-4cd4-acd7-338dfc645a1e) connected with session ID XRYew-4ZiPGcYnCoAAAB
INFO: 172.17.0.1:57026 - "GET /api/config HTTP/1.1" 200 OK
INFO: 172.17.0.1:57026 - "GET /api/changelog HTTP/1.1" 200 OK
INFO: 172.17.0.1:57026 - "GET /api/v1/users/user/settings HTTP/1.1" 200 OK
INFO [open_webui.apps.openai.main] get_all_models()
INFO [open_webui.apps.ollama.main] get_all_models()
INFO: 172.17.0.1:57020 - "GET /api/v1/prompts/ HTTP/1.1" 200 OK
INFO: 172.17.0.1:57054 - "GET /api/v1/functions/ HTTP/1.1" 200 OK
INFO: 172.17.0.1:57040 - "GET /api/v1/documents/ HTTP/1.1" 200 OK
INFO: 172.17.0.1:57052 - "GET /api/v1/tools/ HTTP/1.1" 200 OK
INFO: 172.17.0.1:57062 - "GET /api/v1/configs/banners HTTP/1.1" 200 OK
INFO: 172.17.0.1:57026 - "GET /api/models HTTP/1.1" 200 OK
INFO: 172.17.0.1:57020 - "GET /api/v1/chats/tags/all HTTP/1.1" 200 OK
INFO: 172.17.0.1:57026 - "GET /ollama/api/version HTTP/1.1" 200 OK
INFO: 172.17.0.1:57020 - "POST /api/v1/chats/tags HTTP/1.1" 200 OK
INFO: 172.17.0.1:57026 - "GET /static/favicon.png HTTP/1.1" 304 Not Modified
INFO: 172.17.0.1:57062 - "GET /api/v1/users/user/settings HTTP/1.1" 200 OK
INFO: 172.17.0.1:57026 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
INFO: 172.17.0.1:57026 - "GET /api/v1/chats/?page=2 HTTP/1.1" 200 OK
INFO: 172.17.0.1:57026 - "GET /static/favicon.png HTTP/1.1" 304 Not Modified
INFO: 172.17.0.1:46924 - "GET /api/v1/users/ HTTP/1.1" 200 OK
INFO: 172.17.0.1:46924 - "GET /api/v1/auths/admin/config HTTP/1.1" 200 OK
INFO: 172.17.0.1:46936 - "GET /api/webhook HTTP/1.1" 200 OK
INFO: 172.17.0.1:46936 - "GET /ollama/config HTTP/1.1" 200 OK
INFO: 172.17.0.1:46936 - "GET /ollama/urls HTTP/1.1" 200 OK
INFO: 172.17.0.1:46924 - "GET /ollama/api/version HTTP/1.1" 200 OK
INFO: 172.17.0.1:35228 - "GET /manifest.json HTTP/1.1" 200 OK
INFO: 172.17.0.1:35232 - "GET /static/splash.png HTTP/1.1" 200 OK
INFO: 172.17.0.1:35228 - "GET /api/config HTTP/1.1" 200 OK
INFO: 172.17.0.1:35228 - "GET /api/v1/auths/ HTTP/1.1" 401 Unauthorized
INFO: ('172.17.0.1', 35248) - "WebSocket /ws/socket.io/?EIO=4&transport=websocket" [accepted]
INFO: connection open
INFO: 172.17.0.1:35228 - "GET /static/favicon.png HTTP/1.1" 200 OK
INFO: connection closed
Unknown session ID kXri9_qoWvCSnF2kAAAD disconnected
INFO: 172.17.0.1:35252 - "GET /manifest.json HTTP/1.1" 200 OK
INFO: 172.17.0.1:35228 - "GET /api/config HTTP/1.1" 200 OK
INFO: ('172.17.0.1', 35266) - "WebSocket /ws/socket.io/?EIO=4&transport=websocket" [accepted]
INFO: connection open
INFO [open_webui.apps.webui.models.auths] authenticate_user:
INFO: 172.17.0.1:35228 - "POST /api/v1/auths/signin HTTP/1.1" 200 OK
user-join nuJ4IP_snSYVN-GOAAAF {'auth': {'token': 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6ImQ4YWQyMzY2LTE0YzktNGNkNC1hY2Q3LTMzOGRmYzY0NWExZSJ9.2vIQHkC-P2CXsyIx7HaLyGhTllnOXIIZ0sIJEanMNFY'}}
user Ralf Abhau(d8ad2366-14c9-4cd4-acd7-338dfc645a1e) connected with session ID nuJ4IP_snSYVN-GOAAAF
INFO: 172.17.0.1:35228 - "GET /api/config HTTP/1.1" 200 OK
INFO: 172.17.0.1:35228 - "GET /api/changelog HTTP/1.1" 200 OK
INFO: 172.17.0.1:35268 - "GET /api/v1/users/user/settings HTTP/1.1" 200 OK
INFO [open_webui.apps.openai.main] get_all_models()
INFO [open_webui.apps.ollama.main] get_all_models()
INFO: 172.17.0.1:35268 - "GET /api/v1/prompts/ HTTP/1.1" 200 OK
INFO: 172.17.0.1:35228 - "GET /api/models HTTP/1.1" 200 OK
INFO: 172.17.0.1:35290 - "GET /api/v1/functions/ HTTP/1.1" 200 OK
INFO: 172.17.0.1:35274 - "GET /api/v1/tools/ HTTP/1.1" 200 OK
INFO: 172.17.0.1:35268 - "GET /api/v1/chats/tags/all HTTP/1.1" 200 OK
INFO: 172.17.0.1:35306 - "GET /api/v1/configs/banners HTTP/1.1" 200 OK
INFO: 172.17.0.1:35272 - "GET /api/v1/documents/ HTTP/1.1" 200 OK
INFO: 172.17.0.1:35228 - "POST /api/v1/chats/tags HTTP/1.1" 200 OK
INFO: 172.17.0.1:35268 - "GET /ollama/api/version HTTP/1.1" 200 OK
INFO: 172.17.0.1:35228 - "GET /api/v1/users/user/settings HTTP/1.1" 200 OK
INFO: 172.17.0.1:35228 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
INFO: 172.17.0.1:35228 - "GET /api/v1/chats/?page=2 HTTP/1.1" 200 OK
INFO: 172.17.0.1:50020 - "GET /static/favicon.png HTTP/1.1" 200 OK
INFO: 172.17.0.1:50020 - "POST /api/v1/chats/new HTTP/1.1" 200 OK
INFO: 172.17.0.1:50020 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
INFO [open_webui.apps.ollama.main] url: http://192.168.123.1:11434
INFO: 172.17.0.1:50020 - "POST /ollama/api/chat HTTP/1.1" 200 OK
INFO: 172.17.0.1:50030 - "POST /api/v1/chats/7783672b-dc60-41fe-8348-719d85403fa6 HTTP/1.1" 200 OK
INFO: 172.17.0.1:50030 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
INFO: 172.17.0.1:50020 - "GET /api/v1/users/ HTTP/1.1" 200 OK
INFO: 172.17.0.1:50030 - "GET /api/webhook HTTP/1.1" 200 OK
INFO: 172.17.0.1:50020 - "GET /api/v1/auths/admin/config HTTP/1.1" 200 OK
INFO: 172.17.0.1:50020 - "GET /ollama/config HTTP/1.1" 200 OK
INFO: 172.17.0.1:50020 - "GET /ollama/urls HTTP/1.1" 200 OK
INFO: 172.17.0.1:50030 - "GET /ollama/api/version HTTP/1.1" 200 OK
INFO: 172.17.0.1:38070 - "GET /_app/immutable/nodes/5.CtaeTveP.js HTTP/1.1" 200 OK
INFO: 172.17.0.1:38076 - "GET /_app/immutable/nodes/11.LRpwkVnD.js HTTP/1.1" 200 OK
INFO: 172.17.0.1:38070 - "GET /_app/immutable/nodes/16.Djvk3wO_.js HTTP/1.1" 200 OK
INFO: 172.17.0.1:38076 - "GET /_app/immutable/chunks/EllipsisHorizontal.BmapJyfT.js HTTP/1.1" 200 OK
INFO: 172.17.0.1:38070 - "GET /_app/immutable/chunks/index.bsgogFL4.js HTTP/1.1" 200 OK
INFO: 172.17.0.1:38070 - "GET /static/favicon.png HTTP/1.1" 304 Not Modified
INFO: 172.17.0.1:38070 - "POST /api/v1/models/update?id=cas%2Fdiscolm-mfto-german%3Alatest HTTP/1.1" 200 OK
INFO: 172.17.0.1:38070 - "POST /api/v1/models/update?id=cas%2Fmistral-ft-optimized-1227%3Alatest HTTP/1.1" 200 OK
INFO: 172.17.0.1:38070 - "POST /api/v1/models/update?id=gemma2%3A27b HTTP/1.1" 200 OK
INFO: 172.17.0.1:38070 - "POST /api/v1/models/update?id=clm6068%2Fgpt4all-13b-snoozy-q4_0%3Alatest HTTP/1.1" 200 OK
INFO: 172.17.0.1:38070 - "POST /api/v1/models/update?id=gemma2%3Alatest HTTP/1.1" 200 OK
INFO: 172.17.0.1:38070 - "POST /api/v1/models/update?id=GFalcon-UA%2Fnous-hermes-2-vision%3Alatest HTTP/1.1" 200 OK
INFO: 172.17.0.1:38070 - "POST /api/v1/models/update?id=glm4%3A9b-chat-q8_0 HTTP/1.1" 200 OK
INFO: 172.17.0.1:38070 - "POST /api/v1/models/update?id=hermes3%3Alatest HTTP/1.1" 200 OK
INFO: 172.17.0.1:38070 - "POST /api/v1/models/update?id=koesn%2Fllama3-openbiollm-8b%3Aq6_K HTTP/1.1" 200 OK
INFO: 172.17.0.1:38070 - "POST /api/v1/models/update?id=llama3.1%3A8b-instruct-q8_0 HTTP/1.1" 200 OK
INFO: 172.17.0.1:38070 - "POST /api/v1/models/update?id=llama3.1%3Alatest HTTP/1.1" 200 OK
INFO: 172.17.0.1:38070 - "POST /api/v1/models/update?id=llava%3Alatest HTTP/1.1" 200 OK
INFO: 172.17.0.1:38070 - "POST /api/v1/models/update?id=mistral-nemo%3A12b-instruct-2407-q8_0 HTTP/1.1" 200 OK
INFO: 172.17.0.1:38070 - "POST /api/v1/models/update?id=mistral-nemo%3Alatest HTTP/1.1" 200 OK
INFO: 172.17.0.1:38070 - "POST /api/v1/models/update?id=mistral%3A7b-instruct-v0.3-q8_0 HTTP/1.1" 200 OK
INFO: 172.17.0.1:38070 - "POST /api/v1/models/update?id=monotykamary%2Fmedichat-llama3%3A8b_q8_0 HTTP/1.1" 200 OK
INFO: 172.17.0.1:38070 - "POST /api/v1/models/update?id=mskimomadto%2Fchat-gph-vision%3Alatest HTTP/1.1" 200 OK
INFO: 172.17.0.1:38070 - "POST /api/v1/models/update?id=pdevine%2Fgrammarfix%3Alatest HTTP/1.1" 200 OK
INFO: 172.17.0.1:38070 - "POST /api/v1/models/update?id=taozhiyuai%2Fopenbiollm-llama-3%3A70b_q2_k HTTP/1.1" 200 OK
INFO [open_webui.apps.openai.main] get_all_models()
INFO [open_webui.apps.ollama.main] get_all_models()
INFO: 172.17.0.1:38070 - "GET /api/models HTTP/1.1" 200 OK
INFO: 172.17.0.1:50320 - "GET /static/favicon.png HTTP/1.1" 304 Not Modified
INFO: 172.17.0.1:50332 - "GET /ollama/api/version HTTP/1.1" 200 OK
INFO: 172.17.0.1:50320 - "GET /api/v1/users/user/settings HTTP/1.1" 200 OK
INFO: 172.17.0.1:50320 - "GET /api/v1/users/user/settings HTTP/1.1" 200 OK
INFO: 172.17.0.1:50320 - "GET /api/v1/chats/7783672b-dc60-41fe-8348-719d85403fa6 HTTP/1.1" 200 OK
INFO: 172.17.0.1:50320 - "GET /api/v1/chats/7783672b-dc60-41fe-8348-719d85403fa6/tags HTTP/1.1" 200 OK
INFO: 172.17.0.1:50320 - "GET /api/v1/users/user/settings HTTP/1.1" 200 OK
INFO: 172.17.0.1:50320 - "GET /ollama/api/version HTTP/1.1" 200 OK
INFO [open_webui.apps.ollama.main] url: http://192.168.123.1:11434
INFO: 172.17.0.1:34766 - "POST /ollama/api/chat HTTP/1.1" 200 OK
INFO: 172.17.0.1:34776 - "POST /api/v1/chats/7783672b-dc60-41fe-8348-719d85403fa6 HTTP/1.1" 200 OK
INFO: 172.17.0.1:34776 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
INFO [open_webui.apps.ollama.main] url: http://192.168.123.1:11434
INFO: 172.17.0.1:34766 - "POST /ollama/api/chat HTTP/1.1" 200 OK
INFO: 172.17.0.1:34776 - "POST /api/v1/chats/7783672b-dc60-41fe-8348-719d85403fa6 HTTP/1.1" 200 OK
INFO: 172.17.0.1:34776 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
INFO: 172.17.0.1:38540 - "GET /api/v1/auths/admin/config HTTP/1.1" 200 OK
INFO: 172.17.0.1:38542 - "GET /api/webhook HTTP/1.1" 200 OK
INFO: connection closed

Author
Owner

@peuportier commented on GitHub (Sep 11, 2024):

Network problems between the Mac and Docker (Windows host)

  • Network latency or connection issues: delays or dropped connections could occur between the Mac and the Docker container. Make sure the required ports are forwarded correctly.

Resource allocation on Windows (Docker host):

Insufficient resources: Docker may not have been allocated enough CPU, memory, or GPU resources to run the model efficiently.
High load on the host: make sure the Windows host has sufficient resources available.

Model or code hangs during processing:
Large model or long processing time: some models can take a long time to process requests, especially on a CPU. Check the GPU configuration.
Misconfiguration in the model code: a bug in the model code could cause the model to "hang".

Docker settings on Windows:
Shared folders and volumes: slow file transfers between Windows and Docker could cause delays. Optimize I/O performance.
Docker networking problems: check Docker's network configuration, especially the use of bridged or host networking.

CORS or security issues:
Make sure there are no security restrictions blocking responses between the Mac and Docker on Windows.

Model-specific issues:
Logging and debugging: enable logging in the model to identify errors or bottlenecks. Test direct requests from the Windows machine to the Docker container.

First, I'd like to know: if you load a different model in Ollama, can you chat with it, or is this problem general?
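The port-forwarding point above can be checked directly from the Mac. A minimal sketch, assuming Open WebUI is published on port 8080 and Ollama on 11434, and using the host IP 192.168.123.1 that appears in the server log (substitute your own):

```shell
# Probe the two relevant ports from the Mac with a 2-second timeout (sketch).
HOST="192.168.123.1"   # host IP from the server log above; replace with yours
RESULTS=""
for PORT in 8080 11434; do
  if nc -z -w 2 "$HOST" "$PORT" 2>/dev/null; then
    STATUS="reachable"
  else
    STATUS="blocked or unreachable"
  fi
  RESULTS="${RESULTS}port ${PORT}: ${STATUS}
"
done
printf "%s" "$RESULTS"
```

If 8080 is reachable but 11434 is not (or vice versa), that narrows the problem to one hop instead of the whole Mac-to-Docker path.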

Author
Owner

@CSAnetGmbH commented on GitHub (Sep 11, 2024):

Thanks for the detailed answer. The problem occurs only with iOS and Mac clients, with none of the 9 models; on the PC and Android clients everything works very fast.

The server that Open WebUI runs on has an 11th-gen i9 CPU and a GeForce TI4090 with 16 GB of RAM.

Author
Owner

@peuportier commented on GitHub (Sep 11, 2024):

That's an interesting problem. It might be worth checking the internal firewall settings on your Mac to see whether any ports for the Ollama API are blocked. Sometimes macOS firewall settings can filter inbound and outbound traffic, which could be causing the problem. By the way, your server configuration is excellent! With an i9 CPU and a GeForce TI4090 you should be able to run 7-8B models very smoothly.

Author
Owner

@peuportier commented on GitHub (Sep 11, 2024):

Also try running this command from your Mac in the terminal and check the response: curl http://localhost:11434/api/version. Use curl to send a test request to the Ollama API and check whether it responds correctly. After that, you can run curl http://localhost:11434/api/models, replacing localhost with your server's IP.
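The two curl checks above can be combined into a small loop with a connect timeout, so a blocked port fails fast instead of hanging. A sketch; 192.168.123.1 is the Ollama URL that appears in the server log, substitute your server's IP:

```shell
# Run the suggested checks with a 3-second connect timeout (sketch; replace
# the IP below with your server's).
SERVER="192.168.123.1"
OUT=""
for EP in api/version api/models; do
  URL="http://${SERVER}:11434/${EP}"
  RESP=$(curl -s --connect-timeout 3 "$URL") || RESP="(no response)"
  OUT="${OUT}${URL} -> ${RESP}
"
done
printf "%s" "$OUT"
```

A "(no response)" result from the Mac, combined with a working response from the Windows host itself, would point at the network path or firewall rather than at Ollama.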


Reference: github-starred/open-webui#13950