[GH-ISSUE #5925] All models disappeared - Error with logs #65735

Closed
opened 2026-05-03 22:26:33 -05:00 by GiteaMirror · 13 comments
Owner

Originally created by @nicholhai on GitHub (Jul 24, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5925

What is the issue?

I just rebooted my server and was able to log in on the web portal, but all my models disappeared. I cannot download new ones either. I tried to view the logs (I am learning) and got the following. Any ideas?

user@zephyr:~$ sudo docker logs -f 8c941502f633
Loading WEBUI_SECRET_KEY from file, not provided as an environment variable.
Generating WEBUI_SECRET_KEY
Loading WEBUI_SECRET_KEY from .webui_secret_key
USER_AGENT environment variable not set, consider setting it to identify your requests.
INFO: Started server process [1]
INFO: Waiting for application startup.
/app


[Open WebUI ASCII art banner]

v0.3.10 - building the best open-source AI user interface.

https://github.com/open-webui/open-webui

INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> 7e5b5dc7342b, init
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
INFO [apps.openai.main] get_all_models()
INFO [apps.ollama.main] get_all_models()
INFO: 192.168.3.17:54463 - "GET /admin/settings/ HTTP/1.1" 304 Not Modified
INFO: 192.168.3.17:54463 - "GET /static/splash.png HTTP/1.1" 200 OK
INFO: 192.168.3.17:54463 - "GET /api/config HTTP/1.1" 200 OK
INFO: 192.168.3.17:54464 - "GET /static/favicon.png HTTP/1.1" 200 OK
INFO: 192.168.3.17:54463 - "GET /ws/socket.io/?EIO=4&transport=polling&t=P3an7c7 HTTP/1.1" 200 OK
INFO: 192.168.3.17:54465 - "GET /api/v1/auths/ HTTP/1.1" 401 Unauthorized
INFO: 192.168.3.17:54464 - "POST /ws/socket.io/?EIO=4&transport=polling&t=P3an7cN&sid=-Vh24Ep6pUJoV1MdAAAA HTTP/1.1" 200 OK
INFO: ('192.168.3.17', 54466) - "WebSocket /ws/socket.io/?EIO=4&transport=websocket&sid=-Vh24Ep6pUJoV1MdAAAA" [accepted]
INFO: 192.168.3.17:54465 - "GET /ws/socket.io/?EIO=4&transport=polling&t=P3an7cN.0&sid=-Vh24Ep6pUJoV1MdAAAA HTTP/1.1" 200 OK
INFO: connection open
INFO: 192.168.3.17:54465 - "GET /ws/socket.io/?EIO=4&transport=polling&t=P3an7ck&sid=-Vh24Ep6pUJoV1MdAAAA HTTP/1.1" 200 OK
INFO: 192.168.3.17:54465 - "GET /static/favicon.png HTTP/1.1" 304 Not Modified
INFO [apps.webui.models.auths] authenticate_user: vbagwalla@gmail.com
INFO: 192.168.3.17:54465 - "POST /api/v1/auths/signin HTTP/1.1" 400 Bad Request
INFO [apps.webui.models.auths] insert_new_auth
INFO: 192.168.3.17:54468 - "POST /api/v1/auths/signup HTTP/1.1" 200 OK
user-join tRHiRqANjHsVDMLdAAAB {'auth': {'token': 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6IjI4NDY1MmU1LTBkN2ItNGIyMC05ZGM3LTAxMTA4ODNhNDRlNCJ9.vSxWpMlFW8jJ_fBmQ3gqwO6vmqij2mVnkMXQQVo4Ryc'}}
user Vicky Bagwalla(284652e5-0d7b-4b20-9dc7-0110883a44e4) connected with session ID tRHiRqANjHsVDMLdAAAB
INFO: 192.168.3.17:54468 - "GET /api/changelog HTTP/1.1" 200 OK
INFO: 192.168.3.17:54468 - "GET /api/v1/users/user/settings HTTP/1.1" 200 OK
INFO [apps.openai.main] get_all_models()
INFO [apps.ollama.main] get_all_models()
INFO: 192.168.3.17:54469 - "GET /api/v1/prompts/ HTTP/1.1" 200 OK
INFO: 192.168.3.17:54470 - "GET /api/v1/documents/ HTTP/1.1" 200 OK
INFO: 192.168.3.17:54468 - "GET /api/models HTTP/1.1" 200 OK
INFO: 192.168.3.17:54471 - "GET /api/v1/tools/ HTTP/1.1" 200 OK
INFO: 192.168.3.17:54473 - "GET /api/v1/configs/banners HTTP/1.1" 200 OK
INFO: 192.168.3.17:54472 - "GET /api/v1/functions/ HTTP/1.1" 200 OK
INFO: 192.168.3.17:54469 - "GET /api/v1/chats/tags/all HTTP/1.1" 200 OK
INFO: 192.168.3.17:54472 - "GET /ollama/api/version HTTP/1.1" 200 OK
INFO: 192.168.3.17:54469 - "POST /api/v1/chats/tags HTTP/1.1" 200 OK
INFO: 192.168.3.17:54472 - "GET /static/favicon.png HTTP/1.1" 304 Not Modified
INFO: 192.168.3.17:54469 - "GET /api/v1/users/user/settings HTTP/1.1" 200 OK
INFO: 192.168.3.17:54473 - "GET /api/v1/chats/ HTTP/1.1" 200 OK
INFO: 192.168.3.17:54480 - "GET /api/v1/users/ HTTP/1.1" 200 OK
INFO: 192.168.3.17:54480 - "GET /api/v1/auths/admin/config HTTP/1.1" 200 OK
INFO: 192.168.3.17:54482 - "GET /api/webhook HTTP/1.1" 200 OK
INFO: 127.0.0.1:54270 - "GET /health HTTP/1.1" 200 OK
INFO: 192.168.3.17:54482 - "GET /ollama/config HTTP/1.1" 200 OK
INFO: 192.168.3.17:54480 - "GET /ollama/api/version HTTP/1.1" 200 OK
INFO: 192.168.3.17:54482 - "GET /ollama/urls HTTP/1.1" 200 OK
INFO [apps.ollama.main] url: http://host.docker.internal:11434
INFO: 192.168.3.17:54487 - "POST /ollama/api/pull/0 HTTP/1.1" 200 OK
INFO [apps.openai.main] get_all_models()
INFO [apps.ollama.main] get_all_models()
INFO: 192.168.3.17:54487 - "GET /api/models HTTP/1.1" 200 OK
INFO: 192.168.3.17:54488 - "GET /ollama/api/version HTTP/1.1" 200 OK
INFO: 192.168.3.17:54488 - "GET /api/v1/users/user/settings HTTP/1.1" 200 OK
INFO: 192.168.3.17:54488 - "GET /api/v1/users/user/settings HTTP/1.1" 200 OK
INFO: 192.168.3.17:54492 - "POST /api/v1/chats/new HTTP/1.1" 200 OK
INFO: 192.168.3.17:54492 - "GET /api/v1/chats/ HTTP/1.1" 200 OK
INFO [apps.ollama.main] url: http://host.docker.internal:11434
INFO: 127.0.0.1:52894 - "GET /health HTTP/1.1" 200 OK
INFO: 192.168.3.17:54492 - "POST /ollama/api/chat HTTP/1.1" 200 OK
INFO: 192.168.3.17:54492 - "POST /api/chat/completed HTTP/1.1" 200 OK
INFO: 192.168.3.17:54492 - "POST /api/v1/chats/441f42b9-18b5-4239-8fe3-b0cd46513858 HTTP/1.1" 200 OK
INFO: 192.168.3.17:54492 - "GET /api/v1/chats/ HTTP/1.1" 200 OK
INFO: 192.168.3.17:54492 - "POST /api/v1/chats/441f42b9-18b5-4239-8fe3-b0cd46513858 HTTP/1.1" 200 OK
error from daemon in stream: Error grabbing logs: invalid character 'l' after object key
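(Editor's note: the trailing `error from daemon in stream: Error grabbing logs: invalid character 'l' after object key` typically means the container's on-disk JSON log file was truncated or corrupted, e.g. by a hard reboot, rather than anything wrong inside the container. A hedged way to locate and sanity-check that file, using the container ID from this thread and assuming `jq` is installed:)

```shell
# Locate the on-disk JSON log file for the container (ID taken from the thread).
LOG_PATH=$(sudo docker inspect --format '{{.LogPath}}' 8c941502f633)
# Every line should be a standalone JSON object; jq stops at the first malformed one.
sudo jq -c . "$LOG_PATH" > /dev/null \
  && echo "log file is well-formed JSON" \
  || echo "log file contains a corrupted line"
```

If the file is corrupted, recreating the container regenerates a clean log file; the corruption itself is cosmetic and separate from the missing-models problem.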

OS

Linux, Docker

GPU

Nvidia

CPU

Intel

Ollama version

No response

GiteaMirror added the bug label 2026-05-03 22:26:33 -05:00

@rick-github commented on GitHub (Jul 24, 2024):

You may have started a new ollama server that's not connected to your previous one. Did you run the same commands as before? What's the output of `docker ps` and `docker logs ollama`?
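(Editor's note: a minimal inspection sequence along these lines can confirm whether the running container and its model volume match the previous setup; container and volume names are assumed from this thread, adjust to your own:)

```shell
# List all containers, including stopped ones, to spot a renamed or recreated ollama container.
sudo docker ps -a
# List named volumes; the models from the thread's run command live in the "ollama" volume.
sudo docker volume ls
# Check the server's own logs for startup errors.
sudo docker logs ollama
# Ask the server which models it currently knows about.
sudo docker exec ollama ollama list
```

If `docker volume ls` still shows the old `ollama` volume but `ollama list` comes back empty, the new container is likely mounted against a different, fresh volume.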


@nicholhai commented on GitHub (Jul 24, 2024):

I am just going to wipe the server and do a fresh install. I have a lot of unwanted repos on there that I likely installed while trying to troubleshoot.


@Cephra commented on GitHub (Jul 24, 2024):

Hello again @nicholhai :D

What command did you use to start the ollama container?

Also, can you send the output of `docker images`?


@nicholhai commented on GitHub (Jul 25, 2024):

Same as the ones from last time. Very frustrating. I even tried the manual install, and everything goes well until the last step of running `sh start.sh`.


@Cephra commented on GitHub (Jul 25, 2024):

Hm... perhaps, since you deleted the volume used by open-webui and switched to the version with Ollama bundled, you may have deleted all the models you previously downloaded. 😕

But you should be able to just download them again.


@nicholhai commented on GitHub (Jul 25, 2024):

Did not work :(

I tried reinstalling the OS and tried starting the docker again with:

sudo docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

And I get:

Unable to find image 'ollama/ollama:latest' locally
latest: Pulling from ollama/ollama
3713021b0277: Pull complete
4318e2a18092: Pull complete
5173e475bc3a: Pull complete
Digest: sha256:217f0de100f62f5bcdbf73699856a4c0155695de7944854e7c84af87e2a6e2c0
Status: Downloaded newer image for ollama/ollama:latest
e6f0cbf4e0704937017c119e45c17657960d427dc79b779ff67bdca251126ff3
docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: nvml error: driver not loaded: unknown.
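(Editor's note: the `nvml error: driver not loaded` message comes from the NVIDIA container runtime hook and indicates the host kernel driver itself is not loaded, independent of Docker. A hedged checklist, assuming Ubuntu Server as in this thread; `nvidia-ctk` ships with the NVIDIA Container Toolkit:)

```shell
# 1. Verify the host driver: this must work before Docker can pass the GPU through.
nvidia-smi
# 2. If it fails, (re)install the driver; Ubuntu's autoinstaller picks a suitable version.
sudo ubuntu-drivers autoinstall
sudo reboot
# 3. Wire Docker to the NVIDIA runtime, then restart the daemon.
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```

Only once `nvidia-smi` works on the host is it worth retrying the container with `--gpus=all`.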


@Cephra commented on GitHub (Jul 25, 2024):

Can you try following the exact steps pointed out here: https://hub.docker.com/r/ollama/ollama ?
Especially the NVIDIA-related parts. It says "Install the NVIDIA Container Toolkit", maybe that is missing?


@nicholhai commented on GitHub (Jul 25, 2024):

I did that :(


@Cephra commented on GitHub (Jul 25, 2024):

Okay, what if you omit the `--gpus=all` flag when starting the container? Does it work then? I just want to make sure that it's NVIDIA-related.
Also, you're still on macOS, right?
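(Editor's note: as a sketch, the CPU-only variant of the run command quoted earlier in the thread would be:)

```shell
# Same command minus --gpus=all; the model volume and port mapping are unchanged.
sudo docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```

If this starts cleanly, the failure is isolated to the NVIDIA container runtime rather than to Docker or the image itself.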


@nicholhai commented on GitHub (Jul 25, 2024):

This is on Ubuntu server 24.04


@nicholhai commented on GitHub (Jul 25, 2024):

I bought this "gaming machine" specifically for this purpose


@Cephra commented on GitHub (Jul 25, 2024):

Have you looked at this: https://github.com/ollama/ollama/blob/main/docs/gpu.md#nvidia ?


@mxyng commented on GitHub (Jul 29, 2024):

This issue doesn't seem related to Ollama, since it's referencing Open WebUI, which also appears to be proxying the requests. You might get more help asking in the Open WebUI repo.

Reference: github-starred/ollama#65735