[GH-ISSUE #11343] Docker image version 0.9.6 breaks setups using /.ollama as .ollama dir #69541

Closed
opened 2026-05-04 18:25:43 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @kewiha on GitHub (Jul 9, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11343

What is the issue?

I've been running ollama via rootless podman for a while, and recently ollama forgot all the models I had pulled. Prior to 0.9.6, a valid place for the .ollama dir was /.ollama, but 0.9.6 doesn't seem to look there and defaults to /home/ubuntu/.ollama instead. Consequently, the models previously downloaded to /.ollama, as well as the id_ed25519 keys, are no longer used.

To show what I mean, below are the podman commands to run ollama rootlessly (i.e. as a non-root user on the host and a non-root user inside the container) as I have done since June 2024.

0.9.5 without anything bound to /.ollama

keith@HAFXB-DB:~/ansible$ podman run --rm -it --userns "keep-id" --user 1000:1000 docker.io/ollama/ollama:0.9.5
Couldn't find '/.ollama/id_ed25519'. Generating new private key.
Error: could not create directory mkdir /.ollama: permission denied

0.9.5 with dir bound to /.ollama

keith@HAFXB-DB:~/ansible$ podman run --rm -it --userns "keep-id" --user 1000:1000 -v ollama095:/.ollama docker.io/ollama/ollama:0.9.5
Couldn't find '/.ollama/id_ed25519'. Generating new private key.
Your new public key is: 

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINn+chwMe7tNHbE/DEybmPmwlbG+JXYf5vhAP17xBGL7

time=2025-07-09T11:48:43.051Z level=INFO source=routes.go:1235 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
#rest of output omitted...

0.9.6 without anything bound to /.ollama

keith@HAFXB-DB:~/ansible$ podman run --rm -it --userns "keep-id" --user 1000:1000 docker.io/ollama/ollama:0.9.6
Couldn't find '/home/ubuntu/.ollama/id_ed25519'. Generating new private key.
Your new public key is: 

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOFBpLKuz4qZ1BP0Ovhip0Br6OBLkqRER/3H1m4M0sKY

time=2025-07-09T11:18:38.444Z level=INFO source=routes.go:1235 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/ubuntu/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
#rest of output omitted...

0.9.6 with dir bound to /.ollama

keith@HAFXB-DB:~/ansible$ podman run --rm -it --userns "keep-id" --user 1000:1000 -v ollama096:/.ollama docker.io/ollama/ollama:0.9.6
Couldn't find '/home/ubuntu/.ollama/id_ed25519'. Generating new private key.
Your new public key is: 

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEZgoiyamWz/EkLXYPim2yL2mm0S0dljEnpHPOVg6ROC

time=2025-07-09T11:48:55.023Z level=INFO source=routes.go:1235 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/ubuntu/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
#rest of output omitted...

The behaviour of 0.9.5 is consistent regardless of whether /.ollama is present or writeable: the .ollama dir is always looked for in /.ollama. In 0.9.6, ollama looks in /home/ubuntu/.ollama instead, regardless of whether /.ollama exists and is usable. Consequently, the contents of /.ollama are functionally lost when upgrading to 0.9.6.

A quick fix for users experiencing this is to change the bind from /.ollama to /home/ubuntu/.ollama inside the container. However, this breaking change may not have been intentional, and it doesn't seem to be clearly documented (e.g. in docs/docker.md: https://github.com/ollama/ollama/blob/main/docs/docker.md); the use of /home/ubuntu/.ollama in certain circumstances appears to be undocumented.
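As a minimal sketch of that quick fix (the volume name ollama095 and the podman flags are taken from the commands above; the helper variable names are illustrative only):

```shell
# Hypothetical helper: rewrite an old-style /.ollama bind target for
# 0.9.6+ images, assuming the container user is uid 1000 ("ubuntu").
old_mount="ollama095:/.ollama"
new_mount="${old_mount%:/.ollama}:/home/ubuntu/.ollama"
echo "$new_mount"   # prints: ollama095:/home/ubuntu/.ollama
# The rewritten flag would then be used as:
#   podman run --rm -it --userns keep-id --user 1000:1000 \
#     -v "$new_mount" docker.io/ollama/ollama:0.9.6
```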

Could a change be made to permit and document the use of /.ollama, either via a new environment variable or by defaulting to /.ollama when it already exists and is writeable? The 0.9.6 behaviour (/home/ubuntu/.ollama or /root/.ollama, presumably depending on the user running ollama) could remain as a fallback, but at a minimum it needs documentation. I don't see anything in the current docs that suggests /home/ubuntu/.ollama is sometimes used.
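The requested fallback could look something like the following sketch (hypothetical; the function name pick_ollama_dir and the demo paths are illustrative, not anything ollama actually implements):

```shell
# Hypothetical sketch: prefer a candidate dir (e.g. /.ollama) when it
# exists and is writeable, otherwise fall back to $HOME/.ollama.
pick_ollama_dir() {
  candidate="$1"
  if [ -d "$candidate" ] && [ -w "$candidate" ]; then
    printf '%s\n' "$candidate"
  else
    printf '%s\n' "${HOME}/.ollama"
  fi
}

mkdir -p /tmp/demo-ollama
pick_ollama_dir /tmp/demo-ollama      # existing, writeable: preferred
pick_ollama_dir /nonexistent/.ollama  # missing: falls back to $HOME/.ollama
```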

Relevant log output


OS

Docker

GPU

No response

CPU

No response

Ollama version

0.9.6

GiteaMirror added the bug label 2026-05-04 18:25:43 -05:00
Author
Owner

@qhaas commented on GitHub (Jul 9, 2025):

Resolving issue #228 using the XDG approach would address this.

I end up extending the ollama Dockerfile with the following to mitigate all cases when bind-mounting from the host:

RUN install -d /mnt/ollama && \
    ln -s /mnt/ollama /root/.ollama  && \
    ln -s /mnt/ollama /home/ubuntu/.ollama && \
    ln -s /mnt/ollama /.ollama

VOLUME [ "/mnt/ollama" ]
WORKDIR /mnt/ollama

I then add the following to my compose service:

user: ${HOST_UID?"HOST_UID not set"}:${HOST_GID?"HOST_GID not set"}
volumes:
  - ${OLLAMA_HOME}:/mnt/ollama/
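A hedged usage sketch for the compose fragment above: the variable names HOST_UID, HOST_GID, and OLLAMA_HOME match the fragment, and the values shown are just one way to supply them.

```shell
# Hypothetical invocation: derive the compose variables from the
# invoking user's ids and home directory before starting the service.
export HOST_UID="$(id -u)" HOST_GID="$(id -g)" OLLAMA_HOME="$HOME/.ollama"
echo "$HOST_UID:$HOST_GID"
# then: docker compose up -d
```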
Author
Owner

@rick-github commented on GitHub (Jul 9, 2025):

#9681 upgraded the base image for the docker build to 24.04. Unlike earlier images, this one contains a passwd entry for user 1000, with a home directory of /home/ubuntu. The .ollama directory is placed in the $HOME of the user running ollama. Pre-0.9.6 this was /root (the default user had a passwd entry), or / when the user ID was overridden (there was no passwd entry for that ID, so $HOME resolved to /). In 0.9.6 the default is still /root, but overriding the user ID to 1000 now resolves $HOME to that of user ubuntu. You can override this by setting HOME in the podman run command:

$ docker run --rm -e HOME=/ --user 1000:1000 -v /tmp/ollama:/.ollama ollama/ollama:0.9.6
Couldn't find '/.ollama/id_ed25519'. Generating new private key.
Your new public key is: 

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBemnGOIoyyQBSEZx2jIw3iIY6lpZaXZLZT3CF77w9sh

time=2025-07-09T12:49:19.332Z level=INFO source=routes.go:1235 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-07-09T12:49:19.333Z level=INFO source=images.go:476 msg="total blobs: 0"
Author
Owner

@dojoca commented on GitHub (Jul 10, 2025):

Same problem on Ubuntu using docker. Updating to this container and all my models disappeared. Trying to see how to mount the directories on the host correctly so that Ollama can see my models.

Author
Owner

@rick-github commented on GitHub (Jul 10, 2025):

Server logs (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will aid in debugging.

Author
Owner

@FlippingBinary commented on GitHub (Jul 10, 2025):

For me, it was easier to just change my volume mount from /.ollama to /home/ubuntu/.ollama. I was already running the container with the same uid and gid as my local user account, so I just had to mount the same local folder to the new location where the container expected to find it. The fix was plug-and-play, and my models are back.

Reference: github-starred/ollama#69541