[GH-ISSUE #2832] Docker ollama with local volume #1721

Closed
opened 2026-04-12 11:41:56 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @ddbhatt on GitHub (Feb 29, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2832

Originally assigned to: @dhiltgen on GitHub.

Hello, I have installed Ollama both locally and in Docker, per the instructions at https://ollama.com/blog/ollama-is-now-available-as-an-official-docker-image, with one exception: I mapped the volume to my local .ollama directory on Windows.

When I run ollama list, the output is empty, and the Docker log shows:
2024-02-29 18:19:49 time=2024-02-29T12:49:49.557Z level=INFO source=routes.go:814 msg="skipping file: registry.ollama.ai/library/zephyr:latest"

I would like to maintain a common model repository for downloaded models.

Please advise how to go about this.


@ddbhatt commented on GitHub (Feb 29, 2024):

i.e. I ran the following command:
docker run -d -v C:\Users\<username>\.ollama\models:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
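One detail worth noting in the command above (an observation about the path layout, not a confirmed fix for this issue): the host path ends in \models, but it is mounted at /root/.ollama, which in a native install is the parent directory that itself contains a models subdirectory. Mounting the whole .ollama directory keeps the two layouts aligned:

```shell
# Mount the parent .ollama directory (not the models subdirectory) so the
# container sees the same directory structure the native install writes to.
docker run -d -v "C:\Users\<username>\.ollama:/root/.ollama" -p 11434:11434 --name ollama ollama/ollama
```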


@dhiltgen commented on GitHub (Mar 11, 2024):

I'm unable to reproduce this. When I run the ollama container under Docker Desktop on Windows, I'm able to mount a path into my home directory, pull a model, see the files show up on the host, kill the container, start a new one, and ollama ls shows the model(s) I previously pulled.

I'm using WSL instead of Hyper-V in my Docker Desktop configuration. I'm not sure whether that has any bearing, and I don't see any other Docker Desktop settings that could cause this not to work.


@ddbhatt commented on GitHub (Mar 12, 2024):

Hi Daniel,

Thanks for taking the time to resolve this.

What I did was install Ollama natively on Windows 10 and download models using ollama run. I have downloaded about 15 models totaling 72 GB locally.

Then I came across the article about running Ollama in Docker, so I installed Docker and pointed the local model directory at Docker's Ollama model directory.

The difference is that you first pulled models from the Docker Ollama instance, while I first pulled them from the native Ollama installation.

Hence the error identifying the models. I would like help so that I do not have to download 72 GB of models again and duplicate that space on my system.

Thanks once again for taking the time to understand and help with my issue.


@dhiltgen commented on GitHub (Mar 13, 2024):

Ah, that explains the problem. On Linux and macOS we use ":" in the manifest filenames, but ":" isn't a legal character on Windows filesystems, so we have to translate it; the reverse translation isn't handled by our Linux code today. This is tracked via issue #2032.
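The kind of translation described above can be sketched as follows. This is a hypothetical illustration, not Ollama's actual code: the "%3A" stand-in for ":" is an assumed escape chosen for the example. The point is that a shared model directory only works if both directions of the mapping are implemented on every platform reading it.

```python
# Hypothetical sketch of cross-platform manifest-name translation.
# A Linux/macOS manifest name like "zephyr:latest" contains ":",
# which Windows filesystems reject, so it must be mapped to a safe
# stand-in on write and mapped back on read.

WINDOWS_SAFE = "%3A"  # assumed URL-style escape for ":", for illustration only

def to_windows_name(name: str) -> str:
    """Replace ':' with a Windows-legal stand-in."""
    return name.replace(":", WINDOWS_SAFE)

def from_windows_name(name: str) -> str:
    """Reverse translation so Linux/macOS code recognizes the file again."""
    return name.replace(WINDOWS_SAFE, ":")

# Both directions must round-trip for a shared model directory to work.
assert from_windows_name(to_windows_name("zephyr:latest")) == "zephyr:latest"
```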

Reference: github-starred/ollama#1721