[GH-ISSUE #2574] OLLAMA_MODELS Directory #1513

Closed
opened 2026-04-12 11:25:27 -05:00 by GiteaMirror · 11 comments

Originally created by @shersoni610 on GitHub (Feb 18, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2574

Hello,

I am running Ollama on a Linux machine (zsh shell). I set the environment variable OLLAMA_MODELS to point to an external hard drive.

export OLLAMA_MODELS=/home/akbar/Disk2/Models/Ollama/models

However, the models are still stored in the /usr/share/ollama/.ollama folder. I wish to store all the models on an external drive to save the limited space on the SSD.

Can someone help?

GiteaMirror added the question label 2026-04-12 11:25:27 -05:00

@ForGoodTech commented on GitHub (Feb 18, 2024):

I am a newbie myself with only 2 hours of experience with Ollama, and I had the same question as you.

I think I have figured it out. Essentially, the instructions in the FAQ (https://github.com/ollama/ollama/blob/main/docs/faq.md) work, but they may look slightly confusing because they appear to address a server configuration issue rather than an ollama run issue.

The heart of Ollama is the server. When you run ollama run abc_model, it actually attempts to connect to the server, which manages all the models.

So, when you change your environment variables, you must let the server know one way or another. That means you must restart/reload the server.

Option 1. If you want to run ollama as a service, follow the FAQ.

Option 2. If you want to run the commands by hand, you could do:

export OLLAMA_MODELS=/home/akbar/Disk2/Models/Ollama/models
# Kill the server. By the way, I don't see a command that shuts down the server gracefully. 
ollama serve
ollama run whatever_model_you_want
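
For a graceful stop, a sketch (assuming either the systemd service created by the official install script, or a hand-started server):

sudo systemctl stop ollama       # systemd install
pkill -f "ollama serve"          # hand-started server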

@norton-chris commented on GitHub (Feb 22, 2024):

I'm having a similar issue. I'm using the ollama Docker container and I set OLLAMA_MODELS when the container is created, but it still doesn't find the models when I run ollama list inside the container. Here is my docker-compose file:

services:
  ollama:
    environment:
      - OLLAMA_MODELS=/root/.ollama/models
    volumes:
      - ollama:/root/.ollama
      - /mnt/2TB_SSD/text-gen/text-generation-webui/models:/root/.ollama/models
    container_name: ollama
    pull_policy: always
    tty: true
    restart: unless-stopped
    image: ollama/ollama:latest

When I enter the running container and echo OLLAMA_MODELS, it's correct, but ollama list doesn't show any of the models. Also, the default model location stated in the FAQ doesn't exist in the container. I even tried creating the default location folder and moving one of the models over, but that still doesn't work. I'm not sure how to restart ollama inside the container to debug this.

Any help is greatly appreciated.
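
A note on restarting: in the stock ollama/ollama image the server itself is the container's entrypoint, so there is nothing to restart in place; environment changes take effect by recreating the container. A minimal sketch, assuming the compose service name ollama from the file above:

docker compose up -d --force-recreate ollama
docker exec ollama env | grep OLLAMA
docker exec ollama ollama list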


@pascalandy commented on GitHub (Feb 22, 2024):

You should have this the other way around in your compose file (source:destination):

    volumes:
       - /root/.ollama/models:/mnt/2TB_SSD/text-gen/text-generation-webui/models
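
For reference, Docker Compose short-syntax bind mounts are written host-path:container-path, so the compose file above already had the host SSD directory on the correct (left) side. A sketch with that orientation spelled out, reusing the paths from above:

    volumes:
      - /mnt/2TB_SSD/text-gen/text-generation-webui/models:/root/.ollama/models  # host:container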

@norton-chris commented on GitHub (Feb 22, 2024):

Thanks for the response; however, this didn't solve my issue. I want the models from /mnt/2TB_SSD/text-gen/text-generation-webui/models to be accessible to ollama in Docker. I don't have any models in /root/.ollama/models on my host machine.

To test where ollama stores its models, I downloaded phi by running ollama run phi, which downloads and runs the model. Then I searched for the model file and found this:

find / -name phi
/root/.ollama/models/manifests/registry.ollama.ai/library/phi

ls /root/.ollama/models/manifests/registry.ollama.ai/library/phi
latest

cat /root/.ollama/models/manifests/registry.ollama.ai/library/phi/latest
{"schemaVersion":2,"mediaType":"application/vnd.docker.distribution.manifest.v2+json","config":{"mediaType":"application/vnd.docker.container.image.v1+json","digest":"sha256:4ce4b16d33a334b872b8cc4f9d6929905d0bfa19bdc90c5cbed95700d22f747f","size":555},"layers":[{"mediaType":"application/vnd.ollama.image.model","digest":"sha256:04778965089b91318ad61d0995b7e44fad4b9a9f4e049d7be90932bf8812e828","size":1602461536},{"mediaType":"application/vnd.ollama.image.license","digest":"sha256:7908abcab772a6e503cfe014b6399bd58dea04576aaf79412fa66347c72bdd3f","size":1036},{"mediaType":"application/vnd.ollama.image.template","digest":"sha256:774a15e6f1e5a0ccd2a2df78c20139ab688472bd8ed5f1ed3ef6abf505e02d02","size":77},{"mediaType":"application/vnd.ollama.image.system","digest":"sha256:3188becd6bae82d66a6a3e68f5dee18484bbe19eeed33b873828dfcbbb2db5bb","size":132},{"mediaType":"application/vnd.ollama.image.params","digest":"sha256:0b8127ddf5ee8a3bf3456ad2d4bb5ddbe9927b3bdca10e639f844a12d5b09099","size":42}]}

which references this:

~/.ollama/models/blobs# du ./* -shc
1.5G	./sha256:04778965089b91318ad61d0995b7e44fad4b9a9f4e049d7be90932bf8812e828

How do I replicate this for my models? My docker-compose.yaml above puts the models in the ollama model folder, but I don't know how to reproduce this blob/manifest structure. This seems very complicated.

What is also weird is that the FAQ says the models are stored at /usr/share/ollama/.ollama/models on Linux, but that is not the case on my host machine or in Docker.
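
A likely explanation, offered as a guess from the details above: /usr/share/ollama/.ollama/models is the default only for the Linux systemd install, where the server runs as the dedicated ollama user; inside the Docker image the server runs as root, so its default is /root/.ollama/models. And ollama list only shows models registered in the manifests/blobs store (via ollama pull or ollama create), so raw .gguf files mounted into that directory won't appear. A quick check, assuming the container name ollama:

docker exec ollama ls /root/.ollama/models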


@norton-chris commented on GitHub (Feb 25, 2024):

Sorry, I'm a newbie to Ollama. My previous compose file was fine; I just needed to make a Modelfile and have it point to the model, like this:

FROM /root/.ollama/models/mixtral-8x7b-instruct-v0.1.Q5_K_M.gguf
# ...plus whatever other parameters you want

This may help with the initial question. If changing OLLAMA_MODELS isn't working for whatever reason, just point a Modelfile at the model.
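
A sketch of registering and running it, with an illustrative model name:

ollama create mixtral-local -f Modelfile
ollama run mixtral-local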


@pdevine commented on GitHub (Mar 14, 2024):

@norton-chris the FROM line of the Modelfile is used to pull in a gguf file during ollama create, and it will store the new Ollama model wherever the OLLAMA_MODELS environment variable points. The OLLAMA_MODELS variable needs to be passed to ollama serve when you start the Ollama server; you don't set it with ollama run. This is what @ForGoodTech was mentioning.

I'm going to go ahead and close the issue as answered, but please feel free to keep commenting or even reopen it if it's still confusing.
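
In other words, a minimal sketch with an illustrative path; the variable must be in the server's environment, not the client's:

OLLAMA_MODELS=/mnt/bigdisk/ollama/models ollama serve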


@sytelus commented on GitHub (Mar 27, 2024):

If you want to change this for the service, here's how to do it:

First, add the environment variable for the service. Note that the usual export in .bashrc doesn't work for the service. To do this, run:

sudo systemctl edit ollama.service

Add the environment variable like this in that config:

[Service]
Environment=OLLAMA_MODELS=/<new_path>/ollama/models

Then stop the service and move the directory from the default path to <new_path>:

sudo systemctl stop ollama
sudo mv -f /usr/share/ollama/.ollama/models/  /<new_path>/ollama/

Restart the service:

sudo systemctl daemon-reload
sudo systemctl restart ollama
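
To confirm the drop-in actually took effect, a quick check:

systemctl cat ollama          # should list the override file with your Environment= line
systemctl show ollama --property=Environment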

@norton-chris commented on GitHub (Mar 27, 2024):

I see, thank you @pdevine

@sytelus I haven't run ollama directly with systemctl because I like everything in a container, but that does seem pretty straightforward. docker-compose is the issue for me, because I need to set the variable before ollama is brought up as the container starts. So I need to find a way to set the variable before ollama comes up in Docker.


@davidgfolch commented on GitHub (Sep 19, 2024):

I'm using the Ollama install script (via sh) on Ubuntu 22.04, and @sytelus's explanation didn't work for me; it only works if I modify ollama.service directly. Using @sytelus's approach, systemctl status ollama tells me it's ignoring the modification.
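
A common cause, offered as a guess: systemctl edit writes a drop-in to /etc/systemd/system/ollama.service.d/override.conf, and the Environment= line is ignored unless it sits under a [Service] header in that file and a daemon-reload follows. A quick way to see which files systemd actually merged:

systemctl cat ollama.service
sudo systemctl daemon-reload && sudo systemctl restart ollama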


@srugano commented on GitHub (Jan 26, 2025):

If you are like me and want to use external media to store model files, it will not be possible: the new folder must be owned by the ollama user and group for the service to run. So... that's an issue.

Jan 26 20:03:08 homet systemd[1]: ollama.service: Main process exited, code=exited, status=1/FAILURE
Jan 26 20:03:08 homet ollama[12755]: Error: mkdir /media/stock/e0ff4041-3d3f-4037-8772-bf51ccba2d5b2: permission denied
Jan 26 20:03:08 homet ollama[12755]: 2025/01/26 20:03:08 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_>
Jan 26 20:03:08 homet systemd[1]: Started ollama.service - Ollama Service.
Jan 26 20:03:08 homet systemd[1]: ollama.service: Scheduled restart job, restart counter is at 33.
Jan 26 20:03:05 homet systemd[1]: ollama.service: Failed with result 'exit-code'.
Jan 26 20:03:05 homet systemd[1]: ollama.service: Main process exited, code=exited, status=1/FAILURE
Jan 26 20:03:05 homet ollama[12746]: Error: mkdir /media/stock/e0ff4041-3d3f-4037-8772-bf51ccba2d5b2: permission denied
Jan 26 20:03:05 homet ollama[12746]: 2025/01/26 20:03:05 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_>
Jan 26 20:03:05 homet systemd[1]: Started ollama.service - Ollama Service.
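
If the drive's filesystem supports Unix ownership (ext4 and friends; FAT/exFAT do not), handing the directory to the service user is usually enough. A sketch, reusing the path verbatim from the log above:

sudo mkdir -p /media/stock/e0ff4041-3d3f-4037-8772-bf51ccba2d5b2
sudo chown -R ollama:ollama /media/stock/e0ff4041-3d3f-4037-8772-bf51ccba2d5b2
sudo systemctl restart ollama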


@EthraZa commented on GitHub (Mar 30, 2026):

How to run ollama within Docker:

ollama_update.sh:

#!/bin/bash

docker exec -it ollama ollama --version
docker stop ollama
docker rm ollama
docker run -d --gpus=all \
  -e "OLLAMA_HOST=0.0.0.0" \
  -e "OLLAMA_ORIGINS=*" \
  -e "OLLAMA_NEW_ESTIMATES=1" \
  -e "OLLAMA_FLASH_ATTENTION=1" \
  -e "OLLAMA_KV_CACHE_TYPE=q4_0" \
  -e "OLLAMA_NUM_PARALLEL=4" \
  -e "OLLAMA_MAX_LOADED_MODELS=2" \
  -e "OLLAMA_KEEP_ALIVE=5m" \
  -v /mnt/BIGDISK/ollama/:/root/.ollama \
  -p 11434:11434 \
  --add-host=host.docker.internal:host-gateway \
  --pull=always \
  --restart unless-stopped \
  --name ollama ollama/ollama
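# Re-pull every model listed by `ollama list` (skipping names matching
# /reviewer/) so the local copies are refreshed after the image update: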
docker exec -it ollama sh -c '/usr/bin/ollama list | awk '\''NR>1 && !/reviewer/ {system("echo "$1"; ollama pull "$1)}'\'''
docker exec -it ollama ollama --version

ollama (i.e., add to /usr/local/bin/):

#!/bin/bash

# "$@" preserves each argument's quoting; $* would re-split words
docker exec -it ollama ollama "$@"
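
Make the wrapper executable, and the host-side ollama command then proxies into the container:

sudo chmod +x /usr/local/bin/ollama
ollama list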