[GH-ISSUE #5026] Can I customize OLLAMA_TMPDIR ? #3182

Closed
opened 2026-04-12 13:40:32 -05:00 by GiteaMirror · 6 comments
Owner

Originally created by @prince21000 on GitHub (Jun 13, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5026

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

When I use a Modelfile to create a new model with ollama create xxx -f xxx.modelfile, an error occurs (error: /tmp/ollama-tfxxxxxxx no space left). I noticed that /tmp and /usr/share/ollama share the same root filesystem, which has no space left, so I changed OLLAMA_TMPDIR and OLLAMA_MODELS to a customized location.
In ollama.service, I added:
Environment = "OLLAMA_TMPDIR = /apprun/tmp"
Environment = "OLLAMA_MODELS = /apprun/models".
And I ran sudo chmod -R 777 tmp & sudo chown -R root:root tmp (also tried sudo chown -R ollama:ollama tmp)
sudo chmod -R 777 models & sudo chown -R root:root models (also tried sudo chown -R ollama:ollama models)
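(Side note on the syntax: systemd treats each quoted value as a literal VAR=value assignment, so I believe the whitespace around the inner = would keep the variables from being recognized; the form it expects should be something like:)

```
# inside the [Service] section of ollama.service (or a drop-in override)
Environment="OLLAMA_TMPDIR=/apprun/tmp"
Environment="OLLAMA_MODELS=/apprun/models"
```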

sudo systemctl daemon-reload
sudo systemctl restart ollama.service
sudo systemctl status ollama

And I found that my own private folders and files in /apprun were all missing, though /apprun/tmp exists. The /models folder only contains /blobs (but no /manifest), and is otherwise empty (it should have contained the model files I downloaded before).

Is this normal? Does restarting ollama.service delete all files under the directory where /tmp is located?
Do you have any suggestions for the no-space-left error on Linux when using ollama create? I can confirm that models download to the customized location when using ollama pull, but ollama create still uses the root filesystem where /usr/share is located, which is why I get the no-space error.
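(For reference, this is roughly how I checked which filesystem each path lives on and how full it is:)

```
# show the filesystem and usage for each directory
df -h /tmp /usr/share/ollama
```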

Thank you!

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.1.42

GiteaMirror added the question label 2026-04-12 13:40:32 -05:00

@dhiltgen commented on GitHub (Jun 18, 2024):

Please upgrade to the latest version and take a look at the server log. At startup you should see a line that looks something like this, which will help narrow down whether there's a configuration problem:

```
2024/06/18 19:54:08 routes.go:1025: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:/home/daniel/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
```

If both OLLAMA_TMPDIR and OLLAMA_MODELS are getting set correctly, but you're still seeing it store large files in /tmp, share some more details on your scenario so we can try to repro.
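On a standard Linux install where ollama runs as a systemd service, that startup line can usually be pulled out of the journal with something like:

```
# show the effective env map logged at the most recent service start
journalctl -u ollama --no-pager | grep "server config env"
```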


@dhiltgen commented on GitHub (Aug 1, 2024):

If you're still having trouble setting the alternate temp dir after upgrading, please share your server log and I'll reopen and help troubleshoot.


@publicmatt commented on GitHub (Aug 29, 2024):

This happens to me as well. I set those two variables, and the logs verify that, but when I try to create a model (ollama create -f Modelfile), I still get a no-space-left-on-device error pointing to /tmp instead of OLLAMA_TMPDIR.

Here's the startup:

```
TMPDIR=/scratch TEMP=/scratch OLLAMA_TMPDIR=/scratch OLLAMA_MODELS=/scratch/ollama \
  ollama serve
2024/08/29 09:30:45 routes.go:1099: INFO server config env="... OLLAMA_MODELS:/scratch/ollama ... OLLAMA_TMPDIR:/scratch ..."
```

And the creation:

```
ollama create modelname -f ./Modelfile
transferring model data
Error: write /tmp/ollama-tf303378605: no space left on device
```

That's even after I tried setting the tmp env variable.

Ollama info:

```
ollama --version
ollama version is 0.3.0
```

@publicmatt commented on GitHub (Aug 29, 2024):

I think my issue might be the tmp dir on the client side. I'm going to experiment with a different machine as the client and also setting TMPDIR=/scratch on that machine as well.
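If the temp file does turn out to come from the client process, the experiment would roughly be (reusing my /scratch path from above):

```
# point the client's temp dir at the larger filesystem before running create
TMPDIR=/scratch ollama create modelname -f ./Modelfile
```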


@240db commented on GitHub (Oct 23, 2024):

Hi. I need help on this too.

ollama version is 0.3.12

I am facing a similar issue: I am trying to open a model I fine-tuned, but I get this /tmp/ folder issue. The model I tried to load is 16 GB, but I only had 14 GB free in my /tmp/ folder.

So my idea is to change the tmp folder location that ollama uses; hopefully that fixes it?

So I tried following the instructions for configuring environment variables (https://github.com/ollama/ollama/blob/main/docs/faq.md), but I think I messed up, because ollama was failing...

When I run sudo systemctl edit ollama.service, the file it opens is /etc/systemd/system/ollama.service.d/override.conf, but the unit that is running is /etc/systemd/system/ollama.service.

When I try to add the custom OLLAMA_TMPDIR with sudo systemctl edit ollama.service, so that /etc/systemd/system/ollama.service.d/override.conf contains something like #[Service] Environment="OLLAMA_TMPDIR=/mnt/hdd/tmp_back", systemctl status shows this warning:

Oct 23 09:53:18 parrot systemd[1]: /etc/systemd/system/ollama.service.d/override.conf:2: Assignment outside of section. Ignoring.

I also tried including the [Service] part uncommented, and also adding the line Environment="OLLAMA_TMPDIR=/mnt/hdd/tmp_back" directly to the /etc/systemd/system/ollama.service file, but both fail.

I am not sure how to set these environment variables, and it seems critical for opening my safetensors model with ollama.

Any help is greatly appreciated!
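(Looking at that warning again, my guess is that the [Service] header line was left commented out, so the Environment= assignment falls outside any section. A drop-in of roughly this shape should be accepted, followed by sudo systemctl daemon-reload and sudo systemctl restart ollama:)

```
# /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="OLLAMA_TMPDIR=/mnt/hdd/tmp_back"
```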


@a1ix2 commented on GitHub (Apr 17, 2025):

Sorry to be reviving this old thread, but in case anyone ran into the same problem (like I did while trying to create a quantized version of a huge model) and stumbled on this GitHub issue first, OLLAMA_TMPDIR was ditched in https://github.com/ollama/ollama/commit/4879a234c4bd3f2bbc99d9b09c44bd99fc337679.

ollama now relies on the system default, so add

```
[Service]
Environment="TMPDIR=/mnt/hdd/tmp_back"
```

to the ollama service override.
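Assuming the standard systemd install, the whole sequence is something like:

```
# open (or create) the drop-in override for the service
sudo systemctl edit ollama
# add the [Service] / Environment="TMPDIR=..." block shown above, then:
sudo systemctl daemon-reload
sudo systemctl restart ollama
```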
