[GH-ISSUE #8520] $OLLAMA_MODELS no longer respected? #52004

Closed
opened 2026-04-28 21:36:28 -05:00 by GiteaMirror · 4 comments

Originally created by @yuimbo on GitHub (Jan 21, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8520

What is the issue?

I've been using the OLLAMA_MODELS variable to store my models on an external drive.

I can see that this is set:

```
$ echo $OLLAMA_MODELS
/Volumes/bigdrive/ollama/models
```

I can see that my models are stored there:

```
$ tree $OLLAMA_MODELS
/Volumes/bigdrive/ollama/models
├── blobs
│   ├── sha256-0b4284c1f87029e67654c7953afa16279961632cf73dcfe33374c4c2f298fa35
│   ├── sha256-1ae29500b4be5bb4ce3981e3692ee8689ce5df0ae3080ed4c5ff9f72bf01ba6a
│   ├── sha256-2eedb02591412148c2fea86b5896da88ec5cddea551bcccde3270aa9d1f048ff
│   ....
└── manifests
    └── registry.ollama.ai
        ├── alibayram
        │   └── erurollm-9b-instruct
        │       └── latest
        ├── huihui_ai
        │   ├── llama3.3-abliterated
        │   │   └── latest
        │   └── qwq-abliterated
        │       └── latest
        ├── library
        │   ├── deepseek-r1
        │   │   ├── 32b
        │   │   └── 70b
        ....
```

But for some reason the models do not show up in ollama:

```
$ ollama list
NAME               ID              SIZE     MODIFIED
```

It seems the OLLAMA_MODELS variable is not being respected.

If there are any debug flags I can use to inspect what's going wrong, let me know!
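
One such flag shows up in the server's own config dumps later in this thread: `OLLAMA_DEBUG`. A minimal sketch of a verbose foreground run, assuming the menu-bar app is quit first so this instance owns port 11434:

```sh
# Sketch: run the server in the foreground with debug logging enabled,
# pointing OLLAMA_MODELS at the external drive explicitly for this run.
OLLAMA_DEBUG=1 OLLAMA_MODELS=/Volumes/bigdrive/ollama/models ollama serve
```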

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.5.7

GiteaMirror added the bug label 2026-04-28 21:36:28 -05:00

@rick-github commented on GitHub (Jan 21, 2025):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues).


@yuimbo commented on GitHub (Jan 21, 2025):

```
2025/01/21 16:49:41 routes.go:1187: INFO server config env="map[HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/Users/username/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false http_proxy: https_proxy: no_proxy:]"
```

I can see in the server log that it seems to resolve to the default folder instead: `OLLAMA_MODELS:/Users/username/.ollama/models`.

Possibly related to this earlier log entry:

```
2025/01/08 16:06:48 routes.go:1259: INFO server config env="map[HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/Volumes/bigdrive/ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false http_proxy: https_proxy: no_proxy:]"
Error: mkdir /Volumes/bigdrive: permission denied
```

Looks like it couldn't access the drive for a while (probably when I didn't have it plugged in).

Has my models directory been marked as dirty/inaccessible? Can I reset this somehow?

This might be a red herring, though: I still have a lot of log entries showing the server loading models from the external drive after that point in time.

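One way to check which value each server start actually picked up, assuming the default macOS log location of `~/.ollama/logs/server.log` (per the troubleshooting doc linked above), is to extract it from the recorded config dumps:

```sh
# Sketch: pull the OLLAMA_MODELS value out of every recorded server start
# and count how often each path appears. Log path assumes macOS defaults.
grep -o 'OLLAMA_MODELS:[^ ]*' ~/.ollama/logs/server.log | sort | uniq -c
```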

@rick-github commented on GitHub (Jan 21, 2025):

Are you sure OLLAMA_MODELS is set in the launch context? What does the following show:

```sh
launchctl getenv OLLAMA_MODELS
```

@yuimbo commented on GitHub (Jan 22, 2025):

Ah yes, you are right rick, I had just naively set the variable in my `.zshrc` 🙃

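For anyone hitting the same thing: an export in `.zshrc` only reaches shell sessions, not the server started via the macOS app, which is why `echo $OLLAMA_MODELS` and the server's config dump disagreed. A minimal sketch of setting the variable in the launch context instead, using the path from this thread:

```sh
# Make the variable visible to apps launched via launchd, then restart
# the Ollama menu-bar app so the server picks up the new models path.
launchctl setenv OLLAMA_MODELS /Volumes/bigdrive/ollama/models
```

Note that `launchctl setenv` does not survive a reboot, so it has to be reapplied (or set via a LaunchAgent) after restarting the machine.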