[GH-ISSUE #3624] [v0.1.32-pre for Windows] Can't use ollama run with OLLAMA_MODELS env set #2234

Closed
opened 2026-04-12 12:30:14 -05:00 by GiteaMirror · 14 comments

Originally created by @mann1x on GitHub (Apr 13, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3624

Originally assigned to: @bmizerany on GitHub.

What is the issue?

`ollama run` can no longer be used with the `OLLAMA_MODELS` environment variable set; instead it fails with:

`Error: OLLAMA_MODELS must only be set for 'ollama serve'`

What did you expect to see?

The ollama chat prompt

Steps to reproduce

1. Set the `OLLAMA_MODELS` system environment variable.
2. Type `ollama run anymodel` (any model name); see the example below.
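
A minimal PowerShell repro sketch; the models path and model name here are illustrative, not taken from the original report:

```powershell
# OLLAMA_MODELS points at a custom models directory (illustrative path)
$env:OLLAMA_MODELS = "D:\ollama\models"

# On 0.1.31 this opens the chat prompt; on 0.1.32-pre it exits with:
#   Error: OLLAMA_MODELS must only be set for 'ollama serve'
ollama run llama2
```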

Are there any recent changes that introduced the issue?

The latest prerelease

OS

Windows

Architecture

amd64

Platform

No response

Ollama version

0.1.32-pre

GPU

No response

GPU info

No response

CPU

AMD

Other software

No response

GiteaMirror added the bug label 2026-04-12 12:30:14 -05:00

@MoonRide303 commented on GitHub (Apr 13, 2024):

Same problem for me - the only change was updating from 0.1.31 to 0.1.32-rc1, and now `ollama run [model_name]` results in `Error: OLLAMA_MODELS must only be set for 'ollama serve'` (for any model). Windows, too.

@Real00 commented on GitHub (Apr 15, 2024):

Maybe it's related to this PR
https://github.com/ollama/ollama/pull/3470

@AjmalShajahan commented on GitHub (Apr 15, 2024):

It was added in #3470.
You can try this in PowerShell:

`$env:OLLAMA_MODELS=""; ollama run MODEL`

This sets `OLLAMA_MODELS` to "" for the current session; when downloading or updating models in the same session, you'll need to set it back to the path where the models are kept.
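
Spelled out as a full PowerShell session, the suggested workaround looks roughly like this (the models path is illustrative):

```powershell
# Clear OLLAMA_MODELS for this shell only, so the client-side check passes
$env:OLLAMA_MODELS = ""
ollama run MODEL

# To pull or update models in the same shell, point it back at the models directory
$env:OLLAMA_MODELS = "D:\ollama\models"
ollama pull MODEL
```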

@MoonRide303 commented on GitHub (Apr 15, 2024):

@AjmalShajahan but `OLLAMA_MODELS` always points to the path where I keep all my Ollama model files. I didn't change anything specifically for launching 0.1.32-rc1: 0.1.31 works fine, and 0.1.32-rc1 just doesn't.

@MoonRide303 commented on GitHub (Apr 15, 2024):

@jmorganca @bmizerany Could you guys re-think that change (https://github.com/ollama/ollama/pull/3470)? I don't see how that "helpful error message" solves anything - it just breaks essential functionality like `ollama run` for me.

@mann1x commented on GitHub (Apr 15, 2024):

> OLLAMA_MODELS is set to "" for the current session; when downloading or updating models in the same session, you'll need to set it back to the path where the models are kept.

Yes, it works of course, but I run Ollama on Windows, where `OLLAMA_MODELS` is set as a system environment variable. It can't be otherwise.

I don't really understand what the problem with `run` was in that PR, but it completely breaks the client functionality, while the aim seems to have been only to show a diagnostic message.
Better to find a different solution; I'll try to understand what the original issue was.

If such a diagnostic message is needed, maybe it's better to have it printed only when `OLLAMA_DEBUG` is set.
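
For reference, opting into debug output on Windows is just a matter of setting that variable before starting Ollama; a quick sketch (how any gated message would behave is of course up to the maintainers):

```powershell
# Enable verbose/diagnostic logging for this session
$env:OLLAMA_DEBUG = "1"
ollama serve
```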

@mann1x commented on GitHub (Apr 15, 2024):

I had a look at the code but I couldn't see any reason why `run` should care about the `OLLAMA_MODELS` env...

@MoonRide303 commented on GitHub (Apr 15, 2024):

@mann1x `run` uses it to know the path where model files are stored - at least that's how it works in 0.1.31.

@mann1x commented on GitHub (Apr 15, 2024):

@MoonRide303

`run` launches the client, which only needs `OLLAMA_HOST`; it only connects to the server, it doesn't interact directly with the models.
Otherwise it wouldn't need the variable unset but set.
There must be another reason, but I can't see it.
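
To illustrate the point (host and model name are made up): the CLI client just talks to whatever server `OLLAMA_HOST` points at, without touching the models directory itself:

```powershell
# Point the client at an already-running server; no OLLAMA_MODELS needed on the client side
$env:OLLAMA_HOST = "http://127.0.0.1:11434"
ollama run llama2
```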

@mann1x commented on GitHub (Apr 15, 2024):

We need @bmizerany's input on what his issue with `OLLAMA_MODELS` and `run` was, but I suspect this is a bug.
Anyway, it's a no-go for Windows; if there's an issue, we must find a different solution.

@MoonRide303 commented on GitHub (Apr 15, 2024):

@mann1x I just use `ollama run` like `main` from llama.cpp. I get that it might be launching the server in the background, but I don't set `OLLAMA_HOST` or launch the server explicitly.

@mann1x commented on GitHub (Apr 15, 2024):

@MoonRide303 Yes, that's right; `OLLAMA_HOST` doesn't need to be set, it defaults to localhost, and the server is launched in the background if it's not started already.

@jmorganca commented on GitHub (Apr 16, 2024):

Hi folks, this should be fixed now, and sorry for the issue!

@mann1x commented on GitHub (Apr 16, 2024):

@jmorganca Thanks, we were just confused!

Reference: github-starred/ollama#2234