[GH-ISSUE #1836] Question: where are Ollama models saved on Linux (in WSL on Windows)? #63083

Closed
opened 2026-05-03 11:43:18 -05:00 by GiteaMirror · 6 comments
Owner

Originally created by @zephirusgit on GitHub (Jan 7, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/1836

Hello, I'm running Ollama in WSL (Windows Subsystem for Linux) on Windows. My problem is that when you download a new model (llama2, llava, or one you create), the model gets downloaded or copied into some folder. Where? Inside the WSL Linux filesystem? Or on Windows?
For example, I wanted to run the mixtral model, which takes up 26 GB. If I already have it somewhere, downloading it again would duplicate it, and I can't afford that.
Does anyone know where those files end up?
Thanks in advance.
On Windows, llama2 and llava (describing images) run very well for me, compared to another llava I ran before, which needed 3 simultaneous processes and used about 90 GB of RAM.
Anyway, any tip for finding them is appreciated.
I saw that if I create models and then delete them, they are erased; but since I have very little disk space, I want to see how I can use existing model files without duplicating them.
I'm thinking of moving the model store to another drive and running from there, so as not to run out of space. I already have very little left. Greetings!
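For anyone landing here with the same question: per the Ollama FAQ, the default model store for a Linux service install (which is what a WSL setup uses) is `/usr/share/ollama/.ollama/models`. A quick check from inside WSL, as a sketch (the path is the documented default, not something verified for every install):

```shell
# Default store for a Linux service install, per the Ollama FAQ.
MODELS_DIR=/usr/share/ollama/.ollama/models
if [ -d "$MODELS_DIR" ]; then
    # Show how much disk the downloaded models occupy.
    du -sh "$MODELS_DIR"
else
    echo "no models at $MODELS_DIR (check ~/.ollama/models instead)"
fi
```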

Author
Owner

@ewebgh33 commented on GitHub (Jan 10, 2024):

I would like to add to this: is there a way we can point to a common repo on our HDD/SSD, rather than have every LLM app download its own copy of the model and end up with 5x Mistrals on disk?

And yes, when a model is auto-downloaded, where does it go please?

Author
Owner

@zephirusgit commented on GitHub (Jan 11, 2024):

Thanks, I found it at

C:\Users\*****\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu_79*****gsc\LocalState\ext4.vhdx

I did not know that this virtual disk could be compacted! Sharing layers is a good idea; it occupies 66 GB now. I have it on a very fast M.2 drive, so everything is almost instantaneous. I wanted it to detect my Nvidia GPU, and that doesn't work, but maybe you could copy that ext4.vhdx file and see if it works after replacing it?
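A note on compacting: deleting files inside WSL does not automatically shrink the `ext4.vhdx` on the Windows side; compacting it is a separate Windows-side step. A sketch, with `<user>` and `<hash>` as placeholders for the parts elided in the path above:

```shell
# The Windows-side commands run outside WSL, so they are shown as comments:
#   wsl --shutdown          (from PowerShell, so the VHDX is not in use)
#   diskpart /s compact.txt (from an elevated command prompt)
# compact.txt is a diskpart script; <user> and <hash> are placeholders:
cat > compact.txt <<'EOF'
select vdisk file="C:\Users\<user>\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu_<hash>\LocalState\ext4.vhdx"
compact vdisk
EOF
```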

Author
Owner

@ewebgh33 commented on GitHub (Jan 11, 2024):

I found my models are going into
\\wsl.localhost\Ubuntu\usr\share\ollama\.ollama\models

And the FAQ says we can move this folder with a change to an environment variable.

BUT
What are these blobs?

The models I want to run, I have already downloaded. I've tried a lot of LLM apps, and the models are named like so:
model.safetensors
In a folder with the name of the model:
models\TheBloke_Orca-2-13B-GPTQ
And some JSONs for settings.

How do I get Ollama to use that model? It seems like I can't simply point it at that models folder, because Ollama is expecting:
sha256:8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
??
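For the record: Ollama cannot read a GPTQ/safetensors folder directly; the supported route for reusing an already-downloaded model is a GGUF file referenced from a Modelfile. A sketch, with a hypothetical GGUF path:

```shell
# Hypothetical path: Ollama imports GGUF files, not GPTQ/safetensors folders.
cat > Modelfile <<'EOF'
FROM /mnt/d/models/orca-2-13b.Q4_K_M.gguf
EOF
# Then register and run it (commented out; requires ollama to be installed):
#   ollama create orca2-local -f Modelfile
#   ollama run orca2-local
```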

Author
Owner

@zephirusgit commented on GitHub (Jan 11, 2024):

I have tried several LLM runners, although Ollama is the fastest. I was using Zephyr (zephyr-7b-beta), although I haven't yet tried creating it inside Ollama; I'll report back. I think I will have to remove mixtral and try, because I have no space left.

Author
Owner

@ewebgh33 commented on GitHub (Jan 11, 2024):

@dcasota appreciate you're trying to be helpful, I was assuming the devs check these issues once in a while. If you're not a dev no need to answer that you don't know. But thanks.

Author
Owner

@pdevine commented on GitHub (Mar 11, 2024):

Sorry for the slow response. There's actually an [FAQ](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-set-them-to-a-different-location) which explains how to do this. The short answer is: use the `OLLAMA_MODELS` environment variable if you want to put the models in a different location.

One big caveat here is that Windows and Linux use different file names for the blobs, because NTFS doesn't support `:` in a file name. We've been talking about changing Linux to use the same file names to make this cross-platform compatible in the future, but there's no timeframe for that right now.

I'm going to go ahead and close the issue, but feel free to keep commenting.
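To make that concrete, a sketch of setting `OLLAMA_MODELS` (the target path is an example), covering both a manual `ollama serve` and the systemd service install:

```shell
# Example target path on a second drive; pick any directory the
# ollama user can write to.
export OLLAMA_MODELS=/mnt/d/ollama-models
echo "$OLLAMA_MODELS"
# For the systemd service install, set it in an override instead
# (commented out; requires root and a running service):
#   sudo systemctl edit ollama
#     [Service]
#     Environment="OLLAMA_MODELS=/mnt/d/ollama-models"
#   sudo systemctl restart ollama
```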

Reference: github-starred/ollama#63083