[GH-ISSUE #4579] Redownloading model on run command after runner crash #28633

Closed
opened 2026-04-22 07:06:17 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @TipuatGit on GitHub (May 22, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4579

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

I downloaded llama3 with the `ollama run llama3` command. After it downloaded, it first failed with the error:
`Error: llama runner process has terminated: exit status 0xc0000005`

Then, to see if the error would reproduce, I ran `ollama run llama3` again, but it started to download the model all over again. I don't understand why it's doing that, and it keeps happening: every time I run the command it just goes back to redownloading.

Note: I changed the model directory by creating the environment variable `OLLAMA_MODELS`, as per the instructions in the FAQ. The model is now on both the C drive and the other drive I chose.

Foremost, I would like it to stop redownloading and use what is already on my system. That is the top priority.
Please help, guys.
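For context on the `OLLAMA_MODELS` note above: on Windows the variable must be set as a *user* environment variable and Ollama restarted before it takes effect, otherwise the server keeps using the default location under the C drive. A minimal sketch (the path `D:\ollama\models` is just an example, not from this issue):

```shell
:: Set OLLAMA_MODELS persistently for the current user (Windows cmd).
:: setx writes to the registry; it does NOT affect the current session.
setx OLLAMA_MODELS "D:\ollama\models"

:: Quit Ollama from the system tray and start it again, then verify
:: in a NEW terminal that the variable is visible:
echo %OLLAMA_MODELS%
```

If Ollama is started before the variable is visible to it, models will still be read from and downloaded to the default directory.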

OS

Windows

GPU

Intel

CPU

Intel

Ollama version

0.1.38

GiteaMirror added the needs more info, bug, windows labels 2026-04-22 07:06:17 -05:00
Author
Owner

@dp289m commented on GitHub (May 27, 2024):

Same here.
Intel i9 MacBook, Darwin 23.1.0
ollama 0.1.38

Author
Owner

@zombodotcom commented on GitHub (Jun 29, 2024):

I'm experiencing this issue, coming from https://github.com/ollama/ollama/issues/2551#issuecomment-2198366862
#2551

I changed the environment variables because, even after the command-line install and specifying a directory, it still downloaded to the C drive.

Author
Owner

@dhiltgen commented on GitHub (Oct 23, 2024):

Looks like this issue slipped through the cracks.

We don't currently support Intel GPUs, so I presume you're running on CPU. We should try to understand the crash, so the first thing I'd recommend is upgrading to the latest version; if you're still seeing a crash, please share the server log so we can see why it crashed.

Occasionally we update models with fixes for templates, etc. When you perform `ollama pull`, we'll check whether the model has changed and pull only if it did. `ollama run` should not do this, and will use the existing pulled model even if it has changed on ollama.com. It's possible there's some filesystem corruption, running out of disk space, or another failure that's causing the model to be corrupted or removed, leading to the re-pull. Again, server logs will help diagnose.
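If local corruption is the suspected cause of the re-pull, one way to check is to verify the content-addressed model store directly. This is a hedged sketch, assuming Ollama's usual layout where layer files live under `<models_dir>/blobs` and are named `sha256-<digest>`; `verify_blobs` is a hypothetical helper written for this thread, not part of Ollama:

```python
import hashlib
import os

def verify_blobs(models_dir):
    """Recompute the SHA-256 of each blob under <models_dir>/blobs and
    compare it to the digest encoded in the filename (assumed layout:
    files named 'sha256-<digest>'). Returns [(filename, ok), ...]."""
    results = []
    blob_dir = os.path.join(models_dir, "blobs")
    for name in sorted(os.listdir(blob_dir)):
        if not name.startswith("sha256-"):
            continue  # skip anything that isn't a content-addressed blob
        expected = name.split("-", 1)[1]
        h = hashlib.sha256()
        with open(os.path.join(blob_dir, name), "rb") as f:
            # Hash in 1 MiB chunks so large model layers don't need
            # to fit in memory at once.
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        results.append((name, h.hexdigest() == expected))
    return results
```

A blob whose recomputed digest does not match its filename would be a candidate for the corruption dhiltgen describes; the server log should show the corresponding verification failure.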

Reference: github-starred/ollama#28633