[GH-ISSUE #6098] Why is the llama3 model missing after I restart Ollama? When I run “ollama run llama3”, it re-pulls the manifest. #3814

Closed
opened 2026-04-12 14:39:00 -05:00 by GiteaMirror · 6 comments

Originally created by @fanjikang on GitHub (Jul 31, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6098

What is the issue?

Why is the llama3 model missing after I restart Ollama? When I run “ollama run llama3”, it re-pulls the manifest.

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

v0.2.8

GiteaMirror added the bug label 2026-04-12 14:39:00 -05:00

@rick-github commented on GitHub (Jul 31, 2024):

Did you change the server environment variable OLLAMA_MODELS? If you add server logs it may be easier to diagnose the issue.
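A quick way to answer that question is to print the directory the server will actually use. This is my illustration, not a command from the thread; the default path assumes a Linux systemd install.

```shell
# Sketch: print the directory the server will use for models.
# /usr/share/ollama/.ollama/models is the default for the Linux
# systemd install; OLLAMA_MODELS, if set, overrides it.
model_dir="${OLLAMA_MODELS:-/usr/share/ollama/.ollama/models}"
echo "server model directory: $model_dir"
ls -ld "$model_dir" 2>/dev/null || echo "directory not found or not readable"
```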


@Siddharth-Latthe-07 commented on GitHub (Aug 2, 2024):

The above suggests that the model is not being stored persistently, or that a configuration problem is removing it on restart.
Steps to diagnose:

  1. Check the Ollama configuration
  2. Check disk space and permissions
  3. Check the model cache and log directories
  4. Check for service restarts and version updates
  5. Configure persistent storage:
    ollama config set model_storage_path /path/to/persistent/storage
  6. Check and set up the model cache directory

Hope this helps,
Thanks
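Steps 2 and 3 above can be sketched as shell checks. This is my illustration, not from the thread; it assumes the default Linux model path and that the models live on the root volume.

```shell
# Sketch of the disk-space and permissions checks, assuming the
# default Linux model path; adjust if OLLAMA_MODELS is set.
models="${OLLAMA_MODELS:-/usr/share/ollama/.ollama/models}"
ls -ld "$models" "$models/manifests" 2>/dev/null   # ownership/permissions
free_kb=$(df -Pk / | awk 'NR==2 {print $4}')       # free space, in KB
echo "free space on /: ${free_kb} KB"
```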


@FellowTraveler commented on GitHub (Aug 12, 2024):

@fanjikang Can you please confirm, using the tips above, whether this is a real issue? Otherwise please close this issue.


@skjortan23 commented on GitHub (Feb 6, 2025):

I am actually having the same problem. After a power outage, ollama list is empty, but I can still see all my models by listing the model directory, e.g.:

```
/usr/share/ollama/.ollama/models$ du -h ./
126G	./blobs
8,0K	./manifests/registry.ollama.ai/hengwen/watt-tool-8B
12K	./manifests/registry.ollama.ai/hengwen
8,0K	./manifests/registry.ollama.ai/library/llama3-groq-tool-use
8,0K	./manifests/registry.ollama.ai/library/codeqwen
8,0K	./manifests/registry.ollama.ai/library/phi4
8,0K	./manifests/registry.ollama.ai/library/Qwen2.5
12K	./manifests/registry.ollama.ai/library/qwen2.5-coder
8,0K	./manifests/registry.ollama.ai/library/deepseek-coder-v2
8,0K	./manifests/registry.ollama.ai/library/deepseek-coder
8,0K	./manifests/registry.ollama.ai/library/llama3.1
16K	./manifests/registry.ollama.ai/library/deepseek-r1
8,0K	./manifests/registry.ollama.ai/library/llama3.2
8,0K	./manifests/registry.ollama.ai/library/gemma2
8,0K	./manifests/registry.ollama.ai/library/yi-coder
8,0K	./manifests/registry.ollama.ai/library/phi3
8,0K	./manifests/registry.ollama.ai/library/mixtral
8,0K	./manifests/registry.ollama.ai/library/mistral
136K	./manifests/registry.ollama.ai/library
152K	./manifests/registry.ollama.ai
156K	./manifests
126G	./
```
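As far as I understand, ollama list enumerates the files under manifests/, so an empty listing while blobs and manifests survive on disk usually means the server is reading a different path or cannot read the files. A minimal simulation of how tags map to manifest files (a temporary directory stands in for the real path; illustration only):

```shell
# Sketch: each manifest file path encodes registry/namespace/model/tag;
# a model listing shows one entry per such file. Simulated in a temp dir.
models=$(mktemp -d)
mkdir -p "$models/manifests/registry.ollama.ai/library/llama3.1"
: > "$models/manifests/registry.ollama.ai/library/llama3.1/latest"
count=$(find "$models/manifests" -type f | wc -l)
echo "manifest files (models a listing would show): $count"
rm -r "$models"
```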

@rick-github commented on GitHub (Feb 6, 2025):

Did you change the server environment variable OLLAMA_MODELS? If you add server logs (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) it may be easier to diagnose the issue.


@FellowTraveler commented on GitHub (Feb 7, 2025):

You're probably aware of this but it's worth mentioning...

When I do "ollama list" I see this in the list:
llama3.3:70b-instruct-q5_K_M

...but q5_K_M is NOT the default quant. So if I do "ollama run llama3.3:70b-instruct-q5_K_M" it will run immediately.

But if I do "ollama run llama3.3" it will start downloading q4_0 or whatever the default quant is.
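The behaviour described above follows from tag resolution: a bare model name is shorthand for the ":latest" tag, which is a separate manifest from an explicit quant tag, so the default quant gets downloaded even though another quant is already local. A sketch of the rule (my illustration, not Ollama's code):

```shell
# Sketch: bare names resolve to ":latest", so "llama3.3" and
# "llama3.3:70b-instruct-q5_K_M" are different manifests and the
# default quant is downloaded separately.
resolve() { case "$1" in *:*) echo "$1" ;; *) echo "$1:latest" ;; esac; }
resolve llama3.3                         # -> llama3.3:latest
resolve llama3.3:70b-instruct-q5_K_M     # -> unchanged
```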

Reference: github-starred/ollama#3814