Deleting partially downloaded models not working in WSL2 #8410

Closed
opened 2025-11-12 14:41:11 -06:00 by GiteaMirror · 6 comments
Owner

Originally created by @somera on GitHub (Oct 16, 2025).

What is the issue?

I started downloading the new model hf.co/unsloth/gpt-oss-20b-GGUF:Q6_K but stopped the download. As noted in https://github.com/ollama/ollama/issues/1599, any partially downloaded files should be removed when Ollama starts (OLLAMA_CONTEXT_LENGTH=24576 OLLAMA_HOST=0.0.0.0 ollama serve).

This is not working for me:

$ ls -al ~/.ollama/models/blobs/*partial*
-rw-r--r-- 1 somera somera 12G Oct 16 19:56 /home/somera/.ollama/models/blobs/sha256-f2ead52a7842a556ea10d5edd91a88e5af974856d2485ecaad3aade679832935-partial
-rw-r--r-- 1 somera somera  57 Oct 16 19:56 /home/somera/.ollama/models/blobs/sha256-f2ead52a7842a556ea10d5edd91a88e5af974856d2485ecaad3aade679832935-partial-0
-rw-r--r-- 1 somera somera  65 Oct 16 19:56 /home/somera/.ollama/models/blobs/sha256-f2ead52a7842a556ea10d5edd91a88e5af974856d2485ecaad3aade679832935-partial-1
-rw-r--r-- 1 somera somera  67 Oct 16 19:56 /home/somera/.ollama/models/blobs/sha256-f2ead52a7842a556ea10d5edd91a88e5af974856d2485ecaad3aade679832935-partial-10
-rw-r--r-- 1 somera somera  67 Oct 16 19:56 /home/somera/.ollama/models/blobs/sha256-f2ead52a7842a556ea10d5edd91a88e5af974856d2485ecaad3aade679832935-partial-11
-rw-r--r-- 1 somera somera  67 Oct 16 19:56 /home/somera/.ollama/models/blobs/sha256-f2ead52a7842a556ea10d5edd91a88e5af974856d2485ecaad3aade679832935-partial-12
-rw-r--r-- 1 somera somera  67 Oct 16 19:56 /home/somera/.ollama/models/blobs/sha256-f2ead52a7842a556ea10d5edd91a88e5af974856d2485ecaad3aade679832935-partial-13
-rw-r--r-- 1 somera somera  68 Oct 16 19:56 /home/somera/.ollama/models/blobs/sha256-f2ead52a7842a556ea10d5edd91a88e5af974856d2485ecaad3aade679832935-partial-14
-rw-r--r-- 1 somera somera  68 Oct 16 19:56 /home/somera/.ollama/models/blobs/sha256-f2ead52a7842a556ea10d5edd91a88e5af974856d2485ecaad3aade679832935-partial-15
-rw-r--r-- 1 somera somera  66 Oct 16 19:56 /home/somera/.ollama/models/blobs/sha256-f2ead52a7842a556ea10d5edd91a88e5af974856d2485ecaad3aade679832935-partial-2
-rw-r--r-- 1 somera somera  66 Oct 16 19:56 /home/somera/.ollama/models/blobs/sha256-f2ead52a7842a556ea10d5edd91a88e5af974856d2485ecaad3aade679832935-partial-3
-rw-r--r-- 1 somera somera  66 Oct 16 19:56 /home/somera/.ollama/models/blobs/sha256-f2ead52a7842a556ea10d5edd91a88e5af974856d2485ecaad3aade679832935-partial-4
-rw-r--r-- 1 somera somera  66 Oct 16 19:56 /home/somera/.ollama/models/blobs/sha256-f2ead52a7842a556ea10d5edd91a88e5af974856d2485ecaad3aade679832935-partial-5
-rw-r--r-- 1 somera somera  66 Oct 16 19:56 /home/somera/.ollama/models/blobs/sha256-f2ead52a7842a556ea10d5edd91a88e5af974856d2485ecaad3aade679832935-partial-6
-rw-r--r-- 1 somera somera  66 Oct 16 19:56 /home/somera/.ollama/models/blobs/sha256-f2ead52a7842a556ea10d5edd91a88e5af974856d2485ecaad3aade679832935-partial-7
-rw-r--r-- 1 somera somera  66 Oct 16 19:56 /home/somera/.ollama/models/blobs/sha256-f2ead52a7842a556ea10d5edd91a88e5af974856d2485ecaad3aade679832935-partial-8
-rw-r--r-- 1 somera somera  66 Oct 16 19:56 /home/somera/.ollama/models/blobs/sha256-f2ead52a7842a556ea10d5edd91a88e5af974856d2485ecaad3aade679832935-partial-9
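Until the startup prune runs, the leftover `-partial` blobs can be cleared by hand. A minimal sketch, using a temporary directory as a stand-in for `~/.ollama/models/blobs` (stop the server before deleting the real files):

```shell
# Stand-in for ~/.ollama/models/blobs, populated with fake blob files.
blobs=$(mktemp -d)
touch "$blobs/sha256-aaa-partial" "$blobs/sha256-aaa-partial-0" "$blobs/sha256-bbb"

# Delete only the partial-download files; completed blobs are untouched.
rm -f "$blobs"/*-partial*

ls "$blobs"
```

With the server stopped, the same `rm` pattern against the real blobs directory reclaims the 12G shown above.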

Relevant log output


OS

WSL2

GPU

Nvidia

CPU

AMD

Ollama version

0.12.5

GiteaMirror added the bug label 2025-11-12 14:41:11 -06:00

@rick-github commented on GitHub (Oct 16, 2025):

Do you have OLLAMA_NO_PRUNE set in the server environment?
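One quick way to check is to list any `OLLAMA_*` variables visible to the shell that launches the server. (The variable appears as `OLLAMA_NOPRUNE` in the server config dump; the exported value below is an example only.)

```shell
# Example value for illustration only.
export OLLAMA_NOPRUNE=false

# List every OLLAMA_* variable a process started from this shell would inherit.
env | grep '^OLLAMA_'
```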


@somera commented on GitHub (Oct 16, 2025):

Nope. I start Ollama manually:

OLLAMA_CONTEXT_LENGTH=24576 OLLAMA_HOST=0.0.0.0 ollama serve

or

OLLAMA_HOST=0.0.0.0 ollama serve


@rick-github commented on GitHub (Oct 16, 2025):

What are the first 10 lines of the log when you start Ollama?


@somera commented on GitHub (Oct 16, 2025):

I see OLLAMA_NOPRUNE:false in

$ OLLAMA_HOST=0.0.0.0 ollama serve
time=2025-10-16T22:08:24.266+02:00 level=INFO source=routes.go:1481 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/somera/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-10-16T22:08:24.294+02:00 level=WARN source=routes.go:1493 msg="corrupt manifests detected, skipping prune operation.  Re-pull or delete to clear" error="registry.ollama.ai/library/qwen3:4b-instruct-2507-q4_K_M EOF"
time=2025-10-16T22:08:24.295+02:00 level=INFO source=routes.go:1534 msg="Listening on [::]:11434 (version 0.12.5)"
time=2025-10-16T22:08:24.297+02:00 level=INFO source=runner.go:80 msg="discovering available GPUs..."
time=2025-10-16T22:08:26.094+02:00 level=INFO source=types.go:112 msg="inference compute" id=GPU-12ee681d-be61-a8c4-4271-304292804c1c library=CUDA compute=8.9 name=CUDA0 description="NVIDIA GeForce RTX 4060 Ti" libdirs=ollama,cuda_v13 driver=13.0 pci_id=01:00.0 type=discrete total="16.0 GiB" available="14.9 GiB"
time=2025-10-16T22:08:26.095+02:00 level=INFO source=routes.go:1575 msg="entering low vram mode" "total vram"="16.0 GiB" threshold="20.0 GiB"

I can't find any information about OLLAMA_NO_PRUNE.

Or is this the first problem?

corrupt manifests detected, skipping prune operation. Re-pull or delete to clear" error="registry.ollama.ai/library/qwen3:4b-instruct-2507-q4_K_M EOF


@rick-github commented on GitHub (Oct 16, 2025):

time=2025-10-16T22:08:24.294+02:00 level=WARN source=routes.go:1493 msg="corrupt manifests detected, skipping prune operation.  Re-pull or delete to clear" error="registry.ollama.ai/library/qwen3:4b-instruct-2507-q4_K_M EOF"

The housekeeping is being interrupted by a corrupt manifest. Delete that manifest and the server should perform its normal housekeeping on restart.
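A sketch of that cleanup, assuming the usual layout where manifests live under `~/.ollama/models/manifests/<registry>/<namespace>/<model>/<tag>` (a temporary directory stands in for the real one here; the zero-byte file mimics a manifest that fails to parse with EOF):

```shell
# Stand-in for ~/.ollama/models/manifests with a corrupt (empty) manifest.
manifests=$(mktemp -d)
mkdir -p "$manifests/registry.ollama.ai/library/qwen3"
: > "$manifests/registry.ollama.ai/library/qwen3/4b-instruct-2507-q4_K_M"  # empty file: parsing yields EOF

# Remove the corrupt manifest so startup pruning can proceed.
rm "$manifests/registry.ollama.ai/library/qwen3/4b-instruct-2507-q4_K_M"
```

Alternatively, `ollama pull qwen3:4b-instruct-2507-q4_K_M` re-fetches a valid manifest, or `ollama rm qwen3:4b-instruct-2507-q4_K_M` removes the model entirely.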


@somera commented on GitHub (Oct 16, 2025):

thx


Reference: github-starred/ollama-ollama#8410