[GH-ISSUE #8784] Crash when trying to download model #52215

Closed
opened 2026-04-28 22:32:02 -05:00 by GiteaMirror · 8 comments
Owner

Originally created by @rilendorf on GitHub (Feb 3, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8784

What is the issue?

❯ ollama serve
2025/02/03 08:16:04 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/derz/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-02-03T08:16:04.467+01:00 level=INFO source=images.go:432 msg="total blobs: 5"
time=2025-02-03T08:16:04.467+01:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-03T08:16:04.467+01:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7)"
time=2025-02-03T08:16:04.468+01:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2]"
time=2025-02-03T08:16:04.468+01:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-02-03T08:16:04.571+01:00 level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered"
time=2025-02-03T08:16:04.571+01:00 level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="31.1 GiB" available="28.4 GiB"
[GIN] 2025/02/03 - 08:17:43 | 200 |      83.212µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/03 - 08:17:43 | 404 |     632.074µs |       127.0.0.1 | POST     "/api/show"
time=2025-02-03T08:17:44.899+01:00 level=INFO source=download.go:175 msg="downloading 96c415656d37 in 16 292 MB part(s)"
time=2025-02-03T08:25:30.769+01:00 level=INFO source=download.go:175 msg="downloading 369ca498f347 in 1 387 B part(s)"
panic: runtime error: index out of range [0] with length 0

goroutine 43 [running]:
github.com/ollama/ollama/server.(*blobDownload).Prepare(0xc0004a2af0, {0x64b75e4b8300, 0xc000138140}, 0xc0005a8480, 0xc00036c3c0)
	/build/ollama/src/ollama/server/download.go:175 +0x539
github.com/ollama/ollama/server.downloadBlob({0x64b75e4b8300, 0xc000138140}, {{{0x64b75e08b3c7, 0x5}, {0x64b75e09f797, 0x12}, {0x64b75e093ec5, 0x7}, {0xc00049a0e0, 0xb}, ...}, ...})
	/build/ollama/src/ollama/server/download.go:489 +0x4da
github.com/ollama/ollama/server.PullModel({0x64b75e4b8300, 0xc000138140}, {0xc00049a0e0, 0x12}, 0xc00036c3c0, 0xc0004862f0)
	/build/ollama/src/ollama/server/images.go:564 +0x771
github.com/ollama/ollama/server.(*Server).PullHandler.func1()
	/build/ollama/src/ollama/server/routes.go:594 +0x197
created by github.com/ollama/ollama/server.(*Server).PullHandler in goroutine 10
	/build/ollama/src/ollama/server/routes.go:581 +0x691

To trigger it, I ran

❯ ollama run deepseek-r1
pulling manifest
pulling manifest
pulling 96c415656d37... 100% ▕██████████████████████▏ 4.7 GB
pulling 369ca498f347... 100% ▕██████████████████████▏  387 B
Error: Post "http://127.0.0.1:11434/api/show": dial tcp 127.0.0.1:11434: connect: connection refused

According to fish, the command took 7m48s.

I tried downloading the model again; no cache was preserved.

OS

Linux

GPU

Other

CPU

Intel

Ollama version

0.5.7

GiteaMirror added the bug label 2026-04-28 22:32:02 -05:00
Author
Owner

@rilendorf commented on GitHub (Feb 3, 2025):

Checked the code, the line responsible seems to be:
https://github.com/ollama/ollama/blob/v0.5.7/server/download.go#L175

slog.Info(fmt.Sprintf("downloading %s in %d %s part(s)", b.Digest[7:19], len(b.Parts), format.HumanBytes(b.Parts[0].Size)))

So either len(b.Digest) == 0 or len(b.Parts) == 0. Reading further up, the Digest being empty seems unlikely, so most likely the Parts were never populated? If the panic message is correct, neither slice is nil, just zero-length.

Note: download worked second try
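A minimal, hypothetical sketch of a guarded version of that logging call (the struct and method names here are simplified stand-ins, not the real types in server/download.go, and this is not necessarily how the eventual fix works): checking both lengths before indexing turns the panic into a recoverable error.

```go
package main

import (
	"fmt"
	"log/slog"
)

// part mirrors the per-chunk bookkeeping discussed above
// (hypothetical struct; the real type lives in server/download.go).
type part struct{ Size int64 }

type blobDownload struct {
	Digest string
	Parts  []part
}

// logProgress is a guarded variant of the call at download.go:175.
// It validates both slice lengths before indexing, so a registry
// response that yields zero parts returns an error instead of
// panicking with "index out of range [0] with length 0".
func (b *blobDownload) logProgress() error {
	if len(b.Digest) < 19 || len(b.Parts) == 0 {
		return fmt.Errorf("invalid blob state: digest=%q parts=%d", b.Digest, len(b.Parts))
	}
	slog.Info(fmt.Sprintf("downloading %s in %d part(s) of %d bytes",
		b.Digest[7:19], len(b.Parts), b.Parts[0].Size))
	return nil
}

func main() {
	// Zero-length Parts: the unguarded code would panic here.
	b := &blobDownload{Digest: "sha256:96c415656d377afbff962f6cdb2394ab"}
	if err := b.logProgress(); err != nil {
		fmt.Println("recovered gracefully:", err)
	}
}
```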

Author
Owner

@justinkb commented on GitHub (Feb 3, 2025):

Just ran into this too. Very weird. It's probably a server-side issue, but the download client still shouldn't choke on it and discard gigabytes of already-downloaded data.

Author
Owner

@bbSnavy commented on GitHub (Feb 3, 2025):

Fix in #8480

Author
Owner

@rick-github commented on GitHub (Feb 3, 2025):

dupe #8400.

Author
Owner

@rdimitrov commented on GitHub (Feb 4, 2025):

I believe I encountered the same issue 👍

Details:
This happens while running ollama through:
docker run -d -v ollama:/root/.ollama --network host --name ollama ollama/ollama
and then
docker exec -d ollama ollama run qwen2.5-coder:0.5b

Here's a copy of the logs (full logs: https://github.com/stacklok/codegate/actions/runs/13132024777/job/36639108866) -

Starting Ollama container (Attempt 1/3)
Unable to find image 'ollama/ollama:latest' locally
latest: Pulling from ollama/ollama
6414378b6477: Pulling fs layer
...
bbc15f5291c8: Pull complete
Digest: sha256:7e672211886f8bd4448a98ed577e26c816b9e8b052112860564afaa2c105800e
Status: Downloaded newer image for ollama/ollama:latest
55d638f78d6b2ac68292df8a3dbbc04414ae0821df18739613a7885becc0171d
Ollama endpoint is available
Starting model download/initialization...
Model not ready yet. Waiting... (1/60)
Container crashed, logs:
2025/02/04 09:26:11 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
time=2025-02-04T09:26:11.550Z level=INFO source=images.go:432 msg="total blobs: 0"
time=2025-02-04T09:26:11.550Z level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-04T09:26:11.550Z level=INFO source=routes.go:1238 msg="Listening on [::]:11434 (version 0.5.7-0-ga420a45-dirty)"
time=2025-02-04T09:26:11.551Z level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cuda_v12_avx cpu cpu_avx cpu_avx2 cuda_v11_avx]"
time=2025-02-04T09:26:11.551Z level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-02-04T09:26:11.553Z level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered"
time=2025-02-04T09:26:11.553Z level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="15.6 GiB" available="14.4 GiB"
time=2025-02-04T09:26:14.207Z level=INFO source=download.go:175 msg="downloading 828125e28bf4 in 6 100 MB part(s)"
panic: runtime error: index out of range [0] with length 0

goroutine 51 [running]:
github.com/ollama/ollama/server.(*blobDownload).Prepare(0xc0004c25b0, {0x55809fe75580, 0xc000520690}, 0xc0003923f0, 0xc000090540)
	github.com/ollama/ollama/server/download.go:175 +0x539
github.com/ollama/ollama/server.downloadBlob({0x55809fe75580, 0xc000520690}, {{{0x55809fa39269, 0x5}, {0x55809fa4d641, 0x12}, {0x55809fa41d67, 0x7}, {0xc0005b8360, 0xd}, ...}, ...})
	github.com/ollama/ollama/server/download.go:489 +0x4da
github.com/ollama/ollama/server.PullModel({0x55809fe75580, 0xc000520690}, {0xc0005b8360, 0x12}, 0xc000090540, 0xc000124d10)
	github.com/ollama/ollama/server/images.go:564 +0x771
github.com/ollama/ollama/server.(*Server).PullHandler.func1()
	github.com/ollama/ollama/server/routes.go:594 +0x197
created by github.com/ollama/ollama/server.(*Server).PullHandler in goroutine 12
	github.com/ollama/ollama/server/routes.go:581 +0x691
Your new public key is: 
...
Author
Owner

@rick-github commented on GitHub (Feb 4, 2025):

Yes, same problem.

Author
Owner

@Eva-Skye commented on GitHub (Feb 6, 2025):

I got the same problem.

Author
Owner

@rick-github commented on GitHub (Feb 14, 2025):

Fixed by #8746


Reference: github-starred/ollama#52215