[GH-ISSUE #8400] Model pulling behind proxy index out of range #67452

Closed
opened 2026-05-04 10:24:14 -05:00 by GiteaMirror · 10 comments

Originally created by @xyzBart on GitHub (Jan 13, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8400

What is the issue?

Hi,

I'm getting the following error when downloading models with ollama pull through a corporate proxy:

panic: runtime error: index out of range [0] with length 0


goroutine 7 [running]:
github.com/ollama/ollama/server.(*blobDownload).Prepare(0xc0001cf1f0, {0x55efb74bd430, 0xc00042a4b0}, 0xc0000e1a70, 0xc000431480)
        github.com/ollama/ollama/server/download.go:175 +0x539
github.com/ollama/ollama/server.downloadBlob({0x55efb74bd430, 0xc00042a4b0}, {{{0x55efb708726e, 0x5}, {0x55efb709b519, 0x12}, {0x55efb708fd4b, 0x7}, {0xc000016560, 0x8}, ...}, ...})
        github.com/ollama/ollama/server/download.go:489 +0x4da
github.com/ollama/ollama/server.PullModel({0x55efb74bd430, 0xc00042a4b0}, {0xc000016560, 0xb}, 0xc000431480, 0xc0002824f0)
        github.com/ollama/ollama/server/images.go:889 +0x771
github.com/ollama/ollama/server.(*Server).PullHandler.func1()
        github.com/ollama/ollama/server/routes.go:595 +0x197
created by github.com/ollama/ollama/server.(*Server).PullHandler in goroutine 29
        github.com/ollama/ollama/server/routes.go:582 +0x691
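
For context on this panic class (a hypothetical sketch, not Ollama's actual download.go): if the registry request fails behind the proxy, the blob size can come back as 0, the list of download parts built from it ends up empty, and indexing its first element panics exactly like above. The guard below turns that into an error instead:

```go
package main

import (
	"errors"
	"fmt"
)

// splitIntoParts mimics the failing pattern: with size 0 (e.g. because the
// registry request behind the proxy never returned a Content-Length), no
// parts are created, and a later parts[0] would panic with
// "index out of range [0] with length 0".
func splitIntoParts(size, partSize int64) ([]int64, error) {
	var parts []int64
	for off := int64(0); off < size; off += partSize {
		parts = append(parts, off)
	}
	if len(parts) == 0 {
		// Surface an error instead of letting the caller index an empty slice.
		return nil, errors.New("no downloadable parts (size 0); check proxy connectivity")
	}
	return parts, nil
}

func main() {
	if _, err := splitIntoParts(0, 1<<20); err != nil {
		fmt.Println("prepare failed:", err) // instead of a panic
	}
}
```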

With GODEBUG=http2debug=2 set:
time=2025-01-13T10:29:35.304Z level=INFO source=h2_bundle.go:10279 msg="http2: Transport failed to get client conn for registry.ollama.ai:443: http2: no cached connection was available"

It occurs both when running from the Docker container and when running the downloaded release binary on Ubuntu. When running inside the container, HTTPS_PROXY is set and the container has the corporate certificates installed.
wget or curl from inside this container to registry URLs, e.g. https://registry.ollama.ai/v2/library/tinnyllama/blobs/sha256:2af3b81862c6be03c769683af18efdadb2c33f60ff32ab6f83e42c043d6c7816, works fine.
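
Since plain curl/wget succeed where the server does not, a quick way to isolate Go's proxy handling is a minimal client that resolves the proxy exactly the way Go's default transport does (from HTTPS_PROXY). This is a hypothetical diagnostic, not part of Ollama:

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Honour HTTPS_PROXY/HTTP_PROXY/NO_PROXY the same way Go's default
	// transport does. If this fails where curl succeeds, the problem is
	// in Go's proxy/HTTP2 path rather than general container networking.
	client := &http.Client{
		Transport: &http.Transport{Proxy: http.ProxyFromEnvironment},
	}
	resp, err := client.Get("https://registry.ollama.ai/v2/")
	if err != nil {
		fmt.Fprintln(os.Stderr, "request failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```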

Full log


docker logs ollama
2025/01/13 10:29:05 routes.go:1259: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY:proxyx.cn.:8080 HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-01-13T10:29:05.927Z level=INFO source=images.go:757 msg="total blobs: 0"
time=2025-01-13T10:29:05.927Z level=INFO source=images.go:764 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST   /api/generate             --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST   /api/embed                --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete               --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET    /api/ps                   --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2025-01-13T10:29:05.928Z level=INFO source=routes.go:1310 msg="Listening on [::]:11434 (version 0.5.4-0-g2ddc32d-dirty)"
time=2025-01-13T10:29:05.929Z level=INFO source=routes.go:1339 msg="Dynamic LLM libraries" runners="[cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx cpu]"
time=2025-01-13T10:29:05.929Z level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-01-13T10:29:05.934Z level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered"
time=2025-01-13T10:29:05.934Z level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant=avx compute="" driver=0.0 name="" total="7.8 GiB" available="6.0 GiB"
[GIN] 2025/01/13 - 10:29:35 | 200 |     245.408µs |       127.0.0.1 | HEAD     "/"
time=2025-01-13T10:29:35.304Z level=INFO source=h2_bundle.go:10279 msg="http2: Transport failed to get client conn for registry.ollama.ai:443: http2: no cached connection was available"
time=2025-01-13T10:29:35.895Z level=INFO source=h2_bundle.go:10279 msg="http2: Transport failed to get client conn for registry.ollama.ai:443: http2: no cached connection was available"
time=2025-01-13T10:29:36.156Z level=INFO source=h2_bundle.go:10279 msg="http2: Transport failed to get client conn for registry.ollama.ai:443: http2: no cached connection was available"
time=2025-01-13T10:29:36.466Z level=INFO source=h2_bundle.go:10279 msg="http2: Transport failed to get client conn for registry.ollama.ai:443: http2: no cached connection was available"
panic: runtime error: index out of range [0] with length 0

goroutine 7 [running]:
github.com/ollama/ollama/server.(*blobDownload).Prepare(0xc0001cf1f0, {0x55efb74bd430, 0xc00042a4b0}, 0xc0000e1a70, 0xc000431480)
        github.com/ollama/ollama/server/download.go:175 +0x539
github.com/ollama/ollama/server.downloadBlob({0x55efb74bd430, 0xc00042a4b0}, {{{0x55efb708726e, 0x5}, {0x55efb709b519, 0x12}, {0x55efb708fd4b, 0x7}, {0xc000016560, 0x8}, ...}, ...})
        github.com/ollama/ollama/server/download.go:489 +0x4da
github.com/ollama/ollama/server.PullModel({0x55efb74bd430, 0xc00042a4b0}, {0xc000016560, 0xb}, 0xc000431480, 0xc0002824f0)
        github.com/ollama/ollama/server/images.go:889 +0x771
github.com/ollama/ollama/server.(*Server).PullHandler.func1()
        github.com/ollama/ollama/server/routes.go:595 +0x197
created by github.com/ollama/ollama/server.(*Server).PullHandler in goroutine 29
        github.com/ollama/ollama/server/routes.go:582 +0x691


OS

Linux, Docker

GPU

Intel

CPU

Intel

Ollama version

0.5.4-0-g2ddc32d-dirty

GiteaMirror added the bug label 2026-05-04 10:24:14 -05:00

@fallenreaper commented on GitHub (Feb 1, 2025):

This is happening to me as well. I'm running macOS Sonoma with Ollama 0.5.7.


@lucasmontec commented on GitHub (Feb 1, 2025):

Same here trying deepseek.

panic: runtime error: index out of range [0] with length 0

goroutine 41 [running]:
github.com/ollama/ollama/server.(*blobDownload).Prepare(0xc0002d1e30, {0x55d58a164580, 0xc0004162d0}, 0xc0005ba090, 0xc000686a40)
        github.com/ollama/ollama/server/download.go:175 +0x539
github.com/ollama/ollama/server.downloadBlob({0x55d58a164580, 0xc0004162d0}, {{{0x55d589d28269, 0x5}, {0x55d589d3c641, 0x12}, {0x55d589d30d67, 0x7}, {0xc0002b2c00, 0xb}, ...}, ...})
        github.com/ollama/ollama/server/download.go:489 +0x4da
github.com/ollama/ollama/server.PullModel({0x55d58a164580, 0xc0004162d0}, {0xc0002b2c00, 0xf}, 0xc000686a40, 0xc000683af0)
        github.com/ollama/ollama/server/images.go:564 +0x771
github.com/ollama/ollama/server.(*Server).PullHandler.func1()
        github.com/ollama/ollama/server/routes.go:594 +0x197
created by github.com/ollama/ollama/server.(*Server).PullHandler in goroutine 51
        github.com/ollama/ollama/server/routes.go:581 +0x691

@fallenreaper commented on GitHub (Feb 1, 2025):

I submitted a PR to try to fix it. I hope they accept it, but maybe there is a deeper issue at play.


@dokicode commented on GitHub (Feb 1, 2025):

Hi!

I have the same issue while trying to pull the llama3.2:3b-instruct-fp16 model on Ubuntu 16.04. It seems the 6.4 GB model was downloaded, but afterwards it disappeared from the blobs folder, which is now empty, and ollama list shows nothing.

I found the following in the logs:

feb 01 18:49:44 luna-mega ollama[8289]: [GIN] 2025/02/01 - 18:49:44 | 200 |      2h24m17s |       127.0.0.1 | POST     "/api/pull"
feb 01 18:49:44 luna-mega ollama[8289]: panic: runtime error: index out of range [0] with length 0
feb 01 18:49:44 luna-mega ollama[8289]: goroutine 1026 [running]:
feb 01 18:49:44 luna-mega ollama[8289]: github.com/ollama/ollama/server.(*blobDownload).Prepare(0xc0002d2b60, {0x55d0e77ca580, 0xc000312140}, 0xc000486090, 0xc000414080)
feb 01 18:49:44 luna-mega ollama[8289]:         github.com/ollama/ollama/server/download.go:175 +0x539
feb 01 18:49:44 luna-mega ollama[8289]: github.com/ollama/ollama/server.downloadBlob({0x55d0e77ca580, 0xc000312140}, {{{0x55d0e738e269, 0x5}, {0x55d0e73a2641, 0x12}, {0x55d0e7396d67, 0x7}, {0xc000404120, 0x8}, ...}, ...})
feb 01 18:49:44 luna-mega ollama[8289]:         github.com/ollama/ollama/server/download.go:489 +0x4da
feb 01 18:49:44 luna-mega ollama[8289]: github.com/ollama/ollama/server.PullModel({0x55d0e77ca580, 0xc000312140}, {0xc000404120, 0x19}, 0xc000414080, 0xc000042130)
feb 01 18:49:44 luna-mega ollama[8289]:         github.com/ollama/ollama/server/images.go:564 +0x771
feb 01 18:49:44 luna-mega ollama[8289]: github.com/ollama/ollama/server.(*Server).PullHandler.func1()
feb 01 18:49:44 luna-mega ollama[8289]:         github.com/ollama/ollama/server/routes.go:594 +0x197
feb 01 18:49:44 luna-mega ollama[8289]: created by github.com/ollama/ollama/server.(*Server).PullHandler in goroutine 942
feb 01 18:49:44 luna-mega ollama[8289]:         github.com/ollama/ollama/server/routes.go:581 +0x691
feb 01 18:49:44 luna-mega systemd[1]: ollama.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
feb 01 18:49:44 luna-mega systemd[1]: ollama.service: Unit entered failed state.
feb 01 18:49:44 luna-mega systemd[1]: ollama.service: Failed with result 'exit-code'.
feb 01 18:49:47 luna-mega systemd[1]: ollama.service: Service hold-off time over, scheduling restart.
feb 01 18:49:47 luna-mega systemd[1]: Stopped Ollama Service.
feb 01 18:49:47 luna-mega systemd[1]: Started Ollama Service.
feb 01 18:49:47 luna-mega ollama[9420]: 2025/02/01 18:49:47 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:fals
feb 01 18:49:47 luna-mega ollama[9420]: time=2025-02-01T18:49:47.482+03:00 level=INFO source=images.go:432 msg="total blobs: 1"
feb 01 18:49:47 luna-mega ollama[9420]: time=2025-02-01T18:49:47.686+03:00 level=INFO source=images.go:439 msg="total unused blobs removed: 1"
feb 01 18:49:47 luna-mega ollama[9420]: [GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
feb 01 18:49:47 luna-mega ollama[9420]: [GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
feb 01 18:49:47 luna-mega ollama[9420]:  - using env:        export GIN_MODE=release
feb 01 18:49:47 luna-mega ollama[9420]:  - using code:        gin.SetMode(gin.ReleaseMode)
feb 01 18:49:47 luna-mega ollama[9420]: [GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
feb 01 18:49:47 luna-mega ollama[9420]: [GIN-debug] POST   /api/generate             --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
feb 01 18:49:47 luna-mega ollama[9420]: [GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
feb 01 18:49:47 luna-mega ollama[9420]: [GIN-debug] POST   /api/embed                --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
feb 01 18:49:47 luna-mega ollama[9420]: [GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
feb 01 18:49:47 luna-mega ollama[9420]: [GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
feb 01 18:49:47 luna-mega ollama[9420]: [GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
feb 01 18:49:47 luna-mega ollama[9420]: [GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
feb 01 18:49:47 luna-mega ollama[9420]: [GIN-debug] DELETE /api/delete               --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
feb 01 18:49:47 luna-mega ollama[9420]: [GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
feb 01 18:49:47 luna-mega ollama[9420]: [GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
feb 01 18:49:47 luna-mega ollama[9420]: [GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
feb 01 18:49:47 luna-mega ollama[9420]: [GIN-debug] GET    /api/ps                   --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
feb 01 18:49:47 luna-mega ollama[9420]: [GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
feb 01 18:49:47 luna-mega ollama[9420]: [GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
feb 01 18:49:47 luna-mega ollama[9420]: [GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
feb 01 18:49:47 luna-mega ollama[9420]: [GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
feb 01 18:49:47 luna-mega ollama[9420]: [GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
feb 01 18:49:47 luna-mega ollama[9420]: [GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
feb 01 18:49:47 luna-mega ollama[9420]: [GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
feb 01 18:49:47 luna-mega ollama[9420]: [GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
feb 01 18:49:47 luna-mega ollama[9420]: [GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
feb 01 18:49:47 luna-mega ollama[9420]: [GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
feb 01 18:49:47 luna-mega ollama[9420]: [GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
feb 01 18:49:47 luna-mega ollama[9420]: time=2025-02-01T18:49:47.700+03:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7)"
feb 01 18:49:47 luna-mega ollama[9420]: time=2025-02-01T18:49:47.706+03:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu_avx2 cuda_v11_avx cuda_v12_avx rocm_avx cpu cpu_avx]"
feb 01 18:49:47 luna-mega ollama[9420]: time=2025-02-01T18:49:47.706+03:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
feb 01 18:49:47 luna-mega ollama[9420]: time=2025-02-01T18:49:47.778+03:00 level=INFO source=gpu.go:630 msg="Unable to load cudart library /usr/lib/x86_64-linux-gnu/libcuda.so.396.26: symbol lookup for cuCtxCreate_v3 failed: /usr/lib/x86_64-linu
feb 01 18:49:47 luna-mega ollama[9420]: time=2025-02-01T18:49:47.919+03:00 level=INFO source=gpu.go:318 msg="[0] CUDA GPU is too old. Compute Capability detected: 1.0"
feb 01 18:49:47 luna-mega ollama[9420]: time=2025-02-01T18:49:47.919+03:00 level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered"
feb 01 18:49:47 luna-mega ollama[9420]: time=2025-02-01T18:49:47.945+03:00 level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="15.5 GiB" available="14.1 GiB"

OS
Linux Ubuntu 16.04

GPU
Nvidia

CPU
Intel

Ollama version
0.5.7


@prusnak commented on GitHub (Feb 1, 2025):

fix in https://github.com/ollama/ollama/pull/8480


@xyzBart commented on GitHub (Feb 7, 2025):

Better with 0.5.8-rc10, but still no success:

docker exec -it ollama ollama pull llama3.2
pulling manifest
pulling dde5aa3fc5ff...   0% ▕ ▏    0 B
pulling 966de95ca8a6...   0% ▕ ▏    0 B
pulling fcc5a6bec9da...   0% ▕ ▏    0 B
pulling a70ff7e570d9...   0% ▕ ▏    0 B
pulling 56bb8bd477a5...   0% ▕ ▏    0 B
pulling 34bb5ab01051...   0% ▕ ▏    0 B
verifying sha256 digest
Error: digest mismatch, file must be downloaded again: want sha256:dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff, got sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
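
Worth noting: the "got" digest e3b0c442... is the SHA-256 of zero bytes, i.e. the blob file was never written at all, which a few lines of Go confirm:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	// SHA-256 of empty input; matches the "got" digest in the error above,
	// so the downloaded blob contained no data at all.
	fmt.Printf("%x\n", sha256.Sum256(nil))
	// e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
}
```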
docker logs ollama
2025/02/07 08:01:06 routes.go:1186: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY:proxyx.cn.in.corpo.com.pl:8080 HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-02-07T08:01:06.569Z level=INFO source=images.go:432 msg="total blobs: 5"
time=2025-02-07T08:01:06.569Z level=INFO source=images.go:439 msg="total unused blobs removed: 5"
time=2025-02-07T08:01:06.570Z level=INFO source=routes.go:1237 msg="Listening on [::]:11434 (version 0.5.8-rc10)"
time=2025-02-07T08:01:06.570Z level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2025-02-07T08:01:06.571Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-02-07T08:01:06.572Z level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-02-07T08:01:06.572Z level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcuda.so*
time=2025-02-07T08:01:06.572Z level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2025-02-07T08:01:06.573Z level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[]
time=2025-02-07T08:01:06.573Z level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcudart.so*
time=2025-02-07T08:01:06.573Z level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/usr/lib/ollama/libcudart.so* /usr/local/nvidia/lib/libcudart.so* /usr/local/nvidia/lib64/libcudart.so* /usr/lib/ollama/cuda_v*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]"
time=2025-02-07T08:01:06.574Z level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths="[/usr/lib/ollama/cuda_v11/libcudart.so.11.3.109 /usr/lib/ollama/cuda_v12/libcudart.so.12.4.127]"
cudaSetDevice err: 35
time=2025-02-07T08:01:06.575Z level=DEBUG source=gpu.go:574 msg="Unable to load cudart library /usr/lib/ollama/cuda_v11/libcudart.so.11.3.109: your nvidia driver is too old or missing.  If you have a CUDA GPU please upgrade to run ollama"
cudaSetDevice err: 35
time=2025-02-07T08:01:06.576Z level=DEBUG source=gpu.go:574 msg="Unable to load cudart library /usr/lib/ollama/cuda_v12/libcudart.so.12.4.127: your nvidia driver is too old or missing.  If you have a CUDA GPU please upgrade to run ollama"
time=2025-02-07T08:01:06.576Z level=DEBUG source=amd_linux.go:419 msg="amdgpu driver not detected /sys/module/amdgpu"
time=2025-02-07T08:01:06.576Z level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
time=2025-02-07T08:01:06.576Z level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="7.8 GiB" available="6.7 GiB"
[GIN] 2025/02/07 - 08:01:44 | 200 |     195.868µs |       127.0.0.1 | HEAD     "/"
time=2025-02-07T08:01:44.831Z level=INFO source=h2_bundle.go:10279 msg="http2: Transport failed to get client conn for registry.ollama.ai:443: http2: no cached connection was available"
time=2025-02-07T08:01:45.596Z level=INFO source=h2_bundle.go:10279 msg="http2: Transport failed to get client conn for registry.ollama.ai:443: http2: no cached connection was available"
time=2025-02-07T08:01:45.916Z level=INFO source=h2_bundle.go:10279 msg="http2: Transport failed to get client conn for registry.ollama.ai:443: http2: no cached connection was available"
time=2025-02-07T08:01:46.218Z level=INFO source=h2_bundle.go:10279 msg="http2: Transport failed to get client conn for registry.ollama.ai:443: http2: no cached connection was available"
time=2025-02-07T08:01:46.490Z level=INFO source=h2_bundle.go:10279 msg="http2: Transport failed to get client conn for registry.ollama.ai:443: http2: no cached connection was available"
time=2025-02-07T08:01:46.702Z level=INFO source=h2_bundle.go:10279 msg="http2: Transport failed to get client conn for registry.ollama.ai:443: http2: no cached connection was available"
time=2025-02-07T08:01:47.228Z level=INFO source=h2_bundle.go:10279 msg="http2: Transport failed to get client conn for registry.ollama.ai:443: http2: no cached connection was available"
time=2025-02-07T08:01:47.408Z level=INFO source=h2_bundle.go:10279 msg="http2: Transport failed to get client conn for registry.ollama.ai:443: http2: no cached connection was available"
time=2025-02-07T08:01:47.971Z level=INFO source=h2_bundle.go:10279 msg="http2: Transport failed to get client conn for registry.ollama.ai:443: http2: no cached connection was available"
time=2025-02-07T08:01:48.256Z level=INFO source=h2_bundle.go:10279 msg="http2: Transport failed to get client conn for registry.ollama.ai:443: http2: no cached connection was available"
time=2025-02-07T08:01:48.722Z level=INFO source=h2_bundle.go:10279 msg="http2: Transport failed to get client conn for registry.ollama.ai:443: http2: no cached connection was available"
time=2025-02-07T08:01:48.969Z level=INFO source=h2_bundle.go:10279 msg="http2: Transport failed to get client conn for registry.ollama.ai:443: http2: no cached connection was available"
time=2025-02-07T08:01:49.533Z level=INFO source=h2_bundle.go:10279 msg="http2: Transport failed to get client conn for registry.ollama.ai:443: http2: no cached connection was available"
time=2025-02-07T08:01:49.776Z level=INFO source=h2_bundle.go:10279 msg="http2: Transport failed to get client conn for registry.ollama.ai:443: http2: no cached connection was available"
time=2025-02-07T08:01:50.291Z level=INFO source=h2_bundle.go:10279 msg="http2: Transport failed to get client conn for registry.ollama.ai:443: http2: no cached connection was available"
[GIN] 2025/02/07 - 08:01:50 | 200 |  5.679397129s |       127.0.0.1 | POST     "/api/pull"

@prusnak commented on GitHub (Feb 7, 2025):

> Better with 0.5.8-rc10, but still no success:

This is a different issue, unrelated to the originally reported one, and possibly just a network error. Try again, and if it does not resolve, open a new issue.


@fallenreaper commented on GitHub (Feb 7, 2025):

All, just so you know: there are some potential fixes, and they are working through some release candidates before pushing the next release.

As an alternative until it is fixed, you can use the run command, which will download and run the model. It will remain on your system after the initial run.
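For example, mirroring the earlier docker invocation from this thread: `docker exec -it ollama ollama run llama3.2`.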


@xyzBart commented on GitHub (Feb 10, 2025):

I think it might be the same issue, since the main symptom is that Ollama can't get a connection when no cached HTTP/2 connection is available:

time=2025-02-07T08:01:44.831Z level=INFO source=h2_bundle.go:10279 msg="http2: Transport failed to get client conn for registry.ollama.ai:443: http2: no cached connection was available"

while wget or curl from the same Docker container, going through the same proxy, works fine against Ollama registry URLs, e.g.: https://registry.ollama.ai/v2/library/tinnyllama/blobs/sha256:2af3b81862c6be03c769683af18efdadb2c33f60ff32ab6f83e42c043d6c7816

With the latest commits, the only change we got is the removal of the panic in the line that was trying to read the error details. I get the same result for both pull and run. The image tag is 0.5.8-rc12, built yesterday. For now we are simply downloading and importing models from Hugging Face, but we can't get tool support that way :/
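
One way to test whether the failure is specific to Go's HTTP/2 path, as the "no cached connection was available" lines suggest, is to force a client down to HTTP/1.1: in Go's standard library, a non-nil empty TLSNextProto map disables the HTTP/2 upgrade. A hypothetical diagnostic sketch, again not a change to Ollama itself:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// A non-nil empty TLSNextProto disables HTTP/2, forcing HTTP/1.1.
	// If this succeeds through the same HTTPS_PROXY while the default
	// (HTTP/2-capable) transport fails, the proxy problem is HTTP/2-specific.
	transport := &http.Transport{
		Proxy:        http.ProxyFromEnvironment,
		TLSNextProto: map[string]func(string, *tls.Conn) http.RoundTripper{},
	}
	resp, err := (&http.Client{Transport: transport}).Get("https://registry.ollama.ai/v2/")
	if err != nil {
		fmt.Fprintln(os.Stderr, "HTTP/1.1 request failed too:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("HTTP/1.1 status:", resp.Status)
}
```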


@rick-github commented on GitHub (Feb 14, 2025):

Fixed by #8746

Reference: github-starred/ollama#67452