[GH-ISSUE #13526] Embedding model broke in version v0.13.5 #55422

Open
opened 2026-04-29 09:09:31 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @Kn3pXGraF on GitHub (Dec 19, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13526

What is the issue?

In version v0.13.5, the ognivo777/rubert-mini-frida embedding model broke: every embeddings request now fails with a 500 and the runner panics with

panic serving 127.0.0.1:43802: unknown pooling type
github.com/ollama/ollama/ml/nn/pooling.Type.Forward ...
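
For completeness, a minimal reproduction sketch of the failing call (assuming a default local install on 127.0.0.1:11434, as in the log below; the prompt text is arbitrary):

```go
// repro.go -- minimal sketch of the failing request, assuming a local
// Ollama server on the default port (matches the log output below).
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// /api/embeddings is the endpoint shown failing in the log
	// ([GIN] ... POST "/api/embeddings" -> 500).
	body := []byte(`{"model":"ognivo777/rubert-mini-frida","prompt":"test"}`)
	resp, err := http.Post("http://127.0.0.1:11434/api/embeddings",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	// On v0.13.5 this returns HTTP 500; the runner panics with
	// "unknown pooling type" while loading the model.
	fmt.Println(resp.Status, string(out))
}
```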

Relevant log output

time=2025-12-19T06:59:48.766Z level=INFO source=routes.go:1554 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:30m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-12-19T06:59:48.769Z level=INFO source=images.go:493 msg="total blobs: 49"
time=2025-12-19T06:59:48.771Z level=INFO source=images.go:500 msg="total unused blobs removed: 0"
time=2025-12-19T06:59:48.772Z level=INFO source=routes.go:1607 msg="Listening on [::]:11434 (version 0.13.5)"
time=2025-12-19T06:59:48.774Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2025-12-19T06:59:48.775Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 35731"
time=2025-12-19T06:59:48.881Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 44551"
time=2025-12-19T06:59:48.962Z level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2025-12-19T06:59:48.962Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 46023"
time=2025-12-19T06:59:48.962Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 36479"
time=2025-12-19T06:59:49.072Z level=INFO source=types.go:42 msg="inference compute" id=GPU-23e630c8-6d08-abcd-bd27-e16cdbd3fcda filter_id="" library=CUDA compute=8.6 name=CUDA0 description="NVIDIA A16" libdirs=ollama,cuda_v13 driver=13.1 pci_id=0000:08:00.0 type=discrete total="15.0 GiB" available="11.0 GiB"
time=2025-12-19T06:59:49.072Z level=INFO source=routes.go:1648 msg="entering low vram mode" "total vram"="15.0 GiB" threshold="20.0 GiB"
time=2025-12-19T07:00:03.020Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 37949"
time=2025-12-19T07:00:03.114Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"
time=2025-12-19T07:00:03.150Z level=WARN source=server.go:167 msg="requested context size too large for model" num_ctx=4096 n_ctx_train=2048
time=2025-12-19T07:00:03.150Z level=WARN source=server.go:207 msg="flash attention enabled but not supported by model"
time=2025-12-19T07:00:03.150Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-86ef97b18a29ce3ef3af67d4edda4441328ae44f8f81a5c98cc6ce669618f376 --port 46447"
time=2025-12-19T07:00:03.150Z level=INFO source=sched.go:443 msg="system memory" total="376.9 GiB" free="376.7 GiB" free_swap="0 B"
time=2025-12-19T07:00:03.150Z level=INFO source=sched.go:450 msg="gpu memory" id=GPU-23e630c8-6d08-abcd-bd27-e16cdbd3fcda library=CUDA available="10.5 GiB" free="11.0 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-12-19T07:00:03.150Z level=INFO source=server.go:746 msg="loading model" "model layers"=8 requested=-1
time=2025-12-19T07:00:03.165Z level=INFO source=runner.go:1405 msg="starting ollama engine"
time=2025-12-19T07:00:03.167Z level=INFO source=runner.go:1440 msg="Server listening on 127.0.0.1:46447"
time=2025-12-19T07:00:03.173Z level=INFO source=runner.go:1278 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Disabled KvSize:2048 KvCacheType: NumThreads:128 GPULayers:8[ID:GPU-23e630c8-6d08-abcd-bd27-e16cdbd3fcda Layers:8(0..7)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-19T07:00:03.183Z level=INFO source=ggml.go:136 msg="" architecture=bert file_type=F16 name="" description="" num_tensors=117 num_key_values=28
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA A16, compute capability 8.6, VMM: yes, ID: GPU-23e630c8-6d08-abcd-bd27-e16cdbd3fcda
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v13/libggml-cuda.so
time=2025-12-19T07:00:03.231Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-12-19T07:00:03.233Z level=WARN source=runner.go:1213 msg="model does not support caching, setting batch size to context length" batch_size=2048
time=2025-12-19T07:00:03.234Z level=INFO source=server.go:3634 msg="http: panic serving 127.0.0.1:43802: unknown pooling type\ngoroutine 274 [running]:\nnet/http.(*conn).serve.func1()\n\tnet/http/server.go:1947 +0xbe\npanic({0x555de96f1e60?, 0x555de98b8560?})\n\truntime/panic.go:792 +0x132\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).allocModel.func1()\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:1190 +0x11f\npanic({0x555de96f1e60?, 0x555de98b8560?})\n\truntime/panic.go:792 +0x132\ngithub.com/ollama/ollama/ml/nn/pooling.Type.Forward(0x49308?, {0x555de98d1250, 0xc000be6040}, {0x555de98dbb20, 0xc000d8e8d0})\n\tgithub.com/ollama/ollama/ml/nn/pooling/pooling.go:39 +0x1fe\ngithub.com/ollama/ollama/model/models/bert.(*Model).Forward(0xc0003bedc0, {0x555de98d1250, 0xc000be6040}, {{0x555de98dbb20, 0xc0000101e0}, {0x555de98dbb20, 0xc0000101f8}, {0xc0004be000, 0x800, 0x800}, ...})\n\tgithub.com/ollama/ollama/model/models/bert/embed.go:40 +0x29e\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).reserveWorstCaseGraph(0xc0004770e0, 0x1)\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:1157 +0x9ad\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).allocModel(0xc0004770e0, {0x7ffe91f1adfd?, 0x555de860541a?}, {0x0, 0x80, {0xc0007e6240, 0x1, 0x1}, 0x0}, {0x0, ...}, ...)\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:1226 +0x391\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).load(0xc0004770e0, {0x555de98c3fa0, 0xc000000540}, 0xc000818280)\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:1305 +0x54b\nnet/http.HandlerFunc.ServeHTTP(0xc0003a6240?, {0x555de98c3fa0?, 0xc000000540?}, 0xc00064bb60?)\n\tnet/http/server.go:2294 +0x29\nnet/http.(*ServeMux).ServeHTTP(0x555de82b58c5?, {0x555de98c3fa0, 0xc000000540}, 0xc000818280)\n\tnet/http/server.go:2822 +0x1c4\nnet/http.serverHandler.ServeHTTP({0x555de98c0590?}, {0x555de98c3fa0?, 0xc000000540?}, 0x1?)\n\tnet/http/server.go:3301 +0x8e\nnet/http.(*conn).serve(0xc000037dd0, {0x555de98c63d8, 0xc0009182a0})\n\tnet/http/server.go:2102 +0x625\ncreated by net/http.(*Server).Serve in goroutine 1\n\tnet/http/server.go:3454 +0x485"
time=2025-12-19T07:00:03.234Z level=INFO source=runner.go:1278 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:Disabled KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-19T07:00:03.235Z level=INFO source=sched.go:470 msg="Load failed" model=/root/.ollama/models/blobs/sha256-86ef97b18a29ce3ef3af67d4edda4441328ae44f8f81a5c98cc6ce669618f376 error="do load request: Post \"http://127.0.0.1:46447/load\": EOF"
[GIN] 2025/12/19 - 07:00:03 | 500 |   252.38427ms |      172.19.0.1 | POST     "/api/embeddings"
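
From the stack trace, the panic originates in pooling.Type.Forward (ml/nn/pooling/pooling.go:39), called from the BERT embedding path (model/models/bert/embed.go:40). The shape of the failure suggests a pooling-type dispatch that panics on values it does not recognize. A hypothetical sketch of that pattern, inferred only from the trace (Type, the constants, and Tensor below are illustrative stand-ins, not ollama's actual code):

```go
// Hypothetical sketch of the dispatch that appears to panic, inferred
// only from the stack trace above; these names are stand-ins, not
// ollama's actual ml/nn/pooling implementation.
package pooling

// Tensor stands in for the engine's tensor interface.
type Tensor interface{}

// Type is a pooling mode, presumably read from the model's GGUF
// metadata (a key along the lines of "bert.pooling_type"; unverified).
type Type uint32

const (
	TypeNone Type = iota // no pooling: return token embeddings as-is
	TypeMean             // average over the sequence dimension
	TypeCLS              // take the first ([CLS]) token
)

// Forward reduces per-token hidden states to a single embedding.
func (t Type) Forward(hidden Tensor) Tensor {
	switch t {
	case TypeNone:
		return hidden
	case TypeMean:
		return meanPool(hidden)
	case TypeCLS:
		return clsPool(hidden)
	default:
		// Any pooling value from the model file with no case here
		// hits this panic -- the "unknown pooling type" in the log.
		panic("unknown pooling type")
	}
}

func meanPool(h Tensor) Tensor { return h } // placeholder reduction
func clsPool(h Tensor) Tensor  { return h } // placeholder reduction
```

If that reading is right, either this model's metadata carries a pooling value that v0.13.5 no longer maps, or the value is no longer being read correctly; in either case, falling back to a default pooling mode instead of panicking would at least turn this into a clean error rather than a 500 from a crashed runner.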

OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-29 09:09:31 -05:00
Author
Owner

@Kn3pXGraF commented on GitHub (Jan 14, 2026):

There's a problem even with GGUF models downloaded from HF. Has anyone encountered this?


Reference: github-starred/ollama#55422