[GH-ISSUE #9906] ollama docker 0.6.2 failed to run model #32247

Closed
opened 2026-04-22 13:19:54 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @goactiongo on GitHub (Mar 20, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9906

docker-compose.yml


  ollama_cpu:
    image: registry.cn-hangzhou.aliyuncs.com/xxx/ollama_0.6.2
    container_name: ollama_cpu
    restart: always
    ports:
      - 11434:11434
    volumes:
      - /root/.ollama/models:/root/.ollama/models
    networks:
      fastgpt:
        ipv4_address: 172.19.0.19
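For context, the fragment above appears to be excerpted from a larger file: Compose requires a top-level `services:` key, and the `fastgpt` network must be defined somewhere. A complete sketch might look like the following (the subnet value is an assumption inferred from the static address; adjust to the real network definition):

```yaml
services:
  ollama_cpu:
    image: registry.cn-hangzhou.aliyuncs.com/xxx/ollama_0.6.2
    container_name: ollama_cpu
    restart: always
    ports:
      - 11434:11434
    volumes:
      # Host models directory mounted into the container at the same path
      - /root/.ollama/models:/root/.ollama/models
    networks:
      fastgpt:
        ipv4_address: 172.19.0.19

networks:
  fastgpt:
    ipam:
      config:
        - subnet: 172.19.0.0/16
```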

root@4cb4d67c9914:~/.ollama/models/blobs# ollama run milkey/m3e:latest
pulling manifest
pulling f68644a89c4a... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████▏ 650 MB
pulling bf91410d1f04... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████▏  260 B
verifying sha256 digest
writing manifest
success
Error: unable to load model: /root/.ollama/models/blobs/sha256-f68644a89c4aff17e05e863ecb5ad1c899d4ec4fd5fcc0747d1cb136dbbf69a1
root@4cb4d67c9914:~/.ollama/models/blobs# df -h
Filesystem               Size  Used Avail Use% Mounted on
overlay                  133G  103G   31G  78% /
tmpfs                     64M     0   64M   0% /dev
tmpfs                     16G     0   16G   0% /sys/fs/cgroup
shm                       64M     0   64M   0% /dev/shm
/dev/mapper/centos-root  133G  103G   31G  78% /etc/hosts
tmpfs                     16G     0   16G   0% /proc/acpi
tmpfs                     16G     0   16G   0% /proc/scsi
tmpfs                     16G     0   16G   0% /sys/firmware
root@4cb4d67c9914:~/.ollama/models/blobs# chmod -R 755 /root/.ollama/models
root@4cb4d67c9914:~/.ollama/models/blobs# ollama run milkey/m3e:latest
Error: unable to load model: /root/.ollama/models/blobs/sha256-f68644a89c4aff17e05e863ecb5ad1c899d4ec4fd5fcc0747d1cb136dbbf69a1
root@4cb4d67c9914:~/.ollama/models/blobs# # delete the config file
root@4cb4d67c9914:~/.ollama/models/blobs# rm -rf /root/.ollama/config.toml
root@4cb4d67c9914:~/.ollama/models/blobs# # run the model again
root@4cb4d67c9914:~/.ollama/models/blobs# ollama run milkey/m3e:latest
Error: unable to load model: /root/.ollama/models/blobs/sha256-f68644a89c4aff17e05e863ecb5ad1c899d4ec4fd5fcc0747d1cb136dbbf69a1
root@4cb4d67c9914:~/.ollama/models/blobs# ollama rm milkey/m3e:latest
deleted 'milkey/m3e:latest'
root@4cb4d67c9914:~/.ollama/models/blobs# ollama run milkey/m3e:latest
pulling manifest
pulling f68644a89c4a... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████▏ 650 MB
pulling bf91410d1f04... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████▏  260 B
verifying sha256 digest
writing manifest
success
Error: unable to load model: /root/.ollama/models/blobs/sha256-f68644a89c4aff17e05e863ecb5ad1c899d4ec4fd5fcc0747d1cb136dbbf69a1

root@4cb4d67c9914:~/.ollama/models/blobs# ollama list
NAME                 ID              SIZE      MODIFIED
milkey/m3e:latest    1477f12451b0    650 MB    2 minutes ago
root@4cb4d67c9914:~/.ollama/models/blobs#
root@4cb4d67c9914:~/.ollama/models/blobs# ollama -v
ollama version is 0.6.2
root@4cb4d67c9914:~/.ollama/models/blobs#
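Since `ollama` already verifies the digest on pull, on-disk corruption is unlikely here, but it can be ruled out independently: each blob file is named `sha256-<digest of its contents>`, so recomputing the digest and comparing it to the filename is a quick integrity check. The sketch below defaults to a throwaway demo directory; point `BLOBS` at `/root/.ollama/models/blobs` (the mount from the compose file above, an assumption about your layout) for a real check.

```shell
#!/bin/sh
# Verify that every blob's SHA-256 digest matches its sha256-<digest> filename.
# BLOBS defaults to a temp dir seeded with one valid demo blob so the script
# is self-contained; override it to check a real ollama models directory.
BLOBS=${BLOBS:-$(mktemp -d)}
if [ -z "$(ls "$BLOBS" 2>/dev/null)" ]; then
  printf 'demo' > "$BLOBS/tmpblob"
  d=$(sha256sum "$BLOBS/tmpblob" | awk '{print $1}')
  mv "$BLOBS/tmpblob" "$BLOBS/sha256-$d"
fi
status=0
for f in "$BLOBS"/sha256-*; do
  want=${f##*/sha256-}                       # digest claimed by the filename
  got=$(sha256sum "$f" | awk '{print $1}')   # digest of the actual contents
  if [ "$want" = "$got" ]; then
    echo "OK  ${f##*/}"
  else
    echo "BAD ${f##*/}"
    status=1
  fi
done
```

`status` is 0 only if every blob checks out. (In this issue the blob is intact; the load failure comes from duplicate GGUF metadata keys, as diagnosed below in the comments.)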

Docker log




[GIN] 2025/03/20 - 11:42:01 | 200 |   31.138788ms |      172.19.0.1 | POST     "/v1/embeddings"
time=2025-03-20T11:42:22.819Z level=WARN source=ggml.go:149 msg="key not found" key=bert.vision.block_count default=0
time=2025-03-20T11:42:22.819Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-03-20T11:42:22.819Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.key_length default=64
time=2025-03-20T11:42:22.819Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.value_length default=64
time=2025-03-20T11:42:22.819Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-03-20T11:42:22.820Z level=INFO source=server.go:105 msg="system memory" total="31.2 GiB" free="22.0 GiB" free_swap="14.3 GiB"
time=2025-03-20T11:42:22.820Z level=WARN source=ggml.go:149 msg="key not found" key=bert.vision.block_count default=0
time=2025-03-20T11:42:22.820Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-03-20T11:42:22.820Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.key_length default=64
time=2025-03-20T11:42:22.820Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.value_length default=64
time=2025-03-20T11:42:22.820Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-03-20T11:42:22.820Z level=INFO source=server.go:138 msg=offload library=cpu layers.requested=-1 layers.model=25 layers.offload=0 layers.split="" memory.available="[22.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="820.5 MiB" memory.required.partial="0 B" memory.required.kv="48.0 MiB" memory.required.allocations="[820.5 MiB]" memory.weights.total="577.2 MiB" memory.weights.repeating="577.2 MiB" memory.weights.nonrepeating="41.3 MiB" memory.graph.full="128.0 MiB" memory.graph.partial="128.0 MiB"
gguf_init_from_file_impl: duplicate key 'tokenizer.ggml.bos_token_id' for tensors 11 and 22
gguf_init_from_file_impl: failed to read key-value pairs
llama_model_load: error loading model: llama_model_loader: failed to load model from /root/.ollama/models/blobs/sha256-f68644a89c4aff17e05e863ecb5ad1c899d4ec4fd5fcc0747d1cb136dbbf69a1

llama_model_load_from_file_impl: failed to load model
time=2025-03-20T11:42:22.824Z level=INFO source=sched.go:429 msg="NewLlamaServer failed" model=/root/.ollama/models/blobs/sha256-f68644a89c4aff17e05e863ecb5ad1c899d4ec4fd5fcc0747d1cb136dbbf69a1 error="unable to load model: /root/.ollama/models/blobs/sha256-f68644a89c4aff17e05e863ecb5ad1c899d4ec4fd5fcc0747d1cb136dbbf69a1"
[GIN] 2025/03/20 - 11:42:22 | 500 |   20.307258ms |      172.19.0.1 | POST     "/v1/embeddings"
time=2025-03-20T11:42:23.322Z level=WARN source=ggml.go:149 msg="key not found" key=bert.vision.block_count default=0
time=2025-03-20T11:42:23.323Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-03-20T11:42:23.323Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.key_length default=64
time=2025-03-20T11:42:23.323Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.value_length default=64
time=2025-03-20T11:42:23.323Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-03-20T11:42:23.323Z level=INFO source=server.go:105 msg="system memory" total="31.2 GiB" free="22.0 GiB" free_swap="14.3 GiB"
time=2025-03-20T11:42:23.323Z level=WARN source=ggml.go:149 msg="key not found" key=bert.vision.block_count default=0
time=2025-03-20T11:42:23.323Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-03-20T11:42:23.323Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.key_length default=64
time=2025-03-20T11:42:23.323Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.value_length default=64
time=2025-03-20T11:42:23.324Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-03-20T11:42:23.324Z level=INFO source=server.go:138 msg=offload library=cpu layers.requested=-1 layers.model=25 layers.offload=0 layers.split="" memory.available="[22.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="820.5 MiB" memory.required.partial="0 B" memory.required.kv="48.0 MiB" memory.required.allocations="[820.5 MiB]" memory.weights.total="577.2 MiB" memory.weights.repeating="577.2 MiB" memory.weights.nonrepeating="41.3 MiB" memory.graph.full="128.0 MiB" memory.graph.partial="128.0 MiB"
gguf_init_from_file_impl: duplicate key 'tokenizer.ggml.bos_token_id' for tensors 11 and 22
gguf_init_from_file_impl: failed to read key-value pairs
llama_model_load: error loading model: llama_model_loader: failed to load model from /root/.ollama/models/blobs/sha256-f68644a89c4aff17e05e863ecb5ad1c899d4ec4fd5fcc0747d1cb136dbbf69a1

llama_model_load_from_file_impl: failed to load model
time=2025-03-20T11:42:23.327Z level=INFO source=sched.go:429 msg="NewLlamaServer failed" model=/root/.ollama/models/blobs/sha256-f68644a89c4aff17e05e863ecb5ad1c899d4ec4fd5fcc0747d1cb136dbbf69a1 error="unable to load model: /root/.ollama/models/blobs/sha256-f68644a89c4aff17e05e863ecb5ad1c899d4ec4fd5fcc0747d1cb136dbbf69a1"
[GIN] 2025/03/20 - 11:42:23 | 500 |   14.287378ms |      172.19.0.1 | POST     "/v1/embeddings"
time=2025-03-20T11:42:24.235Z level=WARN source=ggml.go:149 msg="key not found" key=bert.vision.block_count default=0
time=2025-03-20T11:42:24.235Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-03-20T11:42:24.235Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.key_length default=64
time=2025-03-20T11:42:24.235Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.value_length default=64
time=2025-03-20T11:42:24.235Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-03-20T11:42:24.236Z level=INFO source=server.go:105 msg="system memory" total="31.2 GiB" free="22.0 GiB" free_swap="14.3 GiB"
time=2025-03-20T11:42:24.236Z level=WARN source=ggml.go:149 msg="key not found" key=bert.vision.block_count default=0
time=2025-03-20T11:42:24.236Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-03-20T11:42:24.236Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.key_length default=64
time=2025-03-20T11:42:24.236Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.value_length default=64
time=2025-03-20T11:42:24.236Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-03-20T11:42:24.236Z level=INFO source=server.go:138 msg=offload library=cpu layers.requested=-1 layers.model=25 layers.offload=0 layers.split="" memory.available="[22.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="820.5 MiB" memory.required.partial="0 B" memory.required.kv="48.0 MiB" memory.required.allocations="[820.5 MiB]" memory.weights.total="577.2 MiB" memory.weights.repeating="577.2 MiB" memory.weights.nonrepeating="41.3 MiB" memory.graph.full="128.0 MiB" memory.graph.partial="128.0 MiB"
gguf_init_from_file_impl: duplicate key 'tokenizer.ggml.bos_token_id' for tensors 11 and 22
gguf_init_from_file_impl: failed to read key-value pairs
llama_model_load: error loading model: llama_model_loader: failed to load model from /root/.ollama/models/blobs/sha256-f68644a89c4aff17e05e863ecb5ad1c899d4ec4fd5fcc0747d1cb136dbbf69a1

llama_model_load_from_file_impl: failed to load model
time=2025-03-20T11:42:24.240Z level=INFO source=sched.go:429 msg="NewLlamaServer failed" model=/root/.ollama/models/blobs/sha256-f68644a89c4aff17e05e863ecb5ad1c899d4ec4fd5fcc0747d1cb136dbbf69a1 error="unable to load model: /root/.ollama/models/blobs/sha256-f68644a89c4aff17e05e863ecb5ad1c899d4ec4fd5fcc0747d1cb136dbbf69a1"
[GIN] 2025/03/20 - 11:42:24 | 500 |   14.046895ms |      172.19.0.1 | POST     "/v1/embeddings"

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-22 13:19:54 -05:00

@rick-github commented on GitHub (Mar 20, 2025):

gguf_init_from_file_impl: duplicate key 'tokenizer.ggml.bos_token_id' for tensors 11 and 22
gguf_init_from_file_impl: failed to read key-value pairs

The model is invalid.


@goactiongo commented on GitHub (Mar 21, 2025):

Thanks.
I downloaded this model earlier on the old server, where it still works fine.
I want to copy it to my new server and run it with the ollama Docker image. How should I do that?

How do I copy the model from the old server to the new server and place it under /root/.ollama, so that I can mount it as a volume in docker-compose?

Also, where are models stored inside the ollama Docker container?

The following is the information from the old server:

(base) [root@aitest ~]# ollama list
NAME                                    ID              SIZE    MODIFIED
milkey/m3e:latest                       1477f12451b0    650 MB  8 months ago
mxbai-embed-large:latest                468836162de7    669 MB  8 months ago
quentinz/bge-large-zh-v1.5:latest       bc8ca0995fcd    651 MB  8 months ago
qwen2:0.5b                              6f48b936a09f    352 MB  8 months ago
(base) [root@aitest ~]#
(base) [root@aitest ~]#
(base) [root@aitest ~]# ollama -v
ollama version is 0.1.46
(base) [root@aitest ~]# ollama show milkey/m3e:latest --modelfile
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this, replace FROM with:
# FROM milkey/m3e:latest

FROM /usr/share/ollama/.ollama/models/blobs/sha256-f68644a89c4aff17e05e863ecb5ad1c899d4ec4fd5fcc0747d1cb136dbbf69a1
TEMPLATE {{ .Prompt }}


@rick-github commented on GitHub (Mar 21, 2025):

On old server:

$ cd /usr/share/ollama/.ollama/models
$ zip -r /tmp/milkey.zip manifests/registry.ollama.ai/milkey blobs/sha256-bf91410d1f04aa13257b9a33a1668d193e4fd4587a830b55f6b27223bd3dc5b9 blobs/sha256-f68644a89c4aff17e05e863ecb5ad1c899d4ec4fd5fcc0747d1cb136dbbf69a1

Copy the zip file to the new server, then cd to the models directory:

$ unzip /tmp/milkey.zip

However, milkey/m3e is not compatible with ollama 0.6.2, because of the duplicate keys. If you want to use this model, you have to use an older version of ollama, 0.5.12.

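The zip/unzip steps above can also be done in one shot with tar. The sketch below demonstrates the technique with throwaway directories so it runs anywhere; on a real system the source would be /usr/share/ollama/.ollama/models on the old server and the destination /root/.ollama/models on the new one (per the comment above), with the first tar piped through ssh for the actual transfer — the ssh target is an assumption.

```shell
#!/bin/sh
set -e
# Demo of copying an ollama model by packing its manifest tree and blobs with
# tar, preserving the relative layout ollama expects under the models dir.
SRC=$(mktemp -d)   # stands in for the old server's models directory
DST=$(mktemp -d)   # stands in for the new server's models directory
mkdir -p "$SRC/manifests/registry.ollama.ai/milkey" "$SRC/blobs"
echo manifest > "$SRC/manifests/registry.ollama.ai/milkey/latest"
echo blob > "$SRC/blobs/sha256-demo"
# Pack only the manifest tree and the blobs it references, then unpack at the
# destination. For a remote copy, replace the second tar with:
#   ssh newserver "tar xzf - -C /root/.ollama/models"
( cd "$SRC" && tar czf - manifests blobs ) | tar xzf - -C "$DST"
ls "$DST/blobs"
```

Note the caveat above still applies: restoring the files does not make the model loadable on 0.6.2, since the GGUF blob itself has the duplicate keys.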

@goactiongo commented on GitHub (Mar 21, 2025):

Thanks.


Reference: github-starred/ollama#32247