[GH-ISSUE #7038] Error: llama runner process has terminated: error loading modelvocabulary: cannot find tokenizer merges in model file #66521

Closed
opened 2026-05-04 07:16:11 -05:00 by GiteaMirror · 12 comments

Originally created by @sparklyi on GitHub (Sep 30, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7038

### What is the issue?

Date: 09/30/2024

Modelfile:

```
FROM "./model-quant.gguf"
TEMPLATE """{{- if .System }}
<|im_start|>system {{ .System }}<|im_end|>
{{- end }}
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""

SYSTEM """"""

PARAMETER stop <|im_start|>
PARAMETER stop <|im_end|>
```

`ollama create` succeeded, but running the model failed with the error in the title.
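The failing check is straightforward: for a BPE ("gpt2"-style) tokenizer, the loader expects a `tokenizer.ggml.merges` list in the GGUF metadata, and errors out when the export dropped it. A minimal illustration of that logic, using a plain dict as a stand-in for GGUF metadata (not a real GGUF reader):

```python
# Hypothetical sketch: mimic the vocabulary check behind
# "cannot find tokenizer merges in model file".
# `metadata` is a stand-in dict, not parsed from an actual GGUF file.

def check_bpe_merges(metadata: dict) -> str:
    """BPE tokenizers need a non-empty merges list; others do not."""
    if metadata.get("tokenizer.ggml.model") != "gpt2":
        return "ok: not a BPE tokenizer, merges not required"
    if not metadata.get("tokenizer.ggml.merges"):
        return "error: cannot find tokenizer merges in model file"
    return "ok: BPE tokenizer with merges present"

# A broken export (merges dropped during conversion):
broken = {"tokenizer.ggml.model": "gpt2"}
# A healthy export, like the metadata dump later in this thread:
healthy = {"tokenizer.ggml.model": "gpt2",
           "tokenizer.ggml.merges": ["Ġ Ġ", "Ġ ĠĠĠ"]}

print(check_bpe_merges(broken))
print(check_bpe_merges(healthy))
```

A quick way to inspect the real metadata of a `.gguf` file is any GGUF metadata dumper; if `tokenizer.ggml.merges` is absent while `tokenizer.ggml.model` is `gpt2`, this error will follow.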

### OS

Linux

### GPU

Nvidia

### CPU

Intel

### Ollama version

ollama version is 0.3.12

GiteaMirror added the bug label 2026-05-04 07:16:11 -05:00

@rick-github commented on GitHub (Sep 30, 2024):

https://github.com/ollama/ollama/issues/7020


@sparklyi commented on GitHub (Sep 30, 2024):

> #7020

Thank you.


@Harishk2508 commented on GitHub (Sep 30, 2024):

I ran my fine-tuned model in Ollama and it saved successfully; when I list the models available locally, the fine-tuned model shows up. But when I run it I get this error: `Error: llama runner process has terminated: error loading model: error loading model vocabulary: cannot find tokenizer merges in model file`


@sparklyi commented on GitHub (Sep 30, 2024):

> I ran my fine-tuned model in Ollama and it saved successfully; when I list the models available locally, the fine-tuned model shows up. But when I run it I get this error: `Error: llama runner process has terminated: error loading model: error loading model vocabulary: cannot find tokenizer merges in model file`

You can check your transformers version; I installed an older one to solve it:

`pip install --upgrade --force-reinstall "transformers==4.44.2" "numpy==2.0.2"`

Also, you can refer to this link: https://github.com/ollama/ollama/issues/7020
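To act on this suggestion you need to know whether your installed transformers is newer than the pin. A small hedged sketch of that comparison; the boundary (anything newer than 4.44.2) comes from the pin above and is an assumption from this thread, not an official compatibility table:

```python
# Hypothetical helper: compare an installed transformers version against
# the known-good pin reported in this thread (4.44.2). The cutoff is an
# assumption taken from the comment above, not from upstream docs.

def parse_version(v: str) -> tuple:
    """Turn '4.44.2' into (4, 44, 2) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def needs_pin(installed: str, known_good: str = "4.44.2") -> bool:
    """True if `installed` is newer than the known-good pin."""
    return parse_version(installed) > parse_version(known_good)

print(needs_pin("4.45.0"))  # newer than the pin, so downgrading may help
print(needs_pin("4.44.2"))  # exactly the pinned version
```

In practice you would feed this `importlib.metadata.version("transformers")`; if it returns `True`, the forced reinstall above is the workaround being proposed.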


@Harishk2508 commented on GitHub (Sep 30, 2024):

> You can check your transformers version; I installed an older one to solve it:
>
> `pip install --upgrade --force-reinstall "transformers==4.44.2" "numpy==2.0.2"`
>
> Also, you can refer to this link: #7020

No, I still encounter the same error.


@sparklyi commented on GitHub (Sep 30, 2024):

> No, I still encounter the same error.

Sorry, I've re-opened the issue. Please paste your environment and the errors here; maybe someone else can solve your problem.


@danielhanchen commented on GitHub (Sep 30, 2024):

This might be related to https://github.com/unslothai/unsloth/issues/1065 and https://github.com/unslothai/unsloth/issues/1062. Temporary fixes are provided for Unsloth finetuners, and the Hugging Face team confirmed at https://github.com/ggerganov/llama.cpp/issues/9692 that `tokenizers` is causing the issues.


@ishu121992 commented on GitHub (Oct 5, 2024):

I am getting the error below:

![image](https://github.com/user-attachments/assets/e31949d5-b0be-4dbe-9b35-f8036dce9f2d)

Any help? It seems to work completely fine with llama3.1 or Mistral.


@rick-github commented on GitHub (Oct 5, 2024):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will aid in debugging.


@ishu121992 commented on GitHub (Oct 6, 2024):

> [Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will aid in debugging.

Oct 06 19:30:47 Eshan92 ollama[201]: [GIN] 2024/10/06 - 19:30:47 | 200 | 1.689599ms | 127.0.0.1 | HEAD "/"
Oct 06 19:30:47 Eshan92 ollama[201]: [GIN] 2024/10/06 - 19:30:47 | 200 | 2.586207ms | 127.0.0.1 | POST "/api/show"
Oct 06 19:30:47 Eshan92 ollama[201]: [GIN] 2024/10/06 - 19:30:47 | 200 | 662.215µs | 127.0.0.1 | POST "/api/show"
Oct 06 19:30:47 Eshan92 ollama[201]: time=2024-10-06T19:30:47.980+05:30 level=INFO source=gpu.go:121 msg="Detecting GPU type"
Oct 06 19:30:47 Eshan92 ollama[201]: time=2024-10-06T19:30:47.980+05:30 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
Oct 06 19:30:50 Eshan92 ollama[201]: time=2024-10-06T19:30:50.145+05:30 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama2847391919/runners/cuda_v11/libcudart.so.11.0 /usr/local/cuda/lib64/libcudart.so.11.8.89 /usr/lib/x86_64-linux-gnu/libcudart.so.11.5.117]"
Oct 06 19:30:50 Eshan92 ollama[201]: time=2024-10-06T19:30:50.147+05:30 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
Oct 06 19:30:50 Eshan92 ollama[201]: time=2024-10-06T19:30:50.147+05:30 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Oct 06 19:30:50 Eshan92 ollama[201]: time=2024-10-06T19:30:50.270+05:30 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.6"
Oct 06 19:30:50 Eshan92 ollama[201]: time=2024-10-06T19:30:50.300+05:30 level=INFO source=gpu.go:121 msg="Detecting GPU type"
Oct 06 19:30:50 Eshan92 ollama[201]: time=2024-10-06T19:30:50.300+05:30 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
Oct 06 19:30:52 Eshan92 ollama[201]: time=2024-10-06T19:30:52.192+05:30 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama2847391919/runners/cuda_v11/libcudart.so.11.0 /usr/local/cuda/lib64/libcudart.so.11.8.89 /usr/lib/x86_64-linux-gnu/libcudart.so.11.5.117]"
Oct 06 19:30:52 Eshan92 ollama[201]: time=2024-10-06T19:30:52.192+05:30 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
Oct 06 19:30:52 Eshan92 ollama[201]: time=2024-10-06T19:30:52.192+05:30 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Oct 06 19:30:52 Eshan92 ollama[201]: time=2024-10-06T19:30:52.301+05:30 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.6"
Oct 06 19:30:52 Eshan92 ollama[201]: time=2024-10-06T19:30:52.334+05:30 level=INFO source=server.go:127 msg="offload to gpu" reallayers=29 layers=29 required="2415.1 MiB" used="2415.1 MiB" available="7100.0 MiB" kv="224.0 MiB" fulloffload="124.0 MiB" partialoffload="570.7 MiB"
Oct 06 19:30:52 Eshan92 ollama[201]: time=2024-10-06T19:30:52.334+05:30 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Oct 06 19:30:52 Eshan92 ollama[201]: time=2024-10-06T19:30:52.334+05:30 level=INFO source=server.go:264 msg="starting llama server" cmd="/tmp/ollama2847391919/runners/cuda_v11/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 29 --port 35307"
Oct 06 19:30:52 Eshan92 ollama[201]: time=2024-10-06T19:30:52.335+05:30 level=INFO source=server.go:389 msg="waiting for llama runner to start responding"
Oct 06 19:30:52 Eshan92 ollama[73237]: {"function":"server_params_parse","level":"INFO","line":2603,"msg":"logging to file is disabled.","tid":"139626538135552","timestamp":1728223252}
Oct 06 19:30:52 Eshan92 ollama[73237]: {"build":1,"commit":"7593639","function":"main","level":"INFO","line":2819,"msg":"build info","tid":"139626538135552","timestamp":1728223252}
Oct 06 19:30:52 Eshan92 ollama[73237]: {"function":"main","level":"INFO","line":2822,"msg":"system info","n_threads":8,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | ","tid":"139626538135552","timestamp":1728223252,"total_threads":16}
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: loaded meta data with 30 key-value pairs and 255 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff (version GGUF V3 (latest))
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 0: general.architecture str = llama
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 1: general.type str = model
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 2: general.name str = Llama 3.2 3B Instruct
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 3: general.finetune str = Instruct
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 4: general.basename str = Llama-3.2
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 5: general.size_label str = 3B
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 6: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 7: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 8: llama.block_count u32 = 28
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 9: llama.context_length u32 = 131072
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 10: llama.embedding_length u32 = 3072
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 11: llama.feed_forward_length u32 = 8192
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 12: llama.attention.head_count u32 = 24
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 13: llama.attention.head_count_kv u32 = 8
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 14: llama.rope.freq_base f32 = 500000.000000
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 15: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 16: llama.attention.key_length u32 = 128
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 17: llama.attention.value_length u32 = 128
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 18: general.file_type u32 = 15
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 19: llama.vocab_size u32 = 128256
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 20: llama.rope.dimension_count u32 = 128
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 21: tokenizer.ggml.model str = gpt2
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 22: tokenizer.ggml.pre str = llama-bpe
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 23: tokenizer.ggml.tokens arr[str,128256] = ["!", """, "#", "$", "%", "&", "'", ...
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 24: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 25: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 26: tokenizer.ggml.bos_token_id u32 = 128000
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 27: tokenizer.ggml.eos_token_id u32 = 128009
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 28: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - kv 29: general.quantization_version u32 = 2
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - type f32: 58 tensors
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - type q4_K: 168 tensors
Oct 06 19:30:52 Eshan92 ollama[201]: llama_model_loader: - type q6_K: 29 tensors
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_vocab: special tokens definition check successful ( 256/128256 ).
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: format = GGUF V3 (latest)
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: arch = llama
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: vocab type = BPE
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: n_vocab = 128256
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: n_merges = 280147
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: n_ctx_train = 131072
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: n_embd = 3072
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: n_head = 24
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: n_head_kv = 8
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: n_layer = 28
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: n_rot = 128
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: n_embd_head_k = 128
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: n_embd_head_v = 128
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: n_gqa = 3
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: n_embd_k_gqa = 1024
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: n_embd_v_gqa = 1024
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: f_norm_eps = 0.0e+00
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: f_norm_rms_eps = 1.0e-05
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: f_logit_scale = 0.0e+00
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: n_ff = 8192
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: n_expert = 0
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: n_expert_used = 0
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: causal attn = 1
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: pooling type = 0
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: rope type = 0
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: rope scaling = linear
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: freq_base_train = 500000.0
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: freq_scale_train = 1
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: n_yarn_orig_ctx = 131072
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: rope_finetuned = unknown
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: ssm_d_conv = 0
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: ssm_d_inner = 0
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: ssm_d_state = 0
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: ssm_dt_rank = 0
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: model type = ?B
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: model ftype = Q4_K - Medium
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: model params = 3.21 B
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: model size = 1.87 GiB (5.01 BPW)
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: general.name = Llama 3.2 3B Instruct
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_print_meta: LF token = 128 'Ä'
Oct 06 19:30:53 Eshan92 ollama[201]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes
Oct 06 19:30:53 Eshan92 ollama[201]: ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
Oct 06 19:30:53 Eshan92 ollama[201]: ggml_cuda_init: found 1 CUDA devices:
Oct 06 19:30:53 Eshan92 ollama[201]: Device 0: NVIDIA GeForce RTX 3070 Ti Laptop GPU, compute capability 8.6, VMM: yes
Oct 06 19:30:53 Eshan92 ollama[201]: llm_load_tensors: ggml ctx size = 0.20 MiB
Oct 06 19:30:53 Eshan92 ollama[201]: llama_model_load: error loading model: done_getting_tensors: wrong number of tensors; expected 255, got 254
Oct 06 19:30:53 Eshan92 ollama[201]: llama_load_model_from_file: exception loading model
Oct 06 19:30:53 Eshan92 ollama[201]: terminate called after throwing an instance of 'std::runtime_error'
Oct 06 19:30:53 Eshan92 ollama[201]: what(): done_getting_tensors: wrong number of tensors; expected 255, got 254
Oct 06 19:30:57 Eshan92 ollama[201]: time=2024-10-06T19:30:57.105+05:30 level=ERROR source=routes.go:120 msg="error loading llama server" error="llama runner process no longer running: -1 "
Oct 06 19:30:57 Eshan92 ollama[201]: [GIN] 2024/10/06 - 19:30:57 | 500 | 9.731425253s | 127.0.0.1 | POST "/api/chat"
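Note that this log shows a different failure from the merges error in the title: the tokenizer metadata is complete (`tokenizer.ggml.merges` with 280147 entries is present), and the load fails exactly one tensor short. Reproducing the arithmetic from the log above:

```python
# Tensor counts as printed by llama_model_loader in the log above.
type_counts = {"f32": 58, "q4_K": 168, "q6_K": 29}
expected = 255  # "loaded meta data with 30 key-value pairs and 255 tensors"
got = 254       # "wrong number of tensors; expected 255, got 254"

total_listed = sum(type_counts.values())
print(total_listed)    # 255 -- the per-type summary matches the header
print(expected - got)  # 1   -- exactly one tensor unaccounted for at load
```

So the file itself is internally consistent; the runner builds a graph with one tensor fewer than the file declares. This off-by-one is tracked in #6966 (reportedly related to tied output/embedding weights in some Llama 3.2 conversions), which is the issue to follow for this variant.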


@alperen21 commented on GitHub (May 31, 2025):

I have the same problem and there seem to be no workarounds.


@rick-github commented on GitHub (May 31, 2025):

Which problem, 'error loading modelvocabulary' or 'wrong number of tensors'? #7020 for the former, #6966 for the latter. Open a new issue and include logs if you are still having problems.


Reference: github-starred/ollama#66521