[GH-ISSUE #10759] --quantize leads to incorrect KVs / tokenizer #69127

Open
opened 2026-05-04 17:14:19 -05:00 by GiteaMirror · 0 comments

Originally created by @jmorganca on GitHub (May 17, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10759

What is the issue?

```
llama_model_load_from_file_impl: using device Metal (Apple M3 Max) - 98303 MiB free
llama_model_loader: loaded meta data with 29 key-value pairs and 363 tensors from /Users/jmorgan/.ollama/models/blobs/sha256-f05ccecc2baeb1d3c4c68b0a61c7b17124fd04efdb8f80fdbb10d0c15df8d8f1 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                          general.file_type u32              = 15
llama_model_loader: - kv   2:                    general.parameter_count u64              = 23572403200
llama_model_loader: - kv   3:               general.quantization_version u32              = 2
llama_model_loader: - kv   4:                         general.size_label str              = 24B
llama_model_loader: - kv   5:                               general.type str              = model
llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  11:                          llama.block_count u32              = 40
llama_model_loader: - kv  12:                       llama.context_length u32              = 32768
llama_model_loader: - kv  13:                     llama.embedding_length u32              = 5120
llama_model_loader: - kv  14:                  llama.feed_forward_length u32              = 32768
llama_model_loader: - kv  15:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  16:                       llama.rope.freq_base f32              = 1000000000.000000
llama_model_loader: - kv  17:                           llama.vocab_size u32              = 131072
llama_model_loader: - kv  18:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  19:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  20:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  21:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  22:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  23:                      tokenizer.ggml.merges arr[str,0]       = []
llama_model_loader: - kv  24:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  25:                         tokenizer.ggml.pre str              = tekken
llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,0]       = []
llama_model_loader: - kv  27:                      tokenizer.ggml.tokens arr[str,0]       = []
llama_model_loader: - kv  28:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - type  f32:   81 tensors
llama_model_loader: - type q4_K:  241 tensors
llama_model_loader: - type q6_K:   41 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 13.34 GiB (4.86 BPW) 
load: model vocab missing newline token, using special_pad_id instead
load: bad special token: 'tokenizer.ggml.bos_token_id' = 1, using default id 11
load: bad special token: 'tokenizer.ggml.eos_token_id' = 2, using default id 11
load: bad special token: 'tokenizer.ggml.unknown_token_id' = 0, using default id -1
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 0
load: token to piece cache size = 0.0000 MB
```
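
The key symptom above is that the tokenizer arrays are empty: `tokenizer.ggml.merges`, `tokenizer.ggml.token_type`, and `tokenizer.ggml.tokens` are all `arr[...,0] = []`, even though `llama.vocab_size` still says 131072. With an empty vocabulary, the stored BOS/EOS/unknown ids point at tokens that do not exist, which is exactly why the loader falls back with the `bad special token` warnings. The blob was presumably produced by an `ollama create ... --quantize q4_K_M` run, matching the `Q4_K - Medium` file type in the log.

To confirm the empty KVs independently of llama.cpp's loader, here is a minimal sketch that parses only the GGUF key/value header and prints the `tokenizer.*` entries. It assumes a little-endian GGUF v2/v3 file and takes the blob path (e.g. the `sha256-f05cce...` path above) as its argument; the script is illustrative and not part of Ollama or llama.cpp.

```python
import struct
import sys

# GGUF metadata value-type ids, per the GGUF spec.
(T_U8, T_I8, T_U16, T_I16, T_U32, T_I32,
 T_F32, T_BOOL, T_STR, T_ARR, T_U64, T_I64, T_F64) = range(13)
FMT = {T_U8: "<B", T_I8: "<b", T_U16: "<H", T_I16: "<h",
       T_U32: "<I", T_I32: "<i", T_F32: "<f", T_BOOL: "<?",
       T_U64: "<Q", T_I64: "<q", T_F64: "<d"}

def read_u32(f): return struct.unpack("<I", f.read(4))[0]
def read_u64(f): return struct.unpack("<Q", f.read(8))[0]
def read_str(f): return f.read(read_u64(f)).decode("utf-8", "replace")

def read_value(f, t):
    """Consume one value of type t and return a printable summary."""
    if t == T_STR:
        return repr(read_str(f))
    if t == T_ARR:
        elem_t, n = read_u32(f), read_u64(f)
        for _ in range(n):          # consume elements; only the length matters here
            read_value(f, elem_t)
        return f"array len={n}"
    fmt = FMT[t]
    return struct.unpack(fmt, f.read(struct.calcsize(fmt)))[0]

with open(sys.argv[1], "rb") as f:
    assert f.read(4) == b"GGUF", "not a GGUF file"
    version, n_tensors, n_kv = read_u32(f), read_u64(f), read_u64(f)
    print(f"GGUF v{version}: {n_tensors} tensors, {n_kv} KV pairs")
    for _ in range(n_kv):
        key, t = read_str(f), read_u32(f)
        value = read_value(f, t)    # every value must be read to stay aligned
        if key.startswith("tokenizer."):
            print(f"{key} = {value}")
```

Against the quantized blob above, this should report `array len=0` for tokens, merges, and token_type; against the pre-quantization source model, it should report `tokenizer.ggml.tokens = array len=131072` if the KVs were intact before the quantize step.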

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-05-04 17:14:19 -05:00