[GH-ISSUE #8571] running deepseek r1 671b on 64GB / 128GB ram mac gives Error: llama runner process has terminated: signal: killed #67592

Closed
opened 2026-05-04 10:56:36 -05:00 by GiteaMirror · 38 comments

Originally created by @duttaoindril on GitHub (Jan 25, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8571

What is the issue?

After waiting all day for the model to download, `ollama run deepseek-r1:671b` fails to run with the error `Error: llama runner process has terminated: signal: killed`.

I can run the deepseek-r1:70b llama model just fine.

I'm running a MacBook M3 Pro with 64GB RAM, so I'm assuming it's failing due to lack of memory?

  • How do I know the real memory requirements for a model? I don't think it's obvious on the Ollama page.
  • Any way to fix this at all? I tried it on my 128GB M1 Ultra Mac Studio and got the same error. I'd really love to run this locally, so I would appreciate any help!

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.5.7

GiteaMirror added the bug label 2026-05-04 10:56:36 -05:00

@rick-github commented on GitHub (Jan 25, 2025):

![Image](https://github.com/user-attachments/assets/d3edbf0c-42c3-404f-a130-67287687246e)

You will need to add a lot of swap to run this model.


@rick-github commented on GitHub (Jan 25, 2025):

It would be interesting to see the [server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) because I would have expected ollama to refuse to load the model in the face of insufficient resources.


@duttaoindril commented on GitHub (Jan 25, 2025):

llm_load_tensors: offloading 9 repeating layers to GPU
llm_load_tensors: offloaded 9/62 layers to GPU
llm_load_tensors:          CPU model buffer size = 322657.21 MiB
llm_load_tensors:        Metal model buffer size = 63032.41 MiB
time=2025-01-24T14:30:52.857-08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server not responding"
time=2025-01-24T14:30:54.663-08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-01-24T14:30:54.926-08:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: signal: killed"
[GIN] 2025/01/24 - 14:30:54 | 500 |          1m6s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/01/24 - 15:53:26 | 200 |    2.065042ms |       127.0.0.1 | HEAD     "/"
[GIN] 2025/01/24 - 15:53:26 | 200 |   31.410791ms |       127.0.0.1 | POST     "/api/show"
time=2025-01-24T15:53:26.909-08:00 level=INFO source=server.go:104 msg="system memory" total="64.0 GiB" free="32.4 GiB" free_swap="0 B"
time=2025-01-24T15:53:26.910-08:00 level=INFO source=memory.go:356 msg="offload to metal" layers.requested=-1 layers.model=62 layers.offload=9 layers.split="" memory.available="[48.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="413.6 GiB" memory.required.partial="43.9 GiB" memory.required.kv="9.5 GiB" memory.required.allocations="[43.9 GiB]" memory.weights.total="385.0 GiB" memory.weights.repeating="384.3 GiB" memory.weights.nonrepeating="725.0 MiB" memory.graph.full="654.0 MiB" memory.graph.partial="654.0 MiB"
time=2025-01-24T15:53:26.911-08:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --model /Users/od/.ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 --ctx-size 2048 --batch-size 512 --n-gpu-layers 9 --threads 12 --no-mmap --parallel 1 --port 54470"
time=2025-01-24T15:53:26.912-08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-01-24T15:53:26.913-08:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-01-24T15:53:26.913-08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-01-24T15:53:26.942-08:00 level=INFO source=runner.go:936 msg="starting go runner"
time=2025-01-24T15:53:26.943-08:00 level=INFO source=runner.go:937 msg=system info="Metal : EMBED_LIBRARY = 1 | CPU : NEON = 1 | ARM_FMA = 1 | FP16_VA = 1 | DOTPROD = 1 | LLAMAFILE = 1 | ACCELERATE = 1 | AARCH64_REPACK = 1 | Metal : EMBED_LIBRARY = 1 | CPU : NEON = 1 | ARM_FMA = 1 | FP16_VA = 1 | DOTPROD = 1 | LLAMAFILE = 1 | ACCELERATE = 1 | AARCH64_REPACK = 1 | cgo(clang)" threads=12
time=2025-01-24T15:53:26.943-08:00 level=INFO source=runner.go:995 msg="Server listening on 127.0.0.1:54470"
llama_load_model_from_file: using device Metal (Apple M3 Max) - 49151 MiB free
llama_model_loader: loaded meta data with 42 key-value pairs and 1025 tensors from /Users/od/.ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = deepseek2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                         general.size_label str              = 256x20B
llama_model_loader: - kv   3:                      deepseek2.block_count u32              = 61
llama_model_loader: - kv   4:                   deepseek2.context_length u32              = 163840
llama_model_loader: - kv   5:                 deepseek2.embedding_length u32              = 7168
llama_model_loader: - kv   6:              deepseek2.feed_forward_length u32              = 18432
llama_model_loader: - kv   7:             deepseek2.attention.head_count u32              = 128
llama_model_loader: - kv   8:          deepseek2.attention.head_count_kv u32              = 128
llama_model_loader: - kv   9:                   deepseek2.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  10: deepseek2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  11:                deepseek2.expert_used_count u32              = 8
llama_model_loader: - kv  12:        deepseek2.leading_dense_block_count u32              = 3
llama_model_loader: - kv  13:                       deepseek2.vocab_size u32              = 129280
llama_model_loader: - kv  14:            deepseek2.attention.q_lora_rank u32              = 1536
llama_model_loader: - kv  15:           deepseek2.attention.kv_lora_rank u32              = 512
llama_model_loader: - kv  16:             deepseek2.attention.key_length u32              = 192
llama_model_loader: - kv  17:           deepseek2.attention.value_length u32              = 128
llama_model_loader: - kv  18:       deepseek2.expert_feed_forward_length u32              = 2048
llama_model_loader: - kv  19:                     deepseek2.expert_count u32              = 256
llama_model_loader: - kv  20:              deepseek2.expert_shared_count u32              = 1
llama_model_loader: - kv  21:             deepseek2.expert_weights_scale f32              = 2.500000
llama_model_loader: - kv  22:              deepseek2.expert_weights_norm bool             = true
llama_model_loader: - kv  23:               deepseek2.expert_gating_func u32              = 2
llama_model_loader: - kv  24:             deepseek2.rope.dimension_count u32              = 64
llama_model_loader: - kv  25:                deepseek2.rope.scaling.type str              = yarn
llama_model_loader: - kv  26:              deepseek2.rope.scaling.factor f32              = 40.000000
llama_model_loader: - kv  27: deepseek2.rope.scaling.original_context_length u32              = 4096
llama_model_loader: - kv  28: deepseek2.rope.scaling.yarn_log_multiplier f32              = 0.100000
llama_model_loader: - kv  29:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  30:                         tokenizer.ggml.pre str              = deepseek-v3
llama_model_loader: - kv  31:                      tokenizer.ggml.tokens arr[str,129280]  = ["<|begin▁of▁sentence|>", "<�...
llama_model_loader: - kv  32:                  tokenizer.ggml.token_type arr[i32,129280]  = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  33:                      tokenizer.ggml.merges arr[str,127741]  = ["Ġ t", "Ġ a", "i n", "Ġ Ġ", "h e...
llama_model_loader: - kv  34:                tokenizer.ggml.bos_token_id u32              = 0
llama_model_loader: - kv  35:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  36:            tokenizer.ggml.padding_token_id u32              = 1
llama_model_loader: - kv  37:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  38:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  39:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  40:               general.quantization_version u32              = 2
llama_model_loader: - kv  41:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  361 tensors
llama_model_loader: - type q4_K:  606 tensors
llama_model_loader: - type q6_K:   58 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 818
time=2025-01-24T15:53:27.164-08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: token to piece cache size = 0.8223 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = deepseek2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 129280
llm_load_print_meta: n_merges         = 127741
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 163840
llm_load_print_meta: n_embd           = 7168
llm_load_print_meta: n_layer          = 61
llm_load_print_meta: n_head           = 128
llm_load_print_meta: n_head_kv        = 128
llm_load_print_meta: n_rot            = 64
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 192
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 24576
llm_load_print_meta: n_embd_v_gqa     = 16384
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 18432
llm_load_print_meta: n_expert         = 256
llm_load_print_meta: n_expert_used    = 8
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = yarn
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 0.025
llm_load_print_meta: n_ctx_orig_yarn  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 671B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 671.03 B
llm_load_print_meta: model size       = 376.65 GiB (4.82 BPW)
llm_load_print_meta: general.name     = n/a
llm_load_print_meta: BOS token        = 0 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token        = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token        = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: PAD token        = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token         = 131 'Ä'
llm_load_print_meta: FIM PRE token    = 128801 '<|fim▁begin|>'
llm_load_print_meta: FIM SUF token    = 128800 '<|fim▁hole|>'
llm_load_print_meta: FIM MID token    = 128802 '<|fim▁end|>'
llm_load_print_meta: EOG token        = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: max token length = 256
llm_load_print_meta: n_layer_dense_lead   = 3
llm_load_print_meta: n_lora_q             = 1536
llm_load_print_meta: n_lora_kv            = 512
llm_load_print_meta: n_ff_exp             = 2048
llm_load_print_meta: n_expert_shared      = 1
llm_load_print_meta: expert_weights_scale = 2.5
llm_load_print_meta: expert_weights_norm  = 1
llm_load_print_meta: expert_gating_func   = sigmoid
llm_load_print_meta: rope_yarn_log_mul    = 0.1000
llm_load_tensors: offloading 9 repeating layers to GPU
llm_load_tensors: offloaded 9/62 layers to GPU
llm_load_tensors:          CPU model buffer size = 322657.21 MiB
llm_load_tensors:        Metal model buffer size = 63032.41 MiB
time=2025-01-24T15:54:14.784-08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server not responding"
time=2025-01-24T15:54:16.509-08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-01-24T15:54:16.769-08:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: signal: killed"
[GIN] 2025/01/24 - 15:54:16 | 500 | 49.907095958s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/01/24 - 16:04:03 | 200 |       2.329ms |       127.0.0.1 | GET      "/api/version"

copied as much as I could


@duttaoindril commented on GitHub (Jan 25, 2025):

lol how do I add almost 350GB of swap?

I don't think I even have that much storage left 😅


@rick-github commented on GitHub (Jan 25, 2025):

llm_load_tensors:          CPU model buffer size = 322657.21 MiB
llm_load_tensors:        Metal model buffer size = 63032.41 MiB

So 322G in system RAM and 63G in GPU memory. I believe macOS allocates swap dynamically so it grows automatically, which I guess is why ollama didn't reject the load outright. Maybe that's also why it eventually failed, though: it may have hit some system limit. `signal: killed` suggests an active policy took the runner out. Do you have system logs showing kernel issues on macOS?


@rick-github commented on GitHub (Jan 25, 2025):

time=2025-01-24T15:53:26.909-08:00 level=INFO source=server.go:104 msg="system memory" total="64.0 GiB" free="32.4 GiB" free_swap="0 B"
time=2025-01-24T15:53:26.910-08:00 level=INFO source=memory.go:356 msg="offload to metal" layers.requested=-1 layers.model=62 layers.offload=9 layers.split="" memory.available="[48.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="413.6 GiB" memory.required.partial="43.9 GiB" memory.required.kv="9.5 GiB" memory.required.allocations="[43.9 GiB]" memory.weights.total="385.0 GiB" memory.weights.repeating="384.3 GiB" memory.weights.nonrepeating="725.0 MiB" memory.graph.full="654.0 MiB" memory.graph.partial="654.0 MiB"

Or maybe not: free swap was reported as 0. I'm afraid virtual memory management on macOS is a mystery to me; you'll have to dig out the manual or do some internet searching to figure out how to expand your swap enough to get the model loaded.
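For anyone following along, a quick way to see what macOS has currently allocated is the `vm.swapusage` sysctl; the sketch below (nothing Ollama-specific, just shelling out to `sysctl` from Python) prints it:

```python
import subprocess

# Print macOS's current swap allocation. macOS grows swap dynamically as long as the
# boot volume has free space, so "total" changes over time rather than being a fixed knob.
result = subprocess.run(["sysctl", "vm.swapusage"], capture_output=True, text=True, check=True)
print(result.stdout.strip())
# e.g. vm.swapusage: total = 2048.00M  used = 1058.75M  free = 989.25M  (encrypted)
```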


@SeekPoint commented on GitHub (Jan 26, 2025):

Can I run ds-r1 651GB on 4×2080 Ti 22GB cards and 512GB of CPU memory?


@rick-github commented on GitHub (Jan 26, 2025):

Theoretically, yes. It won't be very fast, though.


@neuhaus commented on GitHub (Jan 28, 2025):

@duttaoindril what outcome do you expect? You have way too little memory for this huge model. Running it using swap works in theory, but inference will be extremely slow, completely unusable. You are wasting your (and everyone else's) time.

To answer your question, this model has 671 billion weights, and q4 means they are ~4 bits each, so roughly 336GB of RAM is required just to load the model, and you also need more RAM for context.
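As a rough sanity check of that arithmetic against the numbers in the logs above (671.03B parameters, 4.82 bits/weight for this Q4_K_M build, and the 9.5 GiB KV cache ollama reported at the default 2048-token context); this is a back-of-the-envelope sketch, not how ollama itself budgets memory:

```python
# Rough memory estimate for a quantized model: params * bits-per-weight / 8, plus KV cache.
# All figures below are taken from the server log earlier in this thread.
params = 671.03e9          # "model params = 671.03 B"
bits_per_weight = 4.82     # "model size = 376.65 GiB (4.82 BPW)" for Q4_K_M
kv_cache_gib = 9.5         # "memory.required.kv" at the default 2048-token context

weights_gib = params * bits_per_weight / 8 / 2**30
print(f"weights ~ {weights_gib:.1f} GiB, total ~ {weights_gib + kv_cache_gib:.1f} GiB")
# weights ~ 376.5 GiB, total ~ 386.0 GiB, before compute buffers and other overhead --
# several times the 64 GiB (or even 128 GiB) of unified memory on these machines.
```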


@fserb commented on GitHub (Jan 28, 2025):

I don't think that's what's going on. This is not just a "there's not enough RAM, so you're going to swap and get killed" issue.

I got a compressed model from https://huggingface.co/unsloth/DeepSeek-R1-GGUF (it's 150GB instead of 404GB). I had to merge it (because ollama doesn't support split GGUF yet), but then, if I try to `ollama run` it, I get the same error as the OP (`Error: llama runner process has terminated: signal: killed`) with similar logs.
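For anyone reproducing the merge step, it might look like the sketch below; this assumes llama.cpp's gguf-split tool (named `llama-gguf-split` in recent builds) is built and on PATH, and uses the unsloth shard filenames:

```python
import subprocess

# Merge the split GGUF shards into a single file so it can be imported into ollama.
# gguf-split's --merge mode reads the first shard and finds the rest via split metadata.
subprocess.run([
    "llama-gguf-split", "--merge",
    "DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf",   # first shard of the unsloth download
    "DeepSeek-R1-UD-IQ1_S.gguf",                  # merged single-file output
], check=True)
```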

But if I run the same model file with llama.cpp (actual line: `llama-cli --model DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf --cache-type-k q4_0 --threads 12 -no-cnv --n-gpu-layers 7 --prio 2 --temp 0.6 --ctx-size 8192 --seed 3407`), it works fine on the same machine. (By "fine" I mean a couple of tokens per second.)

ollama killed logs
llama_load_model_from_file: using device Metal (Apple M3 Max) - 49151 MiB free
llama_model_loader: loaded meta data with 52 key-value pairs and 1025 tensors from /Users/fserb/.ollama/models/blobs/sha256-a542caee8df72af41ad48d75b94adacb5fbc61856930460bd599d835400fb3b6 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = deepseek2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 BF16
llama_model_loader: - kv   3:                       general.quantized_by str              = Unsloth
llama_model_loader: - kv   4:                         general.size_label str              = 256x20B
llama_model_loader: - kv   5:                           general.repo_url str              = https://huggingface.co/unsloth
llama_model_loader: - kv   6:                      deepseek2.block_count u32              = 61
llama_model_loader: - kv   7:                   deepseek2.context_length u32              = 163840
llama_model_loader: - kv   8:                 deepseek2.embedding_length u32              = 7168
llama_model_loader: - kv   9:              deepseek2.feed_forward_length u32              = 18432
llama_model_loader: - kv  10:             deepseek2.attention.head_count u32              = 128
llama_model_loader: - kv  11:          deepseek2.attention.head_count_kv u32              = 128
llama_model_loader: - kv  12:                   deepseek2.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  13: deepseek2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  14:                deepseek2.expert_used_count u32              = 8
llama_model_loader: - kv  15:        deepseek2.leading_dense_block_count u32              = 3
llama_model_loader: - kv  16:                       deepseek2.vocab_size u32              = 129280
llama_model_loader: - kv  17:            deepseek2.attention.q_lora_rank u32              = 1536
llama_model_loader: - kv  18:           deepseek2.attention.kv_lora_rank u32              = 512
llama_model_loader: - kv  19:             deepseek2.attention.key_length u32              = 192
llama_model_loader: - kv  20:           deepseek2.attention.value_length u32              = 128
llama_model_loader: - kv  21:       deepseek2.expert_feed_forward_length u32              = 2048
llama_model_loader: - kv  22:                     deepseek2.expert_count u32              = 256
llama_model_loader: - kv  23:              deepseek2.expert_shared_count u32              = 1
llama_model_loader: - kv  24:             deepseek2.expert_weights_scale f32              = 2.500000
llama_model_loader: - kv  25:              deepseek2.expert_weights_norm bool             = true
llama_model_loader: - kv  26:               deepseek2.expert_gating_func u32              = 2
llama_model_loader: - kv  27:             deepseek2.rope.dimension_count u32              = 64
llama_model_loader: - kv  28:                deepseek2.rope.scaling.type str              = yarn
llama_model_loader: - kv  29:              deepseek2.rope.scaling.factor f32              = 40.000000
llama_model_loader: - kv  30: deepseek2.rope.scaling.original_context_length u32              = 4096
llama_model_loader: - kv  31: deepseek2.rope.scaling.yarn_log_multiplier f32              = 0.100000
llama_model_loader: - kv  32:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  33:                         tokenizer.ggml.pre str              = deepseek-v3
llama_model_loader: - kv  34:                      tokenizer.ggml.tokens arr[str,129280]  = ["<|begin▁of▁sentence|>", "<�...
llama_model_loader: - kv  35:                  tokenizer.ggml.token_type arr[i32,129280]  = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  36:                      tokenizer.ggml.merges arr[str,127741]  = ["Ġ t", "Ġ a", "i n", "Ġ Ġ", "h e...
llama_model_loader: - kv  37:                tokenizer.ggml.bos_token_id u32              = 0
llama_model_loader: - kv  38:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  39:            tokenizer.ggml.padding_token_id u32              = 128815
llama_model_loader: - kv  40:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  41:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  42:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  43:               general.quantization_version u32              = 2
llama_model_loader: - kv  44:                          general.file_type u32              = 24
llama_model_loader: - kv  45:                      quantize.imatrix.file str              = DeepSeek-R1.imatrix
llama_model_loader: - kv  46:                   quantize.imatrix.dataset str              = /training_data/calibration_datav3.txt
llama_model_loader: - kv  47:             quantize.imatrix.entries_count i32              = 720
llama_model_loader: - kv  48:              quantize.imatrix.chunks_count i32              = 124
llama_model_loader: - kv  49:                                   split.no u16              = 0
llama_model_loader: - kv  50:                        split.tensors.count i32              = 1025
llama_model_loader: - kv  51:                                split.count u16              = 0
llama_model_loader: - type  f32:  361 tensors
llama_model_loader: - type q4_K:  190 tensors
llama_model_loader: - type q5_K:  116 tensors
llama_model_loader: - type q6_K:  184 tensors
llama_model_loader: - type iq2_xxs:    6 tensors
llama_model_loader: - type iq1_s:  168 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 819
llm_load_vocab: token to piece cache size = 0.8223 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = deepseek2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 129280
llm_load_print_meta: n_merges         = 127741
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 163840
llm_load_print_meta: n_embd           = 7168
llm_load_print_meta: n_layer          = 61
llm_load_print_meta: n_head           = 128
llm_load_print_meta: n_head_kv        = 128
llm_load_print_meta: n_rot            = 64
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 192
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 24576
llm_load_print_meta: n_embd_v_gqa     = 16384
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 18432
llm_load_print_meta: n_expert         = 256
llm_load_print_meta: n_expert_used    = 8
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = yarn
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 0.025
llm_load_print_meta: n_ctx_orig_yarn  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 671B
llm_load_print_meta: model ftype      = IQ1_S - 1.5625 bpw
llm_load_print_meta: model params     = 671.03 B
llm_load_print_meta: model size       = 130.60 GiB (1.67 BPW) 
llm_load_print_meta: general.name     = DeepSeek R1 BF16
llm_load_print_meta: BOS token        = 0 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token        = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token        = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: PAD token        = 128815 '<|PAD▁TOKEN|>'
llm_load_print_meta: LF token         = 131 'Ä'
llm_load_print_meta: FIM PRE token    = 128801 '<|fim▁begin|>'
llm_load_print_meta: FIM SUF token    = 128800 '<|fim▁hole|>'
llm_load_print_meta: FIM MID token    = 128802 '<|fim▁end|>'
llm_load_print_meta: EOG token        = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: max token length = 256
llm_load_print_meta: n_layer_dense_lead   = 3
llm_load_print_meta: n_lora_q             = 1536
llm_load_print_meta: n_lora_kv            = 512
llm_load_print_meta: n_ff_exp             = 2048
llm_load_print_meta: n_expert_shared      = 1
llm_load_print_meta: expert_weights_scale = 2.5
llm_load_print_meta: expert_weights_norm  = 1
llm_load_print_meta: expert_gating_func   = sigmoid
llm_load_print_meta: rope_yarn_log_mul    = 0.1000
llm_load_tensors: offloading 21 repeating layers to GPU
llm_load_tensors: offloaded 21/62 layers to GPU
llm_load_tensors:          CPU model buffer size = 86620.57 MiB
llm_load_tensors:        Metal model buffer size = 47109.49 MiB
time=2025-01-28T17:57:43.432-05:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
time=2025-01-28T17:58:22.830-05:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server not responding"
time=2025-01-28T17:58:23.987-05:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: signal: killed"
llama-cpp logs
build: 4568 (a4417ddd) with Apple clang version 16.0.0 (clang-1600.0.26.6) for arm64-apple-darwin24.2.0
main: llama backend init
main: load the model and apply lora adapter, if any
llama_model_load_from_file_impl: using device Metal (Apple M3 Max) - 49151 MiB free
llama_model_loader: additional 2 GGUFs metadata loaded.
llama_model_loader: loaded meta data with 52 key-value pairs and 1025 tensors from DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = deepseek2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 BF16
llama_model_loader: - kv   3:                       general.quantized_by str              = Unsloth
llama_model_loader: - kv   4:                         general.size_label str              = 256x20B
llama_model_loader: - kv   5:                           general.repo_url str              = https://huggingface.co/unsloth
llama_model_loader: - kv   6:                      deepseek2.block_count u32              = 61
llama_model_loader: - kv   7:                   deepseek2.context_length u32              = 163840
llama_model_loader: - kv   8:                 deepseek2.embedding_length u32              = 7168
llama_model_loader: - kv   9:              deepseek2.feed_forward_length u32              = 18432
llama_model_loader: - kv  10:             deepseek2.attention.head_count u32              = 128
llama_model_loader: - kv  11:          deepseek2.attention.head_count_kv u32              = 128
llama_model_loader: - kv  12:                   deepseek2.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  13: deepseek2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  14:                deepseek2.expert_used_count u32              = 8
llama_model_loader: - kv  15:        deepseek2.leading_dense_block_count u32              = 3
llama_model_loader: - kv  16:                       deepseek2.vocab_size u32              = 129280
llama_model_loader: - kv  17:            deepseek2.attention.q_lora_rank u32              = 1536
llama_model_loader: - kv  18:           deepseek2.attention.kv_lora_rank u32              = 512
llama_model_loader: - kv  19:             deepseek2.attention.key_length u32              = 192
llama_model_loader: - kv  20:           deepseek2.attention.value_length u32              = 128
llama_model_loader: - kv  21:       deepseek2.expert_feed_forward_length u32              = 2048
llama_model_loader: - kv  22:                     deepseek2.expert_count u32              = 256
llama_model_loader: - kv  23:              deepseek2.expert_shared_count u32              = 1
llama_model_loader: - kv  24:             deepseek2.expert_weights_scale f32              = 2.500000
llama_model_loader: - kv  25:              deepseek2.expert_weights_norm bool             = true
llama_model_loader: - kv  26:               deepseek2.expert_gating_func u32              = 2
llama_model_loader: - kv  27:             deepseek2.rope.dimension_count u32              = 64
llama_model_loader: - kv  28:                deepseek2.rope.scaling.type str              = yarn
llama_model_loader: - kv  29:              deepseek2.rope.scaling.factor f32              = 40.000000
llama_model_loader: - kv  30: deepseek2.rope.scaling.original_context_length u32              = 4096
llama_model_loader: - kv  31: deepseek2.rope.scaling.yarn_log_multiplier f32              = 0.100000
llama_model_loader: - kv  32:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  33:                         tokenizer.ggml.pre str              = deepseek-v3
llama_model_loader: - kv  34:                      tokenizer.ggml.tokens arr[str,129280]  = ["<|begin▁of▁sentence|>", "<�...
llama_model_loader: - kv  35:                  tokenizer.ggml.token_type arr[i32,129280]  = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  36:                      tokenizer.ggml.merges arr[str,127741]  = ["Ġ t", "Ġ a", "i n", "Ġ Ġ", "h e...
llama_model_loader: - kv  37:                tokenizer.ggml.bos_token_id u32              = 0
llama_model_loader: - kv  38:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  39:            tokenizer.ggml.padding_token_id u32              = 128815
llama_model_loader: - kv  40:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  41:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  42:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  43:               general.quantization_version u32              = 2
llama_model_loader: - kv  44:                          general.file_type u32              = 24
llama_model_loader: - kv  45:                      quantize.imatrix.file str              = DeepSeek-R1.imatrix
llama_model_loader: - kv  46:                   quantize.imatrix.dataset str              = /training_data/calibration_datav3.txt
llama_model_loader: - kv  47:             quantize.imatrix.entries_count i32              = 720
llama_model_loader: - kv  48:              quantize.imatrix.chunks_count i32              = 124
llama_model_loader: - kv  49:                                   split.no u16              = 0
llama_model_loader: - kv  50:                        split.tensors.count i32              = 1025
llama_model_loader: - kv  51:                                split.count u16              = 3
llama_model_loader: - type  f32:  361 tensors
llama_model_loader: - type q4_K:  190 tensors
llama_model_loader: - type q5_K:  116 tensors
llama_model_loader: - type q6_K:  184 tensors
llama_model_loader: - type iq2_xxs:    6 tensors
llama_model_loader: - type iq1_s:  168 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = IQ1_S - 1.5625 bpw
print_info: file size   = 130.60 GiB (1.67 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 819
load: token to piece cache size = 0.8223 MB
print_info: arch             = deepseek2
print_info: vocab_only       = 0
print_info: n_ctx_train      = 163840
print_info: n_embd           = 7168
print_info: n_layer          = 61
print_info: n_head           = 128
print_info: n_head_kv        = 128
print_info: n_rot            = 64
print_info: n_swa            = 0
print_info: n_embd_head_k    = 192
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 1
print_info: n_embd_k_gqa     = 24576
print_info: n_embd_v_gqa     = 16384
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: n_ff             = 18432
print_info: n_expert         = 256
print_info: n_expert_used    = 8
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = yarn
print_info: freq_base_train  = 10000.0
print_info: freq_scale_train = 0.025
print_info: n_ctx_orig_yarn  = 4096
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 671B
print_info: model params     = 671.03 B
print_info: general.name     = DeepSeek R1 BF16
print_info: n_layer_dense_lead   = 3
print_info: n_lora_q             = 1536
print_info: n_lora_kv            = 512
print_info: n_ff_exp             = 2048
print_info: n_expert_shared      = 1
print_info: expert_weights_scale = 2.5
print_info: expert_weights_norm  = 1
print_info: expert_gating_func   = sigmoid
print_info: rope_yarn_log_mul    = 0.1000
print_info: vocab type       = BPE
print_info: n_vocab          = 129280
print_info: n_merges         = 127741
print_info: BOS token        = 0 '<|begin▁of▁sentence|>'
print_info: EOS token        = 1 '<|end▁of▁sentence|>'
print_info: EOT token        = 1 '<|end▁of▁sentence|>'
print_info: PAD token        = 128815 '<|PAD▁TOKEN|>'
print_info: LF token         = 131 'Ä'
print_info: FIM PRE token    = 128801 '<|fim▁begin|>'
print_info: FIM SUF token    = 128800 '<|fim▁hole|>'
print_info: FIM MID token    = 128802 '<|fim▁end|>'
print_info: EOG token        = 1 '<|end▁of▁sentence|>'
print_info: max token length = 256
load_tensors: offloading 7 repeating layers to GPU
load_tensors: offloaded 7/62 layers to GPU
load_tensors: Metal_Mapped model buffer size = 15703.17 MiB
load_tensors:   CPU_Mapped model buffer size = 47058.04 MiB
load_tensors:   CPU_Mapped model buffer size = 47109.49 MiB
load_tensors:   CPU_Mapped model buffer size = 23859.37 MiB
llama_init_from_model: n_seq_max     = 1
llama_init_from_model: n_ctx         = 8192
llama_init_from_model: n_ctx_per_seq = 8192
llama_init_from_model: n_batch       = 2048
llama_init_from_model: n_ubatch      = 512
llama_init_from_model: flash_attn    = 0
llama_init_from_model: freq_base     = 10000.0
llama_init_from_model: freq_scale    = 0.025
llama_init_from_model: n_ctx_per_seq (8192) < n_ctx_train (163840) -- the full capacity of the model will not be utilized
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M3 Max
ggml_metal_init: picking default device: Apple M3 Max
ggml_metal_init: using embedded metal library
ggml_metal_init: GPU name:   Apple M3 Max
ggml_metal_init: GPU family: MTLGPUFamilyApple9  (1009)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction   = true
ggml_metal_init: simdgroup matrix mul. = true
ggml_metal_init: has residency sets    = true
ggml_metal_init: has bfloat            = true
ggml_metal_init: use bfloat            = false
ggml_metal_init: hasUnifiedMemory      = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 51539.61 MB
ggml_metal_init: skipping kernel_get_rows_bf16                     (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32                   (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_1row              (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_l4                (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_bf16                  (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_bf16_f32                (not supported)
ggml_metal_init: skipping kernel_mul_mm_bf16_f32                   (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_bf16_f32                (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h64           (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h80           (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h96           (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h112          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h128          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h256          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h128      (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h256      (not supported)
ggml_metal_init: skipping kernel_cpy_f32_bf16                      (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_f32                      (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_bf16                     (not supported)
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'q4_0', type_v = 'f16', n_layer = 61, can_shift = 0
llama_kv_cache_init:      Metal KV buffer size =  2548.00 MiB
llama_kv_cache_init:        CPU KV buffer size = 19656.00 MiB
llama_init_from_model: KV self size  = 22204.00 MiB, K (q4_0): 6588.00 MiB, V (f16): 15616.00 MiB
llama_init_from_model:        CPU  output buffer size =     0.49 MiB
llama_init_from_model:      Metal compute buffer size =  2218.00 MiB
llama_init_from_model:        CPU compute buffer size =  2218.01 MiB
llama_init_from_model: graph nodes  = 5025
llama_init_from_model: graph splits = 1077 (with bs=512), 3 (with bs=1)
common_init_from_params: KV cache shifting is not supported for this model, disabling KV cache shifting
common_init_from_params: setting dry_penalty_last_n to ctx_size = 8192
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
main: llama threadpool init, n_threads = 12

system_info: n_threads = 12 (n_threads_batch = 12) / 16 | Metal : EMBED_LIBRARY = 1 | CPU : NEON = 1 | ARM_FMA = 1 | FP16_VA = 1 | DOTPROD = 1 | LLAMAFILE = 1 | ACCELERATE = 1 | AARCH64_REPACK = 1 | 

sampler seed: 3407
sampler params: 
	repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
	dry_multiplier = 0.000, dry_base = 1.750, dry_allowed_length = 2, dry_penalty_last_n = 8192
	top_k = 40, top_p = 0.950, min_p = 0.050, xtc_probability = 0.000, xtc_threshold = 0.100, typical_p = 1.000, temp = 0.600
	mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampler chain: logits -> logit-bias -> penalties -> dry -> top-k -> typical -> top-p -> min-p -> xtc -> temp-ext -> dist 
generate: n_ctx = 8192, n_batch = 2048, n_predict = -1, n_keep = 1

I'm not 100% sure, but it seems to me that ollama is choosing the wrong number of layers to offload? (Which may then be causing the auto-kill.)
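One way to test that hypothesis is to pin the layer count in ollama itself: the `num_gpu` option controls how many layers are offloaded. A rough sketch against the local API follows; the model name is a placeholder for however the merged GGUF was imported:

```python
import json
import urllib.request

# Ask ollama to offload only 7 layers to Metal, mirroring the --n-gpu-layers 7 setting
# that worked with llama-cli above. "num_gpu" is ollama's layer-offload option.
body = json.dumps({
    "model": "deepseek-r1-iq1s",   # placeholder: whatever name the merged GGUF was imported under
    "prompt": "Why is the sky blue?",
    "options": {"num_gpu": 7},
    "stream": False,
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=body,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```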

llama_model_loader: - kv 0: general.architecture str = deepseek2 llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = DeepSeek R1 BF16 llama_model_loader: - kv 3: general.quantized_by str = Unsloth llama_model_loader: - kv 4: general.size_label str = 256x20B llama_model_loader: - kv 5: general.repo_url str = https://huggingface.co/unsloth llama_model_loader: - kv 6: deepseek2.block_count u32 = 61 llama_model_loader: - kv 7: deepseek2.context_length u32 = 163840 llama_model_loader: - kv 8: deepseek2.embedding_length u32 = 7168 llama_model_loader: - kv 9: deepseek2.feed_forward_length u32 = 18432 llama_model_loader: - kv 10: deepseek2.attention.head_count u32 = 128 llama_model_loader: - kv 11: deepseek2.attention.head_count_kv u32 = 128 llama_model_loader: - kv 12: deepseek2.rope.freq_base f32 = 10000.000000 llama_model_loader: - kv 13: deepseek2.attention.layer_norm_rms_epsilon f32 = 0.000001 llama_model_loader: - kv 14: deepseek2.expert_used_count u32 = 8 llama_model_loader: - kv 15: deepseek2.leading_dense_block_count u32 = 3 llama_model_loader: - kv 16: deepseek2.vocab_size u32 = 129280 llama_model_loader: - kv 17: deepseek2.attention.q_lora_rank u32 = 1536 llama_model_loader: - kv 18: deepseek2.attention.kv_lora_rank u32 = 512 llama_model_loader: - kv 19: deepseek2.attention.key_length u32 = 192 llama_model_loader: - kv 20: deepseek2.attention.value_length u32 = 128 llama_model_loader: - kv 21: deepseek2.expert_feed_forward_length u32 = 2048 llama_model_loader: - kv 22: deepseek2.expert_count u32 = 256 llama_model_loader: - kv 23: deepseek2.expert_shared_count u32 = 1 llama_model_loader: - kv 24: deepseek2.expert_weights_scale f32 = 2.500000 llama_model_loader: - kv 25: deepseek2.expert_weights_norm bool = true llama_model_loader: - kv 26: deepseek2.expert_gating_func u32 = 2 llama_model_loader: - kv 27: deepseek2.rope.dimension_count u32 = 64 llama_model_loader: - kv 28: deepseek2.rope.scaling.type str = yarn llama_model_loader: - kv 29: deepseek2.rope.scaling.factor f32 = 40.000000 llama_model_loader: - kv 30: deepseek2.rope.scaling.original_context_length u32 = 4096 llama_model_loader: - kv 31: deepseek2.rope.scaling.yarn_log_multiplier f32 = 0.100000 llama_model_loader: - kv 32: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 33: tokenizer.ggml.pre str = deepseek-v3 llama_model_loader: - kv 34: tokenizer.ggml.tokens arr[str,129280] = ["<|begin▁of▁sentence|>", "<�... llama_model_loader: - kv 35: tokenizer.ggml.token_type arr[i32,129280] = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 36: tokenizer.ggml.merges arr[str,127741] = ["Ġ t", "Ġ a", "i n", "Ġ Ġ", "h e... llama_model_loader: - kv 37: tokenizer.ggml.bos_token_id u32 = 0 llama_model_loader: - kv 38: tokenizer.ggml.eos_token_id u32 = 1 llama_model_loader: - kv 39: tokenizer.ggml.padding_token_id u32 = 128815 llama_model_loader: - kv 40: tokenizer.ggml.add_bos_token bool = true llama_model_loader: - kv 41: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 42: tokenizer.chat_template str = {% if not add_generation_prompt is de... 
llama_model_loader: - kv 43: general.quantization_version u32 = 2 llama_model_loader: - kv 44: general.file_type u32 = 24 llama_model_loader: - kv 45: quantize.imatrix.file str = DeepSeek-R1.imatrix llama_model_loader: - kv 46: quantize.imatrix.dataset str = /training_data/calibration_datav3.txt llama_model_loader: - kv 47: quantize.imatrix.entries_count i32 = 720 llama_model_loader: - kv 48: quantize.imatrix.chunks_count i32 = 124 llama_model_loader: - kv 49: split.no u16 = 0 llama_model_loader: - kv 50: split.tensors.count i32 = 1025 llama_model_loader: - kv 51: split.count u16 = 3 llama_model_loader: - type f32: 361 tensors llama_model_loader: - type q4_K: 190 tensors llama_model_loader: - type q5_K: 116 tensors llama_model_loader: - type q6_K: 184 tensors llama_model_loader: - type iq2_xxs: 6 tensors llama_model_loader: - type iq1_s: 168 tensors print_info: file format = GGUF V3 (latest) print_info: file type = IQ1_S - 1.5625 bpw print_info: file size = 130.60 GiB (1.67 BPW) load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect load: special tokens cache size = 819 load: token to piece cache size = 0.8223 MB print_info: arch = deepseek2 print_info: vocab_only = 0 print_info: n_ctx_train = 163840 print_info: n_embd = 7168 print_info: n_layer = 61 print_info: n_head = 128 print_info: n_head_kv = 128 print_info: n_rot = 64 print_info: n_swa = 0 print_info: n_embd_head_k = 192 print_info: n_embd_head_v = 128 print_info: n_gqa = 1 print_info: n_embd_k_gqa = 24576 print_info: n_embd_v_gqa = 16384 print_info: f_norm_eps = 0.0e+00 print_info: f_norm_rms_eps = 1.0e-06 print_info: f_clamp_kqv = 0.0e+00 print_info: f_max_alibi_bias = 0.0e+00 print_info: f_logit_scale = 0.0e+00 print_info: n_ff = 18432 print_info: n_expert = 256 print_info: n_expert_used = 8 print_info: causal attn = 1 print_info: pooling type = 0 print_info: rope type = 0 print_info: rope scaling = yarn print_info: freq_base_train = 10000.0 print_info: freq_scale_train = 0.025 print_info: n_ctx_orig_yarn = 4096 print_info: rope_finetuned = unknown print_info: ssm_d_conv = 0 print_info: ssm_d_inner = 0 print_info: ssm_d_state = 0 print_info: ssm_dt_rank = 0 print_info: ssm_dt_b_c_rms = 0 print_info: model type = 671B print_info: model params = 671.03 B print_info: general.name = DeepSeek R1 BF16 print_info: n_layer_dense_lead = 3 print_info: n_lora_q = 1536 print_info: n_lora_kv = 512 print_info: n_ff_exp = 2048 print_info: n_expert_shared = 1 print_info: expert_weights_scale = 2.5 print_info: expert_weights_norm = 1 print_info: expert_gating_func = sigmoid print_info: rope_yarn_log_mul = 0.1000 print_info: vocab type = BPE print_info: n_vocab = 129280 print_info: n_merges = 127741 print_info: BOS token = 0 '<|begin▁of▁sentence|>' print_info: EOS token = 1 '<|end▁of▁sentence|>' print_info: EOT token = 1 '<|end▁of▁sentence|>' print_info: PAD token = 128815 '<|PAD▁TOKEN|>' print_info: LF token = 131 'Ä' print_info: FIM PRE token = 128801 '<|fim▁begin|>' print_info: FIM SUF token = 128800 '<|fim▁hole|>' print_info: FIM MID token = 128802 '<|fim▁end|>' print_info: EOG token = 1 '<|end▁of▁sentence|>' print_info: max token length = 256 load_tensors: offloading 7 repeating layers to GPU load_tensors: offloaded 7/62 layers to GPU load_tensors: Metal_Mapped model buffer size = 15703.17 MiB load_tensors: CPU_Mapped model buffer size = 47058.04 MiB load_tensors: CPU_Mapped model buffer size = 47109.49 MiB load_tensors: CPU_Mapped model buffer size = 23859.37 MiB llama_init_from_model: n_seq_max = 1 
llama_init_from_model: n_ctx = 8192 llama_init_from_model: n_ctx_per_seq = 8192 llama_init_from_model: n_batch = 2048 llama_init_from_model: n_ubatch = 512 llama_init_from_model: flash_attn = 0 llama_init_from_model: freq_base = 10000.0 llama_init_from_model: freq_scale = 0.025 llama_init_from_model: n_ctx_per_seq (8192) < n_ctx_train (163840) -- the full capacity of the model will not be utilized ggml_metal_init: allocating ggml_metal_init: found device: Apple M3 Max ggml_metal_init: picking default device: Apple M3 Max ggml_metal_init: using embedded metal library ggml_metal_init: GPU name: Apple M3 Max ggml_metal_init: GPU family: MTLGPUFamilyApple9 (1009) ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003) ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001) ggml_metal_init: simdgroup reduction = true ggml_metal_init: simdgroup matrix mul. = true ggml_metal_init: has residency sets = true ggml_metal_init: has bfloat = true ggml_metal_init: use bfloat = false ggml_metal_init: hasUnifiedMemory = true ggml_metal_init: recommendedMaxWorkingSetSize = 51539.61 MB ggml_metal_init: skipping kernel_get_rows_bf16 (not supported) ggml_metal_init: skipping kernel_mul_mv_bf16_f32 (not supported) ggml_metal_init: skipping kernel_mul_mv_bf16_f32_1row (not supported) ggml_metal_init: skipping kernel_mul_mv_bf16_f32_l4 (not supported) ggml_metal_init: skipping kernel_mul_mv_bf16_bf16 (not supported) ggml_metal_init: skipping kernel_mul_mv_id_bf16_f32 (not supported) ggml_metal_init: skipping kernel_mul_mm_bf16_f32 (not supported) ggml_metal_init: skipping kernel_mul_mm_id_bf16_f32 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h64 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h80 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h96 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h112 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h128 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h256 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h128 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h256 (not supported) ggml_metal_init: skipping kernel_cpy_f32_bf16 (not supported) ggml_metal_init: skipping kernel_cpy_bf16_f32 (not supported) ggml_metal_init: skipping kernel_cpy_bf16_bf16 (not supported) llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'q4_0', type_v = 'f16', n_layer = 61, can_shift = 0 llama_kv_cache_init: Metal KV buffer size = 2548.00 MiB llama_kv_cache_init: CPU KV buffer size = 19656.00 MiB llama_init_from_model: KV self size = 22204.00 MiB, K (q4_0): 6588.00 MiB, V (f16): 15616.00 MiB llama_init_from_model: CPU output buffer size = 0.49 MiB llama_init_from_model: Metal compute buffer size = 2218.00 MiB llama_init_from_model: CPU compute buffer size = 2218.01 MiB llama_init_from_model: graph nodes = 5025 llama_init_from_model: graph splits = 1077 (with bs=512), 3 (with bs=1) common_init_from_params: KV cache shifting is not supported for this model, disabling KV cache shifting common_init_from_params: setting dry_penalty_last_n to ctx_size = 8192 common_init_from_params: warming up the model with an empty run - please wait ... 
(--no-warmup to disable) main: llama threadpool init, n_threads = 12 system_info: n_threads = 12 (n_threads_batch = 12) / 16 | Metal : EMBED_LIBRARY = 1 | CPU : NEON = 1 | ARM_FMA = 1 | FP16_VA = 1 | DOTPROD = 1 | LLAMAFILE = 1 | ACCELERATE = 1 | AARCH64_REPACK = 1 | sampler seed: 3407 sampler params: repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000 dry_multiplier = 0.000, dry_base = 1.750, dry_allowed_length = 2, dry_penalty_last_n = 8192 top_k = 40, top_p = 0.950, min_p = 0.050, xtc_probability = 0.000, xtc_threshold = 0.100, typical_p = 1.000, temp = 0.600 mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000 sampler chain: logits -> logit-bias -> penalties -> dry -> top-k -> typical -> top-p -> min-p -> xtc -> temp-ext -> dist generate: n_ctx = 8192, n_batch = 2048, n_predict = -1, n_keep = 1 ``` </details> I'm not 100% sure, but it seems to me that olama is choosing the wrong number of layers to offload? (Which then may be causing the auto-kill).
Author
Owner

@rick-github commented on GitHub (Jan 28, 2025):

ollama estimates the number of layers it can offload and passes that figure to llama.cpp, which actually allocates the memory. Each architecture has its own way of doing things, and not all of them are encoded in ollama's memory calculations. As these models get larger and larger, what was previously a minor overage is magnified into a large one, killing the runner when it tries to allocate memory.

As a test, if you limit the layer count in ollama with num_gpu: 7, the model should load. Conversely, if you try to run llama-cpp with --n-gpu-layers 21, it should die.

<!-- gh-comment-id:2620274345 --> @rick-github commented on GitHub (Jan 28, 2025): ollama estimates the number of layers it can offload, and passes that figure to llama.cpp, which actually allocates the memory. Each architecture has it's unique way of doing things, and not all of them are encoded in ollama's memory calculations. As these models get larger and larger, what was previously a minor overage is magnified to a large overage, killing the runner when it tries to allocate memory. As a test, If you limit the layer count in ollama with `num_gpu: 7` then the model should load. Conversely, if your try to run llama-cpp with `--n-gpu-layers 21` then it should die.
Author
Owner

@fserb commented on GitHub (Jan 28, 2025):

I thought num_gpu had been removed. Was it just removed from the documentation?

<!-- gh-comment-id:2620276507 --> @fserb commented on GitHub (Jan 28, 2025): I thought `num_gpu` had been removed. Is it just from the documentation?
Author
Owner

@rick-github commented on GitHub (Jan 28, 2025):

The documentation is going through a refresh; num_gpu was removed from the Modelfile documentation in e54a3c7fcd, but it is still a configurable parameter.
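For anyone following along, here is a minimal sketch of one way to apply that parameter, by layering a Modelfile on top of the existing model. The model name and the new tag below are placeholders, and the layer count is just the test value suggested above, not a recommendation:

```sh
# Sketch: pin the GPU layer count via a Modelfile parameter.
# "deepseek-r1:671b" and the output tag are placeholders.
cat > Modelfile <<'EOF'
FROM deepseek-r1:671b
PARAMETER num_gpu 7
EOF
ollama create deepseek-r1-7layers -f Modelfile
ollama run deepseek-r1-7layers
```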

<!-- gh-comment-id:2620283442 --> @rick-github commented on GitHub (Jan 28, 2025): The documentation is going through a refresh, `num_gpu` was removed from the modelfile documentation in e54a3c7fcd3a66486c3946a7944f3d7ce2daff6c but it's still a configurable parameter.
Author
Owner

@fserb commented on GitHub (Jan 28, 2025):

llama-cpp with --n-gpu-layers 21 does crash with an OOM, but it has a more explicit error:

ggml_metal_graph_compute: command buffer 1 failed with status 5
error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory)
llama_graph_compute: ggml_backend_sched_graph_compute_async failed with error -1

ollama with PARAMETER num_gpu 7 still crashes even though the "Metal model buffer size" is much smaller than total RAM (64 GB). It also crashes with num_gpu: 1.

llm_load_tensors: offloading 7 repeating layers to GPU
llm_load_tensors: offloaded 7/62 layers to GPU
llm_load_tensors:          CPU model buffer size = 118026.90 MiB
llm_load_tensors:        Metal model buffer size = 15703.16 MiB

Could it be that ollama is using the merged version and llama-cpp is using the split one?

<!-- gh-comment-id:2620283559 --> @fserb commented on GitHub (Jan 28, 2025): llama-cpp with `--n-gpu-layers 21` does crash with oom but it has a more explicit error: ``` ggml_metal_graph_compute: command buffer 1 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) llama_graph_compute: ggml_backend_sched_graph_compute_async failed with error -1 ``` olama with `PARAMETER num_gpu 7` still crashes even though the "Metal model buffer size" is much smaller than total RAM (64GB). It also crashes with `num_gpu:1` ``` llm_load_tensors: offloading 7 repeating layers to GPU llm_load_tensors: offloaded 7/62 layers to GPU llm_load_tensors: CPU model buffer size = 118026.90 MiB llm_load_tensors: Metal model buffer size = 15703.16 MiB ``` Could it be that olama is using the merged version and llama-cpp is using the sliced one?
Author
Owner

@fserb commented on GitHub (Jan 28, 2025):

I wonder if this has something to do with the "CPU model buffer size", i.e. ollama is still loading the full model into RAM, as opposed to the llama-cpp log, where the CPU buffer is much smaller?

<!-- gh-comment-id:2620287312 --> @fserb commented on GitHub (Jan 28, 2025): I wonder if this has something to do with "CPU model buffer size", i.e., olama is still loading it in RAM? As opposed to the llama-cpp log where RAM is much smaller?
Author
Owner

@rick-github commented on GitHub (Jan 28, 2025):

Does ollama run it with num_gpu: 0?
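If rebuilding the model each time is a hassle, the same override can also be tried per-request through the API options (a sketch; "deepseek-r1:iq1_s" is just the local tag that appears later in this thread):

```sh
# Sketch: override num_gpu for a single request via the generate API.
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:iq1_s",
  "prompt": "hello",
  "options": { "num_gpu": 0 }
}'
```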

<!-- gh-comment-id:2620292893 --> @rick-github commented on GitHub (Jan 28, 2025): Does ollama run it if `num_gpu:0`?
Author
Owner

@fserb commented on GitHub (Jan 29, 2025):

llm_load_tensors: offloading 0 repeating layers to GPU
llm_load_tensors: offloaded 0/62 layers to GPU
llm_load_tensors:          CPU model buffer size = 133730.06 MiB

It still crashes.

Is there an assumption somewhere that the model must at least fit in RAM?

<!-- gh-comment-id:2620298264 --> @fserb commented on GitHub (Jan 29, 2025): ``` llm_load_tensors: offloading 0 repeating layers to GPU llm_load_tensors: offloaded 0/62 layers to GPU llm_load_tensors: CPU model buffer size = 133730.06 MiB ``` crashes. Is there an assumption somewhere that the model must at least fit in RAM?
Author
Owner

@rick-github commented on GitHub (Jan 29, 2025):

OK, that rules out the GPU as the problem. I see that llama-cpp is being run with a quantized KV cache at 8192 context; is that the same for ollama? Can you try setting OLLAMA_DEBUG=1 in the ollama server environment? I'm not sure if it will show anything useful, but it's worth a try.

Is there an assumption somewhere that the model must at least fit in RAM?

There's a check that size of model + size of context + size of graph < size of free RAM + size of free swap. This is why I was surprised earlier that ollama went ahead and tried to load the model when, on the face of it, there wasn't enough free memory. But I'm not a Mac person, so I don't know the underlying details.
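A back-of-envelope sketch of that check is below. This is a paraphrase, not ollama's actual code, and the GiB figures are rounded from the logs posted in this thread:

```sh
# Rough version of the scheduler's fit check described above.
model=131     # IQ1_S weights (~130.6 GiB)
kv=38         # KV cache at the default context
graph=3       # compute graph
free_ram=50
free_swap=0
if [ $((model + kv + graph)) -lt $((free_ram + free_swap)) ]; then
  echo "fits: ollama should attempt the load"
else
  echo "does not fit: ollama should refuse to load"
fi
```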

<!-- gh-comment-id:2620307481 --> @rick-github commented on GitHub (Jan 29, 2025): OK, ruled out GPU as a problem. I see that llama-cpp is being run with quantized 8192 cache, is that the same for ollama? Can you try setting `OLLAMA_DEBUG=1` in the ollama server environment? I'm not sure if it will show anything useful but worth a try. > Is there an assumption somewhere that the model must at least fit in RAM? There's a calculation that size of model + size of context + size of graph < size of free ram + size of free swap. This is why i was surprised earlier that ollama went ahead and tried to load the model when on the face of it, there wasn't enough free stuff. But I'm not a Mac person so I don't know the underlying details.
Author
Owner

@fserb commented on GitHub (Jan 29, 2025):

I'm not sure how to set the cache. I don't see anything in the logs about KV cache init.

I did OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q4_0 OLLAMA_DEBUG=1 ollama serve

but then:

time=2025-01-28T19:26:52.968-05:00 level=WARN source=server.go:211 msg="flash attention enabled but not supported by gpu"
time=2025-01-28T19:26:52.968-05:00 level=WARN source=server.go:234 msg="quantized kv cache requested but flash attention disabled" type=q4_0
<!-- gh-comment-id:2620330050 --> @fserb commented on GitHub (Jan 29, 2025): I'm not sure how to set the cache. I don't see anything on the logs about KV cache init. I did `OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q4_0 OLLAMA_DEBUG=1 ollama serve` but then: ``` time=2025-01-28T19:26:52.968-05:00 level=WARN source=server.go:211 msg="flash attention enabled but not supported by gpu" time=2025-01-28T19:26:52.968-05:00 level=WARN source=server.go:234 msg="quantized kv cache requested but flash attention disabled" type=q4_0 ```
Author
Owner

@rick-github commented on GitHub (Jan 29, 2025):

Could you add a full ollama log? The log snippets so far are missing details on device detection, memory calculations, context size, etc. that could be helpful.

<!-- gh-comment-id:2620336070 --> @rick-github commented on GitHub (Jan 29, 2025): Could you add a full ollama log? The log snippets so far are missing details on device detection, memory calculations, context size etc that could be helpful.
Author
Owner

@Auto-Rooter commented on GitHub (Jan 29, 2025):

Got the same issue on a MacBook Pro, M3 Max, 128 GB RAM, with 600 GB of free storage.

<!-- gh-comment-id:2620674768 --> @Auto-Rooter commented on GitHub (Jan 29, 2025): Got the same issue on MackBook Pro, M3 Max, 128 GB RAM with 600 GB free capacity.
Author
Owner

@duttaoindril commented on GitHub (Jan 29, 2025):

https://news.ycombinator.com/item?id=42850222

Is it possible to add a 1.58-bit 671b model to ollama?

They describe it as:

Run in Ollama/vLLM

If you want to use Ollama or vLLM for inference on GGUFs, you need to first merge the 3 GGUF split files into 1 like the code below. Then you will need to run the model locally.
./llama.cpp/llama-gguf-split --merge DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf merged_file.gguf
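For reference, once the three shards are merged, getting the result into ollama usually looks something like the sketch below (the path and the model tag are placeholders, matching what was used earlier in this thread):

```sh
# Sketch: import the merged single-file GGUF into ollama.
cat > Modelfile <<'EOF'
FROM ./merged_file.gguf
EOF
ollama create deepseek-r1:iq1_s -f Modelfile
ollama run deepseek-r1:iq1_s
```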

<!-- gh-comment-id:2620712991 --> @duttaoindril commented on GitHub (Jan 29, 2025): https://news.ycombinator.com/item?id=42850222 Is it possible to add a 1.58Bit 671b modelfile on ollama? They describe it as: > # Run in Ollama/vLLM > If you want to use Ollama or vLLM for inference on GGUFS, you need to first merge the 3 GGUF split files into 1 like the code below. Then you will need to run the model locally. > `./llama.cpp/llama-gguf-split --merge \ DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \ merged_file.gguf`
Author
Owner

@rick-github commented on GitHub (Jan 29, 2025):

#8624

<!-- gh-comment-id:2621258993 --> @rick-github commented on GitHub (Jan 29, 2025): #8624
Author
Owner

@fserb commented on GitHub (Jan 29, 2025):

(Still from the local merged file)

$ ollama show deepseek-r1:iq1_s 
  Model
    architecture        deepseek2    
    parameters          671.0B       
    context length      163840       
    embedding length    7168         
    quantization        IQ1_S        

  Parameters
    min_p      0.05                         
    num_gpu    0                            
    stop       "<|begin▁of▁sentence|>"    
    stop       "<|end▁of▁sentence|>"      
    stop       "<|User|>"                 
    stop       "<|Assistant|>"    

$ ollama run deepseek-r1:iq1_s 
Error: llama runner process has terminated: signal: killed        
log
OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q4_0 OLLAMA_DEBUG=1 ollama serve
2025/01/29 06:36:04 routes.go:1187: INFO server config env="map[HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE:q4_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/Users/fserb/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false http_proxy: https_proxy: no_proxy:]"
time=2025-01-29T06:36:04.225-05:00 level=INFO source=images.go:432 msg="total blobs: 12"
time=2025-01-29T06:36:04.225-05:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-01-29T06:36:04.225-05:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7)"
time=2025-01-29T06:36:04.225-05:00 level=DEBUG source=common.go:85 msg="no dynamic runners detected, using only built-in"
time=2025-01-29T06:36:04.225-05:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners=[metal]
time=2025-01-29T06:36:04.225-05:00 level=DEBUG source=routes.go:1268 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2025-01-29T06:36:04.225-05:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2025-01-29T06:36:04.290-05:00 level=INFO source=types.go:131 msg="inference compute" id=0 library=metal variant="" compute="" driver=0.0 name="" total="48.0 GiB" available="48.0 GiB"
[GIN] 2025/01/29 - 06:36:08 | 200 |      37.875µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/01/29 - 06:36:08 | 200 |   12.464458ms |       127.0.0.1 | POST     "/api/show"
time=2025-01-29T06:36:08.340-05:00 level=DEBUG source=sched.go:181 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=0x1033578c0 gpu_count=1
time=2025-01-29T06:36:08.359-05:00 level=DEBUG source=sched.go:211 msg="cpu mode with first model, loading"
time=2025-01-29T06:36:08.359-05:00 level=INFO source=server.go:104 msg="system memory" total="64.0 GiB" free="50.1 GiB" free_swap="0 B"
time=2025-01-29T06:36:08.359-05:00 level=DEBUG source=memory.go:107 msg=evaluating library=cpu gpu_count=1 available="[50.1 GiB]"
time=2025-01-29T06:36:08.360-05:00 level=INFO source=memory.go:356 msg="offload to cpu" layers.requested=0 layers.model=62 layers.offload=0 layers.split="" memory.available="[50.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="172.7 GiB" memory.required.partial="0 B" memory.required.kv="38.1 GiB" memory.required.allocations="[990.7 MiB]" memory.weights.total="167.5 GiB" memory.weights.repeating="166.8 GiB" memory.weights.nonrepeating="725.0 MiB" memory.graph.full="2.2 GiB" memory.graph.partial="3.0 GiB"
time=2025-01-29T06:36:08.360-05:00 level=WARN source=server.go:211 msg="flash attention enabled but not supported by gpu"
time=2025-01-29T06:36:08.360-05:00 level=WARN source=server.go:234 msg="quantized kv cache requested but flash attention disabled" type=q4_0
time=2025-01-29T06:36:08.361-05:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/opt/homebrew/bin/ollama runner --model /Users/fserb/.ollama/models/blobs/sha256-a542caee8df72af41ad48d75b94adacb5fbc61856930460bd599d835400fb3b6 --ctx-size 8192 --batch-size 512 --n-gpu-layers 0 --verbose --threads 12 --no-mmap --parallel 4 --port 51336"
time=2025-01-29T06:36:08.361-05:00 level=DEBUG source=server.go:393 msg=subprocess environment="[PATH=/Users/fserb/.yarn/bin:/Users/fserb/.config/yarn/global/node_modules/.bin:/opt/homebrew/opt/openjdk/bin:/opt/homebrew/opt/util-linux/sbin:/opt/homebrew/opt/util-linux/bin:/opt/homebrew/bin:/opt/homebrew/sbin:/Users/fserb/.cargo/bin:/Users/fserb/.go/bin:/usr/local/opt/zip/bin:/Users/fserb/.yarn/bin:/Users/fserb/.config/yarn/global/node_modules/.bin:/Users/fserb/.deno/bin:/usr/local/lib/ruby/gems/3.0.0/bin/:/usr/local/opt/ruby/bin:/usr/local/bin:/usr/local/sbin:/Users/fserb/bin:/opt/homebrew/opt/openjdk/bin:/opt/homebrew/opt/util-linux/sbin:/opt/homebrew/opt/util-linux/bin:/opt/homebrew/bin:/opt/homebrew/sbin:/Users/fserb/.cargo/bin:/Users/fserb/.go/bin:/usr/local/opt/zip/bin:/Users/fserb/.yarn/bin:/Users/fserb/.config/yarn/global/node_modules/.bin:/usr/local/lib/ruby/gems/3.0.0/bin/:/usr/local/opt/ruby/bin:/usr/local/bin:/usr/local/sbin:/Users/fserb/bin:/usr/local/bin:/System/Cryptexes/App/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin:/Applications/kitty.app/Contents/MacOS:./node_modules/.bin:/Users/fserb/.deno/bin:./node_modules/.bin LD_LIBRARY_PATH=/opt/homebrew/bin]"
time=2025-01-29T06:36:08.363-05:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-01-29T06:36:08.363-05:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-01-29T06:36:08.363-05:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-01-29T06:36:08.371-05:00 level=INFO source=runner.go:936 msg="starting go runner"
time=2025-01-29T06:36:08.372-05:00 level=INFO source=runner.go:937 msg=system info="Metal : EMBED_LIBRARY = 1 | CPU : NEON = 1 | ARM_FMA = 1 | FP16_VA = 1 | DOTPROD = 1 | LLAMAFILE = 1 | ACCELERATE = 1 | AARCH64_REPACK = 1 | Metal : EMBED_LIBRARY = 1 | CPU : NEON = 1 | ARM_FMA = 1 | FP16_VA = 1 | DOTPROD = 1 | LLAMAFILE = 1 | ACCELERATE = 1 | AARCH64_REPACK = 1 | cgo(clang)" threads=12
time=2025-01-29T06:36:08.372-05:00 level=INFO source=runner.go:995 msg="Server listening on 127.0.0.1:51336"
llama_load_model_from_file: using device Metal (Apple M3 Max) - 49151 MiB free
llama_model_loader: loaded meta data with 52 key-value pairs and 1025 tensors from /Users/fserb/.ollama/models/blobs/sha256-a542caee8df72af41ad48d75b94adacb5fbc61856930460bd599d835400fb3b6 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = deepseek2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 BF16
llama_model_loader: - kv   3:                       general.quantized_by str              = Unsloth
llama_model_loader: - kv   4:                         general.size_label str              = 256x20B
llama_model_loader: - kv   5:                           general.repo_url str              = https://huggingface.co/unsloth
llama_model_loader: - kv   6:                      deepseek2.block_count u32              = 61
llama_model_loader: - kv   7:                   deepseek2.context_length u32              = 163840
llama_model_loader: - kv   8:                 deepseek2.embedding_length u32              = 7168
llama_model_loader: - kv   9:              deepseek2.feed_forward_length u32              = 18432
llama_model_loader: - kv  10:             deepseek2.attention.head_count u32              = 128
llama_model_loader: - kv  11:          deepseek2.attention.head_count_kv u32              = 128
llama_model_loader: - kv  12:                   deepseek2.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  13: deepseek2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  14:                deepseek2.expert_used_count u32              = 8
llama_model_loader: - kv  15:        deepseek2.leading_dense_block_count u32              = 3
llama_model_loader: - kv  16:                       deepseek2.vocab_size u32              = 129280
llama_model_loader: - kv  17:            deepseek2.attention.q_lora_rank u32              = 1536
llama_model_loader: - kv  18:           deepseek2.attention.kv_lora_rank u32              = 512
llama_model_loader: - kv  19:             deepseek2.attention.key_length u32              = 192
llama_model_loader: - kv  20:           deepseek2.attention.value_length u32              = 128
llama_model_loader: - kv  21:       deepseek2.expert_feed_forward_length u32              = 2048
llama_model_loader: - kv  22:                     deepseek2.expert_count u32              = 256
llama_model_loader: - kv  23:              deepseek2.expert_shared_count u32              = 1
llama_model_loader: - kv  24:             deepseek2.expert_weights_scale f32              = 2.500000
llama_model_loader: - kv  25:              deepseek2.expert_weights_norm bool             = true
llama_model_loader: - kv  26:               deepseek2.expert_gating_func u32              = 2
llama_model_loader: - kv  27:             deepseek2.rope.dimension_count u32              = 64
llama_model_loader: - kv  28:                deepseek2.rope.scaling.type str              = yarn
llama_model_loader: - kv  29:              deepseek2.rope.scaling.factor f32              = 40.000000
llama_model_loader: - kv  30: deepseek2.rope.scaling.original_context_length u32              = 4096
llama_model_loader: - kv  31: deepseek2.rope.scaling.yarn_log_multiplier f32              = 0.100000
llama_model_loader: - kv  32:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  33:                         tokenizer.ggml.pre str              = deepseek-v3
llama_model_loader: - kv  34:                      tokenizer.ggml.tokens arr[str,129280]  = ["<|begin▁of▁sentence|>", "<�...
llama_model_loader: - kv  35:                  tokenizer.ggml.token_type arr[i32,129280]  = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  36:                      tokenizer.ggml.merges arr[str,127741]  = ["Ġ t", "Ġ a", "i n", "Ġ Ġ", "h e...
llama_model_loader: - kv  37:                tokenizer.ggml.bos_token_id u32              = 0
llama_model_loader: - kv  38:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  39:            tokenizer.ggml.padding_token_id u32              = 128815
llama_model_loader: - kv  40:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  41:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  42:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  43:               general.quantization_version u32              = 2
llama_model_loader: - kv  44:                          general.file_type u32              = 24
llama_model_loader: - kv  45:                      quantize.imatrix.file str              = DeepSeek-R1.imatrix
llama_model_loader: - kv  46:                   quantize.imatrix.dataset str              = /training_data/calibration_datav3.txt
llama_model_loader: - kv  47:             quantize.imatrix.entries_count i32              = 720
llama_model_loader: - kv  48:              quantize.imatrix.chunks_count i32              = 124
llama_model_loader: - kv  49:                                   split.no u16              = 0
llama_model_loader: - kv  50:                        split.tensors.count i32              = 1025
llama_model_loader: - kv  51:                                split.count u16              = 0
llama_model_loader: - type  f32:  361 tensors
llama_model_loader: - type q4_K:  190 tensors
llama_model_loader: - type q5_K:  116 tensors
llama_model_loader: - type q6_K:  184 tensors
llama_model_loader: - type iq2_xxs:    6 tensors
llama_model_loader: - type iq1_s:  168 tensors
llm_load_vocab: control token: 128811 '<|tool▁outputs▁end|>' is not marked as EOG
llm_load_vocab: control token: 128810 '<|tool▁outputs▁begin|>' is not marked as EOG
llm_load_vocab: control token: 128809 '<|tool▁call▁end|>' is not marked as EOG
llm_load_vocab: control token: 128808 '<|tool▁call▁begin|>' is not marked as EOG
llm_load_vocab: control token: 128806 '<|tool▁calls▁begin|>' is not marked as EOG
llm_load_vocab: control token: 128804 '<|Assistant|>' is not marked as EOG
llm_load_vocab: control token: 128803 '<|User|>' is not marked as EOG
llm_load_vocab: control token: 128796 '<|place▁holder▁no▁796|>' is not marked as EOG
llm_load_vocab: control token: 128795 '<|place▁holder▁no▁795|>' is not marked as EOG
llm_load_vocab: control token: 128792 '<|place▁holder▁no▁792|>' is not marked as EOG
llm_load_vocab: control token: 128791 '<|place▁holder▁no▁791|>' is not marked as EOG
llm_load_vocab: control token: 128790 '<|place▁holder▁no▁790|>' is not marked as EOG
llm_load_vocab: control token: 128789 '<|place▁holder▁no▁789|>' is not marked as EOG
llm_load_vocab: control token: 128787 '<|place▁holder▁no▁787|>' is not marked as EOG
llm_load_vocab: control token: 128785 '<|place▁holder▁no▁785|>' is not marked as EOG
llm_load_vocab: control token: 128784 '<|place▁holder▁no▁784|>' is not marked as EOG
llm_load_vocab: control token: 128783 '<|place▁holder▁no▁783|>' is not marked as EOG
llm_load_vocab: control token: 128782 '<|place▁holder▁no▁782|>' is not marked as EOG
llm_load_vocab: control token: 128778 '<|place▁holder▁no▁778|>' is not marked as EOG
llm_load_vocab: control token: 128775 '<|place▁holder▁no▁775|>' is not marked as EOG
llm_load_vocab: control token: 128774 '<|place▁holder▁no▁774|>' is not marked as EOG
llm_load_vocab: control token: 128772 '<|place▁holder▁no▁772|>' is not marked as EOG
llm_load_vocab: control token: 128770 '<|place▁holder▁no▁770|>' is not marked as EOG
llm_load_vocab: control token: 128769 '<|place▁holder▁no▁769|>' is not marked as EOG
llm_load_vocab: control token: 128765 '<|place▁holder▁no▁765|>' is not marked as EOG
llm_load_vocab: control token: 128764 '<|place▁holder▁no▁764|>' is not marked as EOG
llm_load_vocab: control token: 128763 '<|place▁holder▁no▁763|>' is not marked as EOG
llm_load_vocab: control token: 128761 '<|place▁holder▁no▁761|>' is not marked as EOG
llm_load_vocab: control token: 128758 '<|place▁holder▁no▁758|>' is not marked as EOG
llm_load_vocab: control token: 128756 '<|place▁holder▁no▁756|>' is not marked as EOG
llm_load_vocab: control token: 128754 '<|place▁holder▁no▁754|>' is not marked as EOG
llm_load_vocab: control token: 128751 '<|place▁holder▁no▁751|>' is not marked as EOG
llm_load_vocab: control token: 128750 '<|place▁holder▁no▁750|>' is not marked as EOG
llm_load_vocab: control token: 128748 '<|place▁holder▁no▁748|>' is not marked as EOG
llm_load_vocab: control token: 128746 '<|place▁holder▁no▁746|>' is not marked as EOG
llm_load_vocab: control token: 128745 '<|place▁holder▁no▁745|>' is not marked as EOG
llm_load_vocab: control token: 128743 '<|place▁holder▁no▁743|>' is not marked as EOG
llm_load_vocab: control token: 128739 '<|place▁holder▁no▁739|>' is not marked as EOG
llm_load_vocab: control token: 128738 '<|place▁holder▁no▁738|>' is not marked as EOG
llm_load_vocab: control token: 128736 '<|place▁holder▁no▁736|>' is not marked as EOG
llm_load_vocab: control token: 128734 '<|place▁holder▁no▁734|>' is not marked as EOG
llm_load_vocab: control token: 128733 '<|place▁holder▁no▁733|>' is not marked as EOG
llm_load_vocab: control token: 128732 '<|place▁holder▁no▁732|>' is not marked as EOG
llm_load_vocab: control token: 128731 '<|place▁holder▁no▁731|>' is not marked as EOG
llm_load_vocab: control token: 128726 '<|place▁holder▁no▁726|>' is not marked as EOG
llm_load_vocab: control token: 128724 '<|place▁holder▁no▁724|>' is not marked as EOG
llm_load_vocab: control token: 128723 '<|place▁holder▁no▁723|>' is not marked as EOG
llm_load_vocab: control token: 128721 '<|place▁holder▁no▁721|>' is not marked as EOG
llm_load_vocab: control token: 128720 '<|place▁holder▁no▁720|>' is not marked as EOG
llm_load_vocab: control token: 128718 '<|place▁holder▁no▁718|>' is not marked as EOG
llm_load_vocab: control token: 128717 '<|place▁holder▁no▁717|>' is not marked as EOG
llm_load_vocab: control token: 128715 '<|place▁holder▁no▁715|>' is not marked as EOG
llm_load_vocab: control token: 128714 '<|place▁holder▁no▁714|>' is not marked as EOG
llm_load_vocab: control token: 128713 '<|place▁holder▁no▁713|>' is not marked as EOG
llm_load_vocab: control token: 128712 '<|place▁holder▁no▁712|>' is not marked as EOG
llm_load_vocab: control token: 128710 '<|place▁holder▁no▁710|>' is not marked as EOG
llm_load_vocab: control token: 128709 '<|place▁holder▁no▁709|>' is not marked as EOG
llm_load_vocab: control token: 128706 '<|place▁holder▁no▁706|>' is not marked as EOG
llm_load_vocab: control token: 128704 '<|place▁holder▁no▁704|>' is not marked as EOG
llm_load_vocab: control token: 128702 '<|place▁holder▁no▁702|>' is not marked as EOG
llm_load_vocab: control token: 128701 '<|place▁holder▁no▁701|>' is not marked as EOG
llm_load_vocab: control token: 128700 '<|place▁holder▁no▁700|>' is not marked as EOG
llm_load_vocab: control token: 128699 '<|place▁holder▁no▁699|>' is not marked as EOG
llm_load_vocab: control token: 128697 '<|place▁holder▁no▁697|>' is not marked as EOG
llm_load_vocab: control token: 128696 '<|place▁holder▁no▁696|>' is not marked as EOG
llm_load_vocab: control token: 128694 '<|place▁holder▁no▁694|>' is not marked as EOG
llm_load_vocab: control token: 128691 '<|place▁holder▁no▁691|>' is not marked as EOG
llm_load_vocab: control token: 128690 '<|place▁holder▁no▁690|>' is not marked as EOG
llm_load_vocab: control token: 128689 '<|place▁holder▁no▁689|>' is not marked as EOG
llm_load_vocab: control token: 128688 '<|place▁holder▁no▁688|>' is not marked as EOG
llm_load_vocab: control token: 128687 '<|place▁holder▁no▁687|>' is not marked as EOG
llm_load_vocab: control token: 128686 '<|place▁holder▁no▁686|>' is not marked as EOG
llm_load_vocab: control token: 128685 '<|place▁holder▁no▁685|>' is not marked as EOG
llm_load_vocab: control token: 128682 '<|place▁holder▁no▁682|>' is not marked as EOG
llm_load_vocab: control token: 128681 '<|place▁holder▁no▁681|>' is not marked as EOG
llm_load_vocab: control token: 128678 '<|place▁holder▁no▁678|>' is not marked as EOG
llm_load_vocab: control token: 128674 '<|place▁holder▁no▁674|>' is not marked as EOG
llm_load_vocab: control token: 128672 '<|place▁holder▁no▁672|>' is not marked as EOG
llm_load_vocab: control token: 128670 '<|place▁holder▁no▁670|>' is not marked as EOG
llm_load_vocab: control token: 128668 '<|place▁holder▁no▁668|>' is not marked as EOG
llm_load_vocab: control token: 128666 '<|place▁holder▁no▁666|>' is not marked as EOG
llm_load_vocab: control token: 128665 '<|place▁holder▁no▁665|>' is not marked as EOG
llm_load_vocab: control token: 128664 '<|place▁holder▁no▁664|>' is not marked as EOG
llm_load_vocab: control token: 128662 '<|place▁holder▁no▁662|>' is not marked as EOG
llm_load_vocab: control token: 128659 '<|place▁holder▁no▁659|>' is not marked as EOG
llm_load_vocab: control token: 128658 '<|place▁holder▁no▁658|>' is not marked as EOG
llm_load_vocab: control token: 128657 '<|place▁holder▁no▁657|>' is not marked as EOG
llm_load_vocab: control token: 128655 '<|place▁holder▁no▁655|>' is not marked as EOG
llm_load_vocab: control token: 128653 '<|place▁holder▁no▁653|>' is not marked as EOG
llm_load_vocab: control token: 128649 '<|place▁holder▁no▁649|>' is not marked as EOG
llm_load_vocab: control token: 128646 '<|place▁holder▁no▁646|>' is not marked as EOG
llm_load_vocab: control token: 128644 '<|place▁holder▁no▁644|>' is not marked as EOG
llm_load_vocab: control token: 128643 '<|place▁holder▁no▁643|>' is not marked as EOG
llm_load_vocab: control token: 128642 '<|place▁holder▁no▁642|>' is not marked as EOG
llm_load_vocab: control token: 128639 '<|place▁holder▁no▁639|>' is not marked as EOG
llm_load_vocab: control token: 128636 '<|place▁holder▁no▁636|>' is not marked as EOG
llm_load_vocab: control token: 128634 '<|place▁holder▁no▁634|>' is not marked as EOG
llm_load_vocab: control token: 128630 '<|place▁holder▁no▁630|>' is not marked as EOG
llm_load_vocab: control token: 128628 '<|place▁holder▁no▁628|>' is not marked as EOG
llm_load_vocab: control token: 128627 '<|place▁holder▁no▁627|>' is not marked as EOG
llm_load_vocab: control token: 128626 '<|place▁holder▁no▁626|>' is not marked as EOG
llm_load_vocab: control token: 128624 '<|place▁holder▁no▁624|>' is not marked as EOG
llm_load_vocab: control token: 128623 '<|place▁holder▁no▁623|>' is not marked as EOG
llm_load_vocab: control token: 128622 '<|place▁holder▁no▁622|>' is not marked as EOG
llm_load_vocab: control token: 128621 '<|place▁holder▁no▁621|>' is not marked as EOG
llm_load_vocab: control token: 128618 '<|place▁holder▁no▁618|>' is not marked as EOG
llm_load_vocab: control token: 128617 '<|place▁holder▁no▁617|>' is not marked as EOG
llm_load_vocab: control token: 128616 '<|place▁holder▁no▁616|>' is not marked as EOG
llm_load_vocab: control token: 128615 '<|place▁holder▁no▁615|>' is not marked as EOG
llm_load_vocab: control token: 128614 '<|place▁holder▁no▁614|>' is not marked as EOG
llm_load_vocab: control token: 128609 '<|place▁holder▁no▁609|>' is not marked as EOG
llm_load_vocab: control token: 128605 '<|place▁holder▁no▁605|>' is not marked as EOG
llm_load_vocab: control token: 128603 '<|place▁holder▁no▁603|>' is not marked as EOG
llm_load_vocab: control token: 128601 '<|place▁holder▁no▁601|>' is not marked as EOG
llm_load_vocab: control token: 128600 '<|place▁holder▁no▁600|>' is not marked as EOG
llm_load_vocab: control token: 128597 '<|place▁holder▁no▁597|>' is not marked as EOG
llm_load_vocab: control token: 128596 '<|place▁holder▁no▁596|>' is not marked as EOG
llm_load_vocab: control token: 128595 '<|place▁holder▁no▁595|>' is not marked as EOG
llm_load_vocab: control token: 128594 '<|place▁holder▁no▁594|>' is not marked as EOG
llm_load_vocab: control token: 128591 '<|place▁holder▁no▁591|>' is not marked as EOG
llm_load_vocab: control token: 128589 '<|place▁holder▁no▁589|>' is not marked as EOG
llm_load_vocab: control token: 128588 '<|place▁holder▁no▁588|>' is not marked as EOG
llm_load_vocab: control token: 128587 '<|place▁holder▁no▁587|>' is not marked as EOG
llm_load_vocab: control token: 128583 '<|place▁holder▁no▁583|>' is not marked as EOG
llm_load_vocab: control token: 128582 '<|place▁holder▁no▁582|>' is not marked as EOG
llm_load_vocab: control token: 128581 '<|place▁holder▁no▁581|>' is not marked as EOG
llm_load_vocab: control token: 128579 '<|place▁holder▁no▁579|>' is not marked as EOG
llm_load_vocab: control token: 128578 '<|place▁holder▁no▁578|>' is not marked as EOG
llm_load_vocab: control token: 128575 '<|place▁holder▁no▁575|>' is not marked as EOG
llm_load_vocab: control token: 128574 '<|place▁holder▁no▁574|>' is not marked as EOG
llm_load_vocab: control token: 128569 '<|place▁holder▁no▁569|>' is not marked as EOG
llm_load_vocab: control token: 128566 '<|place▁holder▁no▁566|>' is not marked as EOG
llm_load_vocab: control token: 128564 '<|place▁holder▁no▁564|>' is not marked as EOG
llm_load_vocab: control token: 128563 '<|place▁holder▁no▁563|>' is not marked as EOG
llm_load_vocab: control token: 128562 '<|place▁holder▁no▁562|>' is not marked as EOG
llm_load_vocab: control token: 128560 '<|place▁holder▁no▁560|>' is not marked as EOG
llm_load_vocab: control token: 128559 '<|place▁holder▁no▁559|>' is not marked as EOG
llm_load_vocab: control token: 128558 '<|place▁holder▁no▁558|>' is not marked as EOG
llm_load_vocab: control token: 128555 '<|place▁holder▁no▁555|>' is not marked as EOG
llm_load_vocab: control token: 128553 '<|place▁holder▁no▁553|>' is not marked as EOG
llm_load_vocab: control token: 128552 '<|place▁holder▁no▁552|>' is not marked as EOG
llm_load_vocab: control token: 128551 '<|place▁holder▁no▁551|>' is not marked as EOG
llm_load_vocab: control token: 128550 '<|place▁holder▁no▁550|>' is not marked as EOG
llm_load_vocab: control token: 128549 '<|place▁holder▁no▁549|>' is not marked as EOG
llm_load_vocab: control token: 128548 '<|place▁holder▁no▁548|>' is not marked as EOG
llm_load_vocab: control token: 128546 '<|place▁holder▁no▁546|>' is not marked as EOG
llm_load_vocab: control token: 128544 '<|place▁holder▁no▁544|>' is not marked as EOG
llm_load_vocab: control token: 128543 '<|place▁holder▁no▁543|>' is not marked as EOG
llm_load_vocab: control token: 128542 '<|place▁holder▁no▁542|>' is not marked as EOG
llm_load_vocab: control token: 128541 '<|place▁holder▁no▁541|>' is not marked as EOG
llm_load_vocab: control token: 128539 '<|place▁holder▁no▁539|>' is not marked as EOG
llm_load_vocab: control token: 128538 '<|place▁holder▁no▁538|>' is not marked as EOG
llm_load_vocab: control token: 128537 '<|place▁holder▁no▁537|>' is not marked as EOG
llm_load_vocab: control token: 128536 '<|place▁holder▁no▁536|>' is not marked as EOG
llm_load_vocab: control token: 128535 '<|place▁holder▁no▁535|>' is not marked as EOG
llm_load_vocab: control token: 128533 '<|place▁holder▁no▁533|>' is not marked as EOG
llm_load_vocab: control token: 128532 '<|place▁holder▁no▁532|>' is not marked as EOG
llm_load_vocab: control token: 128531 '<|place▁holder▁no▁531|>' is not marked as EOG
llm_load_vocab: control token: 128530 '<|place▁holder▁no▁530|>' is not marked as EOG
llm_load_vocab: control token: 128528 '<|place▁holder▁no▁528|>' is not marked as EOG
llm_load_vocab: control token: 128527 '<|place▁holder▁no▁527|>' is not marked as EOG
llm_load_vocab: control token: 128525 '<|place▁holder▁no▁525|>' is not marked as EOG
llm_load_vocab: control token: 128523 '<|place▁holder▁no▁523|>' is not marked as EOG
llm_load_vocab: control token: 128522 '<|place▁holder▁no▁522|>' is not marked as EOG
llm_load_vocab: control token: 128521 '<|place▁holder▁no▁521|>' is not marked as EOG
llm_load_vocab: control token: 128517 '<|place▁holder▁no▁517|>' is not marked as EOG
llm_load_vocab: control token: 128515 '<|place▁holder▁no▁515|>' is not marked as EOG
llm_load_vocab: control token: 128514 '<|place▁holder▁no▁514|>' is not marked as EOG
llm_load_vocab: control token: 128513 '<|place▁holder▁no▁513|>' is not marked as EOG
llm_load_vocab: control token: 128509 '<|place▁holder▁no▁509|>' is not marked as EOG
llm_load_vocab: control token: 128508 '<|place▁holder▁no▁508|>' is not marked as EOG
llm_load_vocab: control token: 128505 '<|place▁holder▁no▁505|>' is not marked as EOG
llm_load_vocab: control token: 128504 '<|place▁holder▁no▁504|>' is not marked as EOG
llm_load_vocab: control token: 128501 '<|place▁holder▁no▁501|>' is not marked as EOG
llm_load_vocab: control token: 128498 '<|place▁holder▁no▁498|>' is not marked as EOG
llm_load_vocab: control token: 128492 '<|place▁holder▁no▁492|>' is not marked as EOG
llm_load_vocab: control token: 128491 '<|place▁holder▁no▁491|>' is not marked as EOG
llm_load_vocab: control token: 128488 '<|place▁holder▁no▁488|>' is not marked as EOG
llm_load_vocab: control token: 128487 '<|place▁holder▁no▁487|>' is not marked as EOG
llm_load_vocab: control token: 128486 '<|place▁holder▁no▁486|>' is not marked as EOG
llm_load_vocab: control token: 128484 '<|place▁holder▁no▁484|>' is not marked as EOG
llm_load_vocab: control token: 128483 '<|place▁holder▁no▁483|>' is not marked as EOG
llm_load_vocab: control token: 128481 '<|place▁holder▁no▁481|>' is not marked as EOG
llm_load_vocab: control token: 128480 '<|place▁holder▁no▁480|>' is not marked as EOG
llm_load_vocab: control token: 128478 '<|place▁holder▁no▁478|>' is not marked as EOG
llm_load_vocab: control token: 128476 '<|place▁holder▁no▁476|>' is not marked as EOG
llm_load_vocab: control token: 128475 '<|place▁holder▁no▁475|>' is not marked as EOG
llm_load_vocab: control token: 128474 '<|place▁holder▁no▁474|>' is not marked as EOG
llm_load_vocab: control token: 128473 '<|place▁holder▁no▁473|>' is not marked as EOG
llm_load_vocab: control token: 128470 '<|place▁holder▁no▁470|>' is not marked as EOG
llm_load_vocab: control token: 128469 '<|place▁holder▁no▁469|>' is not marked as EOG
llm_load_vocab: control token: 128468 '<|place▁holder▁no▁468|>' is not marked as EOG
llm_load_vocab: control token: 128466 '<|place▁holder▁no▁466|>' is not marked as EOG
llm_load_vocab: control token: 128463 '<|place▁holder▁no▁463|>' is not marked as EOG
llm_load_vocab: control token: 128462 '<|place▁holder▁no▁462|>' is not marked as EOG
llm_load_vocab: control token: 128460 '<|place▁holder▁no▁460|>' is not marked as EOG
llm_load_vocab: control token: 128458 '<|place▁holder▁no▁458|>' is not marked as EOG
llm_load_vocab: control token: 128457 '<|place▁holder▁no▁457|>' is not marked as EOG
llm_load_vocab: control token: 128453 '<|place▁holder▁no▁453|>' is not marked as EOG
llm_load_vocab: control token: 128452 '<|place▁holder▁no▁452|>' is not marked as EOG
llm_load_vocab: control token: 128450 '<|place▁holder▁no▁450|>' is not marked as EOG
llm_load_vocab: control token: 128449 '<|place▁holder▁no▁449|>' is not marked as EOG
llm_load_vocab: control token: 128448 '<|place▁holder▁no▁448|>' is not marked as EOG
llm_load_vocab: control token: 128447 '<|place▁holder▁no▁447|>' is not marked as EOG
llm_load_vocab: control token: 128444 '<|place▁holder▁no▁444|>' is not marked as EOG
llm_load_vocab: control token: 128443 '<|place▁holder▁no▁443|>' is not marked as EOG
llm_load_vocab: control token: 128442 '<|place▁holder▁no▁442|>' is not marked as EOG
llm_load_vocab: control token: 128436 '<|place▁holder▁no▁436|>' is not marked as EOG
llm_load_vocab: control token: 128435 '<|place▁holder▁no▁435|>' is not marked as EOG
llm_load_vocab: control token: 128432 '<|place▁holder▁no▁432|>' is not marked as EOG
llm_load_vocab: control token: 128429 '<|place▁holder▁no▁429|>' is not marked as EOG
llm_load_vocab: control token: 128425 '<|place▁holder▁no▁425|>' is not marked as EOG
llm_load_vocab: control token: 128422 '<|place▁holder▁no▁422|>' is not marked as EOG
llm_load_vocab: control token: 128419 '<|place▁holder▁no▁419|>' is not marked as EOG
llm_load_vocab: control token: 128418 '<|place▁holder▁no▁418|>' is not marked as EOG
llm_load_vocab: control token: 128417 '<|place▁holder▁no▁417|>' is not marked as EOG
llm_load_vocab: control token: 128416 '<|place▁holder▁no▁416|>' is not marked as EOG
llm_load_vocab: control token: 128414 '<|place▁holder▁no▁414|>' is not marked as EOG
llm_load_vocab: control token: 128413 '<|place▁holder▁no▁413|>' is not marked as EOG
llm_load_vocab: control token: 128411 '<|place▁holder▁no▁411|>' is not marked as EOG
llm_load_vocab: control token: 128408 '<|place▁holder▁no▁408|>' is not marked as EOG
llm_load_vocab: control token: 128407 '<|place▁holder▁no▁407|>' is not marked as EOG
llm_load_vocab: control token: 128406 '<|place▁holder▁no▁406|>' is not marked as EOG
llm_load_vocab: control token: 128405 '<|place▁holder▁no▁405|>' is not marked as EOG
llm_load_vocab: control token: 128404 '<|place▁holder▁no▁404|>' is not marked as EOG
llm_load_vocab: control token: 128402 '<|place▁holder▁no▁402|>' is not marked as EOG
llm_load_vocab: control token: 128401 '<|place▁holder▁no▁401|>' is not marked as EOG
llm_load_vocab: control token: 128400 '<|place▁holder▁no▁400|>' is not marked as EOG
llm_load_vocab: control token: 128399 '<|place▁holder▁no▁399|>' is not marked as EOG
llm_load_vocab: control token: 128398 '<|place▁holder▁no▁398|>' is not marked as EOG
llm_load_vocab: control token: 128397 '<|place▁holder▁no▁397|>' is not marked as EOG
llm_load_vocab: control token: 128396 '<|place▁holder▁no▁396|>' is not marked as EOG
llm_load_vocab: control token: 128395 '<|place▁holder▁no▁395|>' is not marked as EOG
llm_load_vocab: control token: 128393 '<|place▁holder▁no▁393|>' is not marked as EOG
llm_load_vocab: control token: 128392 '<|place▁holder▁no▁392|>' is not marked as EOG
llm_load_vocab: control token: 128391 '<|place▁holder▁no▁391|>' is not marked as EOG
llm_load_vocab: control token: 128385 '<|place▁holder▁no▁385|>' is not marked as EOG
llm_load_vocab: control token: 128384 '<|place▁holder▁no▁384|>' is not marked as EOG
llm_load_vocab: control token: 128382 '<|place▁holder▁no▁382|>' is not marked as EOG
llm_load_vocab: control token: 128378 '<|place▁holder▁no▁378|>' is not marked as EOG
llm_load_vocab: control token: 128376 '<|place▁holder▁no▁376|>' is not marked as EOG
llm_load_vocab: control token: 128375 '<|place▁holder▁no▁375|>' is not marked as EOG
llm_load_vocab: control token: 128374 '<|place▁holder▁no▁374|>' is not marked as EOG
llm_load_vocab: control token: 128372 '<|place▁holder▁no▁372|>' is not marked as EOG
llm_load_vocab: control token: 128369 '<|place▁holder▁no▁369|>' is not marked as EOG
llm_load_vocab: control token: 128364 '<|place▁holder▁no▁364|>' is not marked as EOG
llm_load_vocab: control token: 128363 '<|place▁holder▁no▁363|>' is not marked as EOG
llm_load_vocab: control token: 128361 '<|place▁holder▁no▁361|>' is not marked as EOG
llm_load_vocab: control token: 128359 '<|place▁holder▁no▁359|>' is not marked as EOG
llm_load_vocab: control token: 128358 '<|place▁holder▁no▁358|>' is not marked as EOG
llm_load_vocab: control token: 128355 '<|place▁holder▁no▁355|>' is not marked as EOG
llm_load_vocab: control token: 128353 '<|place▁holder▁no▁353|>' is not marked as EOG
llm_load_vocab: control token: 128352 '<|place▁holder▁no▁352|>' is not marked as EOG
llm_load_vocab: control token: 128349 '<|place▁holder▁no▁349|>' is not marked as EOG
llm_load_vocab: control token: 128348 '<|place▁holder▁no▁348|>' is not marked as EOG
llm_load_vocab: control token: 128347 '<|place▁holder▁no▁347|>' is not marked as EOG
llm_load_vocab: control token: 128344 '<|place▁holder▁no▁344|>' is not marked as EOG
llm_load_vocab: control token: 128343 '<|place▁holder▁no▁343|>' is not marked as EOG
llm_load_vocab: control token: 128340 '<|place▁holder▁no▁340|>' is not marked as EOG
llm_load_vocab: control token: 128338 '<|place▁holder▁no▁338|>' is not marked as EOG
llm_load_vocab: control token: 128333 '<|place▁holder▁no▁333|>' is not marked as EOG
llm_load_vocab: control token: 128332 '<|place▁holder▁no▁332|>' is not marked as EOG
llm_load_vocab: control token: 128329 '<|place▁holder▁no▁329|>' is not marked as EOG
llm_load_vocab: control token: 128327 '<|place▁holder▁no▁327|>' is not marked as EOG
llm_load_vocab: control token: 128325 '<|place▁holder▁no▁325|>' is not marked as EOG
llm_load_vocab: control token: 128322 '<|place▁holder▁no▁322|>' is not marked as EOG
llm_load_vocab: control token: 128321 '<|place▁holder▁no▁321|>' is not marked as EOG
llm_load_vocab: control token: 128320 '<|place▁holder▁no▁320|>' is not marked as EOG
llm_load_vocab: control token: 128319 '<|place▁holder▁no▁319|>' is not marked as EOG
llm_load_vocab: control token: 128316 '<|place▁holder▁no▁316|>' is not marked as EOG
llm_load_vocab: control token: 128314 '<|place▁holder▁no▁314|>' is not marked as EOG
llm_load_vocab: control token: 128313 '<|place▁holder▁no▁313|>' is not marked as EOG
llm_load_vocab: control token: 128311 '<|place▁holder▁no▁311|>' is not marked as EOG
llm_load_vocab: control token: 128309 '<|place▁holder▁no▁309|>' is not marked as EOG
llm_load_vocab: control token: 128308 '<|place▁holder▁no▁308|>' is not marked as EOG
llm_load_vocab: control token: 128306 '<|place▁holder▁no▁306|>' is not marked as EOG
llm_load_vocab: control token: 128303 '<|place▁holder▁no▁303|>' is not marked as EOG
llm_load_vocab: control token: 128298 '<|place▁holder▁no▁298|>' is not marked as EOG
llm_load_vocab: control token: 128297 '<|place▁holder▁no▁297|>' is not marked as EOG
llm_load_vocab: control token: 128296 '<|place▁holder▁no▁296|>' is not marked as EOG
llm_load_vocab: control token: 128295 '<|place▁holder▁no▁295|>' is not marked as EOG
llm_load_vocab: control token: 128293 '<|place▁holder▁no▁293|>' is not marked as EOG
llm_load_vocab: control token: 128292 '<|place▁holder▁no▁292|>' is not marked as EOG
llm_load_vocab: control token: 128291 '<|place▁holder▁no▁291|>' is not marked as EOG
llm_load_vocab: control token: 128290 '<|place▁holder▁no▁290|>' is not marked as EOG
llm_load_vocab: control token: 128287 '<|place▁holder▁no▁287|>' is not marked as EOG
llm_load_vocab: control token: 128284 '<|place▁holder▁no▁284|>' is not marked as EOG
llm_load_vocab: control token: 128283 '<|place▁holder▁no▁283|>' is not marked as EOG
llm_load_vocab: control token: 128282 '<|place▁holder▁no▁282|>' is not marked as EOG
llm_load_vocab: control token: 128279 '<|place▁holder▁no▁279|>' is not marked as EOG
llm_load_vocab: control token: 128273 '<|place▁holder▁no▁273|>' is not marked as EOG
llm_load_vocab: control token: 128271 '<|place▁holder▁no▁271|>' is not marked as EOG
llm_load_vocab: control token: 128268 '<|place▁holder▁no▁268|>' is not marked as EOG
llm_load_vocab: control token: 128266 '<|place▁holder▁no▁266|>' is not marked as EOG
llm_load_vocab: control token: 128265 '<|place▁holder▁no▁265|>' is not marked as EOG
llm_load_vocab: control token: 128263 '<|place▁holder▁no▁263|>' is not marked as EOG
llm_load_vocab: control token: 128262 '<|place▁holder▁no▁262|>' is not marked as EOG
llm_load_vocab: control token: 128260 '<|place▁holder▁no▁260|>' is not marked as EOG
llm_load_vocab: control token: 128259 '<|place▁holder▁no▁259|>' is not marked as EOG
llm_load_vocab: control token: 128258 '<|place▁holder▁no▁258|>' is not marked as EOG
llm_load_vocab: control token: 128257 '<|place▁holder▁no▁257|>' is not marked as EOG
llm_load_vocab: control token: 128251 '<|place▁holder▁no▁251|>' is not marked as EOG
llm_load_vocab: control token: 128248 '<|place▁holder▁no▁248|>' is not marked as EOG
llm_load_vocab: control token: 128243 '<|place▁holder▁no▁243|>' is not marked as EOG
llm_load_vocab: control token: 128242 '<|place▁holder▁no▁242|>' is not marked as EOG
llm_load_vocab: control token: 128239 '<|place▁holder▁no▁239|>' is not marked as EOG
llm_load_vocab: control token: 128238 '<|place▁holder▁no▁238|>' is not marked as EOG
llm_load_vocab: control token: 128235 '<|place▁holder▁no▁235|>' is not marked as EOG
llm_load_vocab: control token: 128234 '<|place▁holder▁no▁234|>' is not marked as EOG
llm_load_vocab: control token: 128233 '<|place▁holder▁no▁233|>' is not marked as EOG
llm_load_vocab: control token: 128232 '<|place▁holder▁no▁232|>' is not marked as EOG
llm_load_vocab: control token: 128228 '<|place▁holder▁no▁228|>' is not marked as EOG
llm_load_vocab: control token: 128227 '<|place▁holder▁no▁227|>' is not marked as EOG
llm_load_vocab: control token: 128226 '<|place▁holder▁no▁226|>' is not marked as EOG
llm_load_vocab: control token: 128225 '<|place▁holder▁no▁225|>' is not marked as EOG
llm_load_vocab: control token: 128224 '<|place▁holder▁no▁224|>' is not marked as EOG
llm_load_vocab: control token: 128222 '<|place▁holder▁no▁222|>' is not marked as EOG
llm_load_vocab: control token: 128221 '<|place▁holder▁no▁221|>' is not marked as EOG
llm_load_vocab: control token: 128220 '<|place▁holder▁no▁220|>' is not marked as EOG
llm_load_vocab: control token: 128219 '<|place▁holder▁no▁219|>' is not marked as EOG
llm_load_vocab: control token: 128215 '<|place▁holder▁no▁215|>' is not marked as EOG
llm_load_vocab: control token: 128213 '<|place▁holder▁no▁213|>' is not marked as EOG
llm_load_vocab: control token: 128212 '<|place▁holder▁no▁212|>' is not marked as EOG
llm_load_vocab: control token: 128211 '<|place▁holder▁no▁211|>' is not marked as EOG
llm_load_vocab: control token: 128209 '<|place▁holder▁no▁209|>' is not marked as EOG
llm_load_vocab: control token: 128208 '<|place▁holder▁no▁208|>' is not marked as EOG
llm_load_vocab: control token: 128207 '<|place▁holder▁no▁207|>' is not marked as EOG
llm_load_vocab: control token: 128206 '<|place▁holder▁no▁206|>' is not marked as EOG
llm_load_vocab: control token: 128202 '<|place▁holder▁no▁202|>' is not marked as EOG
llm_load_vocab: control token: 128201 '<|place▁holder▁no▁201|>' is not marked as EOG
llm_load_vocab: control token: 128200 '<|place▁holder▁no▁200|>' is not marked as EOG
llm_load_vocab: control token: 128198 '<|place▁holder▁no▁198|>' is not marked as EOG
llm_load_vocab: control token: 128197 '<|place▁holder▁no▁197|>' is not marked as EOG
llm_load_vocab: control token: 128195 '<|place▁holder▁no▁195|>' is not marked as EOG
llm_load_vocab: control token: 128194 '<|place▁holder▁no▁194|>' is not marked as EOG
llm_load_vocab: control token: 128192 '<|place▁holder▁no▁192|>' is not marked as EOG
llm_load_vocab: control token: 128188 '<|place▁holder▁no▁188|>' is not marked as EOG
llm_load_vocab: control token: 128187 '<|place▁holder▁no▁187|>' is not marked as EOG
llm_load_vocab: control token: 128185 '<|place▁holder▁no▁185|>' is not marked as EOG
llm_load_vocab: control token: 128184 '<|place▁holder▁no▁184|>' is not marked as EOG
llm_load_vocab: control token: 128183 '<|place▁holder▁no▁183|>' is not marked as EOG
llm_load_vocab: control token: 128181 '<|place▁holder▁no▁181|>' is not marked as EOG
llm_load_vocab: control token: 128180 '<|place▁holder▁no▁180|>' is not marked as EOG
llm_load_vocab: control token: 128178 '<|place▁holder▁no▁178|>' is not marked as EOG
llm_load_vocab: control token: 128176 '<|place▁holder▁no▁176|>' is not marked as EOG
llm_load_vocab: control token: 128174 '<|place▁holder▁no▁174|>' is not marked as EOG
llm_load_vocab: control token: 128173 '<|place▁holder▁no▁173|>' is not marked as EOG
llm_load_vocab: control token: 128171 '<|place▁holder▁no▁171|>' is not marked as EOG
llm_load_vocab: control token: 128170 '<|place▁holder▁no▁170|>' is not marked as EOG
llm_load_vocab: control token: 128166 '<|place▁holder▁no▁166|>' is not marked as EOG
llm_load_vocab: control token: 128159 '<|place▁holder▁no▁159|>' is not marked as EOG
llm_load_vocab: control token: 128158 '<|place▁holder▁no▁158|>' is not marked as EOG
llm_load_vocab: control token: 128155 '<|place▁holder▁no▁155|>' is not marked as EOG
llm_load_vocab: control token: 128152 '<|place▁holder▁no▁152|>' is not marked as EOG
llm_load_vocab: control token: 128151 '<|place▁holder▁no▁151|>' is not marked as EOG
llm_load_vocab: control token: 128149 '<|place▁holder▁no▁149|>' is not marked as EOG
llm_load_vocab: control token: 128147 '<|place▁holder▁no▁147|>' is not marked as EOG
llm_load_vocab: control token: 128146 '<|place▁holder▁no▁146|>' is not marked as EOG
llm_load_vocab: control token: 128144 '<|place▁holder▁no▁144|>' is not marked as EOG
llm_load_vocab: control token: 128142 '<|place▁holder▁no▁142|>' is not marked as EOG
llm_load_vocab: control token: 128141 '<|place▁holder▁no▁141|>' is not marked as EOG
llm_load_vocab: control token: 128140 '<|place▁holder▁no▁140|>' is not marked as EOG
llm_load_vocab: control token: 128139 '<|place▁holder▁no▁139|>' is not marked as EOG
llm_load_vocab: control token: 128137 '<|place▁holder▁no▁137|>' is not marked as EOG
llm_load_vocab: control token: 128136 '<|place▁holder▁no▁136|>' is not marked as EOG
llm_load_vocab: control token: 128135 '<|place▁holder▁no▁135|>' is not marked as EOG
llm_load_vocab: control token: 128134 '<|place▁holder▁no▁134|>' is not marked as EOG
llm_load_vocab: control token: 128132 '<|place▁holder▁no▁132|>' is not marked as EOG
llm_load_vocab: control token: 128131 '<|place▁holder▁no▁131|>' is not marked as EOG
llm_load_vocab: control token: 128130 '<|place▁holder▁no▁130|>' is not marked as EOG
llm_load_vocab: control token: 128127 '<|place▁holder▁no▁127|>' is not marked as EOG
llm_load_vocab: control token: 128125 '<|place▁holder▁no▁125|>' is not marked as EOG
llm_load_vocab: control token: 128124 '<|place▁holder▁no▁124|>' is not marked as EOG
llm_load_vocab: control token: 128123 '<|place▁holder▁no▁123|>' is not marked as EOG
llm_load_vocab: control token: 128122 '<|place▁holder▁no▁122|>' is not marked as EOG
llm_load_vocab: control token: 128120 '<|place▁holder▁no▁120|>' is not marked as EOG
llm_load_vocab: control token: 128119 '<|place▁holder▁no▁119|>' is not marked as EOG
llm_load_vocab: control token: 128116 '<|place▁holder▁no▁116|>' is not marked as EOG
llm_load_vocab: control token: 128115 '<|place▁holder▁no▁115|>' is not marked as EOG
llm_load_vocab: control token: 128113 '<|place▁holder▁no▁113|>' is not marked as EOG
llm_load_vocab: control token: 128110 '<|place▁holder▁no▁110|>' is not marked as EOG
llm_load_vocab: control token: 128109 '<|place▁holder▁no▁109|>' is not marked as EOG
llm_load_vocab: control token: 128106 '<|place▁holder▁no▁106|>' is not marked as EOG
llm_load_vocab: control token: 128104 '<|place▁holder▁no▁104|>' is not marked as EOG
llm_load_vocab: control token: 128102 '<|place▁holder▁no▁102|>' is not marked as EOG
llm_load_vocab: control token: 128101 '<|place▁holder▁no▁101|>' is not marked as EOG
llm_load_vocab: control token: 128099 '<|place▁holder▁no▁99|>' is not marked as EOG
llm_load_vocab: control token: 128098 '<|place▁holder▁no▁98|>' is not marked as EOG
llm_load_vocab: control token: 128095 '<|place▁holder▁no▁95|>' is not marked as EOG
llm_load_vocab: control token: 128091 '<|place▁holder▁no▁91|>' is not marked as EOG
llm_load_vocab: control token: 128088 '<|place▁holder▁no▁88|>' is not marked as EOG
llm_load_vocab: control token: 128087 '<|place▁holder▁no▁87|>' is not marked as EOG
llm_load_vocab: control token: 128085 '<|place▁holder▁no▁85|>' is not marked as EOG
llm_load_vocab: control token: 128084 '<|place▁holder▁no▁84|>' is not marked as EOG
llm_load_vocab: control token: 128082 '<|place▁holder▁no▁82|>' is not marked as EOG
llm_load_vocab: control token: 128081 '<|place▁holder▁no▁81|>' is not marked as EOG
llm_load_vocab: control token: 128080 '<|place▁holder▁no▁80|>' is not marked as EOG
llm_load_vocab: control token: 128079 '<|place▁holder▁no▁79|>' is not marked as EOG
llm_load_vocab: control token: 128076 '<|place▁holder▁no▁76|>' is not marked as EOG
llm_load_vocab: control token: 128075 '<|place▁holder▁no▁75|>' is not marked as EOG
llm_load_vocab: control token: 128072 '<|place▁holder▁no▁72|>' is not marked as EOG
llm_load_vocab: control token: 128071 '<|place▁holder▁no▁71|>' is not marked as EOG
llm_load_vocab: control token: 128069 '<|place▁holder▁no▁69|>' is not marked as EOG
llm_load_vocab: control token: 128067 '<|place▁holder▁no▁67|>' is not marked as EOG
llm_load_vocab: control token: 128065 '<|place▁holder▁no▁65|>' is not marked as EOG
llm_load_vocab: control token: 128064 '<|place▁holder▁no▁64|>' is not marked as EOG
llm_load_vocab: control token: 128063 '<|place▁holder▁no▁63|>' is not marked as EOG
llm_load_vocab: control token: 128060 '<|place▁holder▁no▁60|>' is not marked as EOG
llm_load_vocab: control token: 128059 '<|place▁holder▁no▁59|>' is not marked as EOG
llm_load_vocab: control token: 128058 '<|place▁holder▁no▁58|>' is not marked as EOG
llm_load_vocab: control token: 128057 '<|place▁holder▁no▁57|>' is not marked as EOG
llm_load_vocab: control token: 128056 '<|place▁holder▁no▁56|>' is not marked as EOG
llm_load_vocab: control token: 128055 '<|place▁holder▁no▁55|>' is not marked as EOG
llm_load_vocab: control token: 128054 '<|place▁holder▁no▁54|>' is not marked as EOG
llm_load_vocab: control token: 128052 '<|place▁holder▁no▁52|>' is not marked as EOG
llm_load_vocab: control token: 128051 '<|place▁holder▁no▁51|>' is not marked as EOG
llm_load_vocab: control token: 128050 '<|place▁holder▁no▁50|>' is not marked as EOG
llm_load_vocab: control token: 128049 '<|place▁holder▁no▁49|>' is not marked as EOG
llm_load_vocab: control token: 128048 '<|place▁holder▁no▁48|>' is not marked as EOG
llm_load_vocab: control token: 128047 '<|place▁holder▁no▁47|>' is not marked as EOG
llm_load_vocab: control token: 128046 '<|place▁holder▁no▁46|>' is not marked as EOG
llm_load_vocab: control token: 128043 '<|place▁holder▁no▁43|>' is not marked as EOG
llm_load_vocab: control token: 128041 '<|place▁holder▁no▁41|>' is not marked as EOG
llm_load_vocab: control token: 128037 '<|place▁holder▁no▁37|>' is not marked as EOG
llm_load_vocab: control token: 128036 '<|place▁holder▁no▁36|>' is not marked as EOG
llm_load_vocab: control token: 128034 '<|place▁holder▁no▁34|>' is not marked as EOG
llm_load_vocab: control token: 128030 '<|place▁holder▁no▁30|>' is not marked as EOG
llm_load_vocab: control token: 128028 '<|place▁holder▁no▁28|>' is not marked as EOG
llm_load_vocab: control token: 128024 '<|place▁holder▁no▁24|>' is not marked as EOG
llm_load_vocab: control token: 128022 '<|place▁holder▁no▁22|>' is not marked as EOG
llm_load_vocab: control token: 128020 '<|place▁holder▁no▁20|>' is not marked as EOG
llm_load_vocab: control token: 128019 '<|place▁holder▁no▁19|>' is not marked as EOG
llm_load_vocab: control token: 128017 '<|place▁holder▁no▁17|>' is not marked as EOG
llm_load_vocab: control token: 128014 '<|place▁holder▁no▁14|>' is not marked as EOG
llm_load_vocab: control token: 128013 '<|place▁holder▁no▁13|>' is not marked as EOG
llm_load_vocab: control token: 128012 '<|place▁holder▁no▁12|>' is not marked as EOG
llm_load_vocab: control token: 128010 '<|place▁holder▁no▁10|>' is not marked as EOG
llm_load_vocab: control token: 128009 '<|place▁holder▁no▁9|>' is not marked as EOG
llm_load_vocab: control token: 128008 '<|place▁holder▁no▁8|>' is not marked as EOG
llm_load_vocab: control token: 128006 '<|place▁holder▁no▁6|>' is not marked as EOG
llm_load_vocab: control token: 128001 '<|place▁holder▁no▁1|>' is not marked as EOG
llm_load_vocab: control token: 128651 '<|place▁holder▁no▁651|>' is not marked as EOG
llm_load_vocab: control token: 128373 '<|place▁holder▁no▁373|>' is not marked as EOG
llm_load_vocab: control token: 128801 '<|fim▁begin|>' is not marked as EOG
llm_load_vocab: control token: 128472 '<|place▁holder▁no▁472|>' is not marked as EOG
llm_load_vocab: control token: 128114 '<|place▁holder▁no▁114|>' is not marked as EOG
llm_load_vocab: control token: 128294 '<|place▁holder▁no▁294|>' is not marked as EOG
llm_load_vocab: control token: 128317 '<|place▁holder▁no▁317|>' is not marked as EOG
llm_load_vocab: control token: 128026 '<|place▁holder▁no▁26|>' is not marked as EOG
llm_load_vocab: control token: 128729 '<|place▁holder▁no▁729|>' is not marked as EOG
llm_load_vocab: control token: 128557 '<|place▁holder▁no▁557|>' is not marked as EOG
llm_load_vocab: control token: 128339 '<|place▁holder▁no▁339|>' is not marked as EOG
llm_load_vocab: control token: 128797 '<|place▁holder▁no▁797|>' is not marked as EOG
llm_load_vocab: control token: 128237 '<|place▁holder▁no▁237|>' is not marked as EOG
llm_load_vocab: control token: 128086 '<|place▁holder▁no▁86|>' is not marked as EOG
llm_load_vocab: control token: 128625 '<|place▁holder▁no▁625|>' is not marked as EOG
llm_load_vocab: control token: 128716 '<|place▁holder▁no▁716|>' is not marked as EOG
llm_load_vocab: control token: 128420 '<|place▁holder▁no▁420|>' is not marked as EOG
llm_load_vocab: control token: 128236 '<|place▁holder▁no▁236|>' is not marked as EOG
llm_load_vocab: control token: 128727 '<|place▁holder▁no▁727|>' is not marked as EOG
llm_load_vocab: control token: 128150 '<|place▁holder▁no▁150|>' is not marked as EOG
llm_load_vocab: control token: 128465 '<|place▁holder▁no▁465|>' is not marked as EOG
llm_load_vocab: control token: 128760 '<|place▁holder▁no▁760|>' is not marked as EOG
llm_load_vocab: control token: 128461 '<|place▁holder▁no▁461|>' is not marked as EOG
llm_load_vocab: control token: 128451 '<|place▁holder▁no▁451|>' is not marked as EOG
llm_load_vocab: control token: 128534 '<|place▁holder▁no▁534|>' is not marked as EOG
llm_load_vocab: control token: 128346 '<|place▁holder▁no▁346|>' is not marked as EOG
llm_load_vocab: control token: 128759 '<|place▁holder▁no▁759|>' is not marked as EOG
llm_load_vocab: control token: 128602 '<|place▁holder▁no▁602|>' is not marked as EOG
llm_load_vocab: control token: 128383 '<|place▁holder▁no▁383|>' is not marked as EOG
llm_load_vocab: control token: 128053 '<|place▁holder▁no▁53|>' is not marked as EOG
llm_load_vocab: control token: 128794 '<|place▁holder▁no▁794|>' is not marked as EOG
llm_load_vocab: control token: 128755 '<|place▁holder▁no▁755|>' is not marked as EOG
llm_load_vocab: control token: 128631 '<|place▁holder▁no▁631|>' is not marked as EOG
llm_load_vocab: control token: 128692 '<|place▁holder▁no▁692|>' is not marked as EOG
llm_load_vocab: control token: 128357 '<|place▁holder▁no▁357|>' is not marked as EOG
llm_load_vocab: control token: 128362 '<|place▁holder▁no▁362|>' is not marked as EOG
llm_load_vocab: control token: 128038 '<|place▁holder▁no▁38|>' is not marked as EOG
llm_load_vocab: control token: 128275 '<|place▁holder▁no▁275|>' is not marked as EOG
llm_load_vocab: control token: 128742 '<|place▁holder▁no▁742|>' is not marked as EOG
llm_load_vocab: control token: 128196 '<|place▁holder▁no▁196|>' is not marked as EOG
llm_load_vocab: control token: 128683 '<|place▁holder▁no▁683|>' is not marked as EOG
llm_load_vocab: control token: 128269 '<|place▁holder▁no▁269|>' is not marked as EOG
llm_load_vocab: control token: 128512 '<|place▁holder▁no▁512|>' is not marked as EOG
llm_load_vocab: control token: 128381 '<|place▁holder▁no▁381|>' is not marked as EOG
llm_load_vocab: control token: 128377 '<|place▁holder▁no▁377|>' is not marked as EOG
llm_load_vocab: control token: 128576 '<|place▁holder▁no▁576|>' is not marked as EOG
llm_load_vocab: control token: 128218 '<|place▁holder▁no▁218|>' is not marked as EOG
llm_load_vocab: control token: 128762 '<|place▁holder▁no▁762|>' is not marked as EOG
llm_load_vocab: control token: 128044 '<|place▁holder▁no▁44|>' is not marked as EOG
llm_load_vocab: control token: 128252 '<|place▁holder▁no▁252|>' is not marked as EOG
llm_load_vocab: control token: 128156 '<|place▁holder▁no▁156|>' is not marked as EOG
llm_load_vocab: control token: 128415 '<|place▁holder▁no▁415|>' is not marked as EOG
llm_load_vocab: control token: 128118 '<|place▁holder▁no▁118|>' is not marked as EOG
llm_load_vocab: control token: 128490 '<|place▁holder▁no▁490|>' is not marked as EOG
llm_load_vocab: control token: 128439 '<|place▁holder▁no▁439|>' is not marked as EOG
llm_load_vocab: control token: 128593 '<|place▁holder▁no▁593|>' is not marked as EOG
llm_load_vocab: control token: 128323 '<|place▁holder▁no▁323|>' is not marked as EOG
llm_load_vocab: control token: 128441 '<|place▁holder▁no▁441|>' is not marked as EOG
llm_load_vocab: control token: 128172 '<|place▁holder▁no▁172|>' is not marked as EOG
llm_load_vocab: control token: 128097 '<|place▁holder▁no▁97|>' is not marked as EOG
llm_load_vocab: control token: 128652 '<|place▁holder▁no▁652|>' is not marked as EOG
llm_load_vocab: control token: 128516 '<|place▁holder▁no▁516|>' is not marked as EOG
llm_load_vocab: control token: 128241 '<|place▁holder▁no▁241|>' is not marked as EOG
llm_load_vocab: control token: 128360 '<|place▁holder▁no▁360|>' is not marked as EOG
llm_load_vocab: control token: 128267 '<|place▁holder▁no▁267|>' is not marked as EOG
llm_load_vocab: control token: 128673 '<|place▁holder▁no▁673|>' is not marked as EOG
llm_load_vocab: control token: 128033 '<|place▁holder▁no▁33|>' is not marked as EOG
llm_load_vocab: control token: 128387 '<|place▁holder▁no▁387|>' is not marked as EOG
llm_load_vocab: control token: 128430 '<|place▁holder▁no▁430|>' is not marked as EOG
llm_load_vocab: control token: 128471 '<|place▁holder▁no▁471|>' is not marked as EOG
llm_load_vocab: control token: 128510 '<|place▁holder▁no▁510|>' is not marked as EOG
llm_load_vocab: control token: 128089 '<|place▁holder▁no▁89|>' is not marked as EOG
llm_load_vocab: control token: 128494 '<|place▁holder▁no▁494|>' is not marked as EOG
llm_load_vocab: control token: 128068 '<|place▁holder▁no▁68|>' is not marked as EOG
llm_load_vocab: control token: 128440 '<|place▁holder▁no▁440|>' is not marked as EOG
llm_load_vocab: control token: 128529 '<|place▁holder▁no▁529|>' is not marked as EOG
llm_load_vocab: control token: 128584 '<|place▁holder▁no▁584|>' is not marked as EOG
llm_load_vocab: control token: 128032 '<|place▁holder▁no▁32|>' is not marked as EOG
llm_load_vocab: control token: 128210 '<|place▁holder▁no▁210|>' is not marked as EOG
llm_load_vocab: control token: 128771 '<|place▁holder▁no▁771|>' is not marked as EOG
llm_load_vocab: control token: 128167 '<|place▁holder▁no▁167|>' is not marked as EOG
llm_load_vocab: control token: 128524 '<|place▁holder▁no▁524|>' is not marked as EOG
llm_load_vocab: control token: 128572 '<|place▁holder▁no▁572|>' is not marked as EOG
llm_load_vocab: control token: 128074 '<|place▁holder▁no▁74|>' is not marked as EOG
llm_load_vocab: control token: 128654 '<|place▁holder▁no▁654|>' is not marked as EOG
llm_load_vocab: control token: 128002 '<|place▁holder▁no▁2|>' is not marked as EOG
llm_load_vocab: control token: 128520 '<|place▁holder▁no▁520|>' is not marked as EOG
llm_load_vocab: control token: 128606 '<|place▁holder▁no▁606|>' is not marked as EOG
llm_load_vocab: control token: 128410 '<|place▁holder▁no▁410|>' is not marked as EOG
llm_load_vocab: control token: 128740 '<|place▁holder▁no▁740|>' is not marked as EOG
llm_load_vocab: control token: 128497 '<|place▁holder▁no▁497|>' is not marked as EOG
llm_load_vocab: control token: 128632 '<|place▁holder▁no▁632|>' is not marked as EOG
llm_load_vocab: control token: 128573 '<|place▁holder▁no▁573|>' is not marked as EOG
llm_load_vocab: control token: 128169 '<|place▁holder▁no▁169|>' is not marked as EOG
llm_load_vocab: control token: 128300 '<|place▁holder▁no▁300|>' is not marked as EOG
llm_load_vocab: control token: 128249 '<|place▁holder▁no▁249|>' is not marked as EOG
llm_load_vocab: control token: 128003 '<|place▁holder▁no▁3|>' is not marked as EOG
llm_load_vocab: control token: 128496 '<|place▁holder▁no▁496|>' is not marked as EOG
llm_load_vocab: control token: 128105 '<|place▁holder▁no▁105|>' is not marked as EOG
llm_load_vocab: control token: 128590 '<|place▁holder▁no▁590|>' is not marked as EOG
llm_load_vocab: control token: 128190 '<|place▁holder▁no▁190|>' is not marked as EOG
llm_load_vocab: control token: 128641 '<|place▁holder▁no▁641|>' is not marked as EOG
llm_load_vocab: control token: 128324 '<|place▁holder▁no▁324|>' is not marked as EOG
llm_load_vocab: control token: 128768 '<|place▁holder▁no▁768|>' is not marked as EOG
llm_load_vocab: control token: 128540 '<|place▁holder▁no▁540|>' is not marked as EOG
llm_load_vocab: control token: 128423 '<|place▁holder▁no▁423|>' is not marked as EOG
llm_load_vocab: control token: 128107 '<|place▁holder▁no▁107|>' is not marked as EOG
llm_load_vocab: control token: 128143 '<|place▁holder▁no▁143|>' is not marked as EOG
llm_load_vocab: control token: 128421 '<|place▁holder▁no▁421|>' is not marked as EOG
llm_load_vocab: control token: 128276 '<|place▁holder▁no▁276|>' is not marked as EOG
llm_load_vocab: control token: 128446 '<|place▁holder▁no▁446|>' is not marked as EOG
llm_load_vocab: control token: 128773 '<|place▁holder▁no▁773|>' is not marked as EOG
llm_load_vocab: control token: 128163 '<|place▁holder▁no▁163|>' is not marked as EOG
llm_load_vocab: control token: 128042 '<|place▁holder▁no▁42|>' is not marked as EOG
llm_load_vocab: control token: 128157 '<|place▁holder▁no▁157|>' is not marked as EOG
llm_load_vocab: control token: 128577 '<|place▁holder▁no▁577|>' is not marked as EOG
llm_load_vocab: control token: 128073 '<|place▁holder▁no▁73|>' is not marked as EOG
llm_load_vocab: control token: 128386 '<|place▁holder▁no▁386|>' is not marked as EOG
llm_load_vocab: control token: 128456 '<|place▁holder▁no▁456|>' is not marked as EOG
llm_load_vocab: control token: 128096 '<|place▁holder▁no▁96|>' is not marked as EOG
llm_load_vocab: control token: 128214 '<|place▁holder▁no▁214|>' is not marked as EOG
llm_load_vocab: control token: 128160 '<|place▁holder▁no▁160|>' is not marked as EOG
llm_load_vocab: control token: 128663 '<|place▁holder▁no▁663|>' is not marked as EOG
llm_load_vocab: control token: 128608 '<|place▁holder▁no▁608|>' is not marked as EOG
llm_load_vocab: control token: 128285 '<|place▁holder▁no▁285|>' is not marked as EOG
llm_load_vocab: control token: 128216 '<|place▁holder▁no▁216|>' is not marked as EOG
llm_load_vocab: control token: 128029 '<|place▁holder▁no▁29|>' is not marked as EOG
llm_load_vocab: control token: 128094 '<|place▁holder▁no▁94|>' is not marked as EOG
llm_load_vocab: control token: 128511 '<|place▁holder▁no▁511|>' is not marked as EOG
llm_load_vocab: control token: 128018 '<|place▁holder▁no▁18|>' is not marked as EOG
llm_load_vocab: control token: 128753 '<|place▁holder▁no▁753|>' is not marked as EOG
llm_load_vocab: control token: 128676 '<|place▁holder▁no▁676|>' is not marked as EOG
llm_load_vocab: control token: 128752 '<|place▁holder▁no▁752|>' is not marked as EOG
llm_load_vocab: control token: 128070 '<|place▁holder▁no▁70|>' is not marked as EOG
llm_load_vocab: control token: 128145 '<|place▁holder▁no▁145|>' is not marked as EOG
llm_load_vocab: control token: 128554 '<|place▁holder▁no▁554|>' is not marked as EOG
llm_load_vocab: control token: 128345 '<|place▁holder▁no▁345|>' is not marked as EOG
llm_load_vocab: control token: 128223 '<|place▁holder▁no▁223|>' is not marked as EOG
llm_load_vocab: control token: 128231 '<|place▁holder▁no▁231|>' is not marked as EOG
llm_load_vocab: control token: 128777 '<|place▁holder▁no▁777|>' is not marked as EOG
llm_load_vocab: control token: 128635 '<|place▁holder▁no▁635|>' is not marked as EOG
llm_load_vocab: control token: 128708 '<|place▁holder▁no▁708|>' is not marked as EOG
llm_load_vocab: control token: 128735 '<|place▁holder▁no▁735|>' is not marked as EOG
llm_load_vocab: control token: 128776 '<|place▁holder▁no▁776|>' is not marked as EOG
llm_load_vocab: control token: 128112 '<|place▁holder▁no▁112|>' is not marked as EOG
llm_load_vocab: control token: 128301 '<|place▁holder▁no▁301|>' is not marked as EOG
llm_load_vocab: control token: 128675 '<|place▁holder▁no▁675|>' is not marked as EOG
llm_load_vocab: control token: 128518 '<|place▁holder▁no▁518|>' is not marked as EOG
llm_load_vocab: control token: 128162 '<|place▁holder▁no▁162|>' is not marked as EOG
llm_load_vocab: control token: 128767 '<|place▁holder▁no▁767|>' is not marked as EOG
llm_load_vocab: control token: 128288 '<|place▁holder▁no▁288|>' is not marked as EOG
llm_load_vocab: control token: 128493 '<|place▁holder▁no▁493|>' is not marked as EOG
llm_load_vocab: control token: 128161 '<|place▁holder▁no▁161|>' is not marked as EOG
llm_load_vocab: control token: 128354 '<|place▁holder▁no▁354|>' is not marked as EOG
llm_load_vocab: control token: 128613 '<|place▁holder▁no▁613|>' is not marked as EOG
llm_load_vocab: control token: 128230 '<|place▁holder▁no▁230|>' is not marked as EOG
llm_load_vocab: control token: 128133 '<|place▁holder▁no▁133|>' is not marked as EOG
llm_load_vocab: control token: 128307 '<|place▁holder▁no▁307|>' is not marked as EOG
llm_load_vocab: control token: 128599 '<|place▁holder▁no▁599|>' is not marked as EOG
llm_load_vocab: control token: 128330 '<|place▁holder▁no▁330|>' is not marked as EOG
llm_load_vocab: control token: 128424 '<|place▁holder▁no▁424|>' is not marked as EOG
llm_load_vocab: control token: 128336 '<|place▁holder▁no▁336|>' is not marked as EOG
llm_load_vocab: control token: 128464 '<|place▁holder▁no▁464|>' is not marked as EOG
llm_load_vocab: control token: 128126 '<|place▁holder▁no▁126|>' is not marked as EOG
llm_load_vocab: control token: 128807 '<|tool▁calls▁end|>' is not marked as EOG
llm_load_vocab: control token: 128245 '<|place▁holder▁no▁245|>' is not marked as EOG
llm_load_vocab: control token: 128502 '<|place▁holder▁no▁502|>' is not marked as EOG
llm_load_vocab: control token: 128459 '<|place▁holder▁no▁459|>' is not marked as EOG
llm_load_vocab: control token: 128040 '<|place▁holder▁no▁40|>' is not marked as EOG
llm_load_vocab: control token: 128039 '<|place▁holder▁no▁39|>' is not marked as EOG
llm_load_vocab: control token: 128693 '<|place▁holder▁no▁693|>' is not marked as EOG
llm_load_vocab: control token: 128645 '<|place▁holder▁no▁645|>' is not marked as EOG
llm_load_vocab: control token: 128719 '<|place▁holder▁no▁719|>' is not marked as EOG
llm_load_vocab: control token: 128592 '<|place▁holder▁no▁592|>' is not marked as EOG
llm_load_vocab: control token: 128078 '<|place▁holder▁no▁78|>' is not marked as EOG
llm_load_vocab: control token: 128779 '<|place▁holder▁no▁779|>' is not marked as EOG
llm_load_vocab: control token: 128092 '<|place▁holder▁no▁92|>' is not marked as EOG
llm_load_vocab: control token: 128280 '<|place▁holder▁no▁280|>' is not marked as EOG
llm_load_vocab: control token: 128035 '<|place▁holder▁no▁35|>' is not marked as EOG
llm_load_vocab: control token: 128684 '<|place▁holder▁no▁684|>' is not marked as EOG
llm_load_vocab: control token: 128650 '<|place▁holder▁no▁650|>' is not marked as EOG
llm_load_vocab: control token: 128205 '<|place▁holder▁no▁205|>' is not marked as EOG
llm_load_vocab: control token: 128341 '<|place▁holder▁no▁341|>' is not marked as EOG
llm_load_vocab: control token: 128281 '<|place▁holder▁no▁281|>' is not marked as EOG
llm_load_vocab: control token: 128680 '<|place▁holder▁no▁680|>' is not marked as EOG
llm_load_vocab: control token: 128199 '<|place▁holder▁no▁199|>' is not marked as EOG
llm_load_vocab: control token: 128805 '<|EOT|>' is not marked as EOG
llm_load_vocab: control token: 128351 '<|place▁holder▁no▁351|>' is not marked as EOG
llm_load_vocab: control token: 128506 '<|place▁holder▁no▁506|>' is not marked as EOG
llm_load_vocab: control token: 128612 '<|place▁holder▁no▁612|>' is not marked as EOG
llm_load_vocab: control token: 128138 '<|place▁holder▁no▁138|>' is not marked as EOG
llm_load_vocab: control token: 128479 '<|place▁holder▁no▁479|>' is not marked as EOG
llm_load_vocab: control token: 128428 '<|place▁holder▁no▁428|>' is not marked as EOG
llm_load_vocab: control token: 128744 '<|place▁holder▁no▁744|>' is not marked as EOG
llm_load_vocab: control token: 128005 '<|place▁holder▁no▁5|>' is not marked as EOG
llm_load_vocab: control token: 128711 '<|place▁holder▁no▁711|>' is not marked as EOG
llm_load_vocab: control token: 128757 '<|place▁holder▁no▁757|>' is not marked as EOG
llm_load_vocab: control token: 128394 '<|place▁holder▁no▁394|>' is not marked as EOG
llm_load_vocab: control token: 128203 '<|place▁holder▁no▁203|>' is not marked as EOG
llm_load_vocab: control token: 128812 '<|tool▁output▁begin|>' is not marked as EOG
llm_load_vocab: control token: 128403 '<|place▁holder▁no▁403|>' is not marked as EOG
llm_load_vocab: control token: 128388 '<|place▁holder▁no▁388|>' is not marked as EOG
llm_load_vocab: control token: 128570 '<|place▁holder▁no▁570|>' is not marked as EOG
llm_load_vocab: control token: 128815 '<|PAD▁TOKEN|>' is not marked as EOG
llm_load_vocab: control token: 128766 '<|place▁holder▁no▁766|>' is not marked as EOG
llm_load_vocab: control token: 128412 '<|place▁holder▁no▁412|>' is not marked as EOG
llm_load_vocab: control token: 128619 '<|place▁holder▁no▁619|>' is not marked as EOG
llm_load_vocab: control token: 128409 '<|place▁holder▁no▁409|>' is not marked as EOG
llm_load_vocab: control token: 128108 '<|place▁holder▁no▁108|>' is not marked as EOG
llm_load_vocab: control token: 128328 '<|place▁holder▁no▁328|>' is not marked as EOG
llm_load_vocab: control token: 128477 '<|place▁holder▁no▁477|>' is not marked as EOG
llm_load_vocab: control token: 128728 '<|place▁holder▁no▁728|>' is not marked as EOG
llm_load_vocab: control token: 128278 '<|place▁holder▁no▁278|>' is not marked as EOG
llm_load_vocab: control token: 128111 '<|place▁holder▁no▁111|>' is not marked as EOG
llm_load_vocab: control token: 128725 '<|place▁holder▁no▁725|>' is not marked as EOG
llm_load_vocab: control token: 128204 '<|place▁holder▁no▁204|>' is not marked as EOG
llm_load_vocab: control token: 128561 '<|place▁holder▁no▁561|>' is not marked as EOG
llm_load_vocab: control token: 128274 '<|place▁holder▁no▁274|>' is not marked as EOG
llm_load_vocab: control token: 128023 '<|place▁holder▁no▁23|>' is not marked as EOG
llm_load_vocab: control token: 128485 '<|place▁holder▁no▁485|>' is not marked as EOG
llm_load_vocab: control token: 128389 '<|place▁holder▁no▁389|>' is not marked as EOG
llm_load_vocab: control token: 128367 '<|place▁holder▁no▁367|>' is not marked as EOG
llm_load_vocab: control token: 128781 '<|place▁holder▁no▁781|>' is not marked as EOG
llm_load_vocab: control token: 128045 '<|place▁holder▁no▁45|>' is not marked as EOG
llm_load_vocab: control token: 128467 '<|place▁holder▁no▁467|>' is not marked as EOG
llm_load_vocab: control token: 128182 '<|place▁holder▁no▁182|>' is not marked as EOG
llm_load_vocab: control token: 128565 '<|place▁holder▁no▁565|>' is not marked as EOG
llm_load_vocab: control token: 128741 '<|place▁holder▁no▁741|>' is not marked as EOG
llm_load_vocab: control token: 128337 '<|place▁holder▁no▁337|>' is not marked as EOG
llm_load_vocab: control token: 128004 '<|place▁holder▁no▁4|>' is not marked as EOG
llm_load_vocab: control token: 128482 '<|place▁holder▁no▁482|>' is not marked as EOG
llm_load_vocab: control token: 128335 '<|place▁holder▁no▁335|>' is not marked as EOG
llm_load_vocab: control token: 128129 '<|place▁holder▁no▁129|>' is not marked as EOG
llm_load_vocab: control token: 128495 '<|place▁holder▁no▁495|>' is not marked as EOG
llm_load_vocab: control token: 128545 '<|place▁holder▁no▁545|>' is not marked as EOG
llm_load_vocab: control token: 128168 '<|place▁holder▁no▁168|>' is not marked as EOG
llm_load_vocab: control token: 128780 '<|place▁holder▁no▁780|>' is not marked as EOG
llm_load_vocab: control token: 128240 '<|place▁holder▁no▁240|>' is not marked as EOG
llm_load_vocab: control token: 128186 '<|place▁holder▁no▁186|>' is not marked as EOG
llm_load_vocab: control token: 128640 '<|place▁holder▁no▁640|>' is not marked as EOG
llm_load_vocab: control token: 128264 '<|place▁holder▁no▁264|>' is not marked as EOG
llm_load_vocab: control token: 128021 '<|place▁holder▁no▁21|>' is not marked as EOG
llm_load_vocab: control token: 128571 '<|place▁holder▁no▁571|>' is not marked as EOG
llm_load_vocab: control token: 128193 '<|place▁holder▁no▁193|>' is not marked as EOG
llm_load_vocab: control token: 128128 '<|place▁holder▁no▁128|>' is not marked as EOG
llm_load_vocab: control token: 128695 '<|place▁holder▁no▁695|>' is not marked as EOG
llm_load_vocab: control token: 128703 '<|place▁holder▁no▁703|>' is not marked as EOG
llm_load_vocab: control token: 128061 '<|place▁holder▁no▁61|>' is not marked as EOG
llm_load_vocab: control token: 128611 '<|place▁holder▁no▁611|>' is not marked as EOG
llm_load_vocab: control token: 128246 '<|place▁holder▁no▁246|>' is not marked as EOG
llm_load_vocab: control token: 128077 '<|place▁holder▁no▁77|>' is not marked as EOG
llm_load_vocab: control token: 128217 '<|place▁holder▁no▁217|>' is not marked as EOG
llm_load_vocab: control token: 128380 '<|place▁holder▁no▁380|>' is not marked as EOG
llm_load_vocab: control token: 128567 '<|place▁holder▁no▁567|>' is not marked as EOG
llm_load_vocab: control token: 128365 '<|place▁holder▁no▁365|>' is not marked as EOG
llm_load_vocab: control token: 128793 '<|place▁holder▁no▁793|>' is not marked as EOG
llm_load_vocab: control token: 128547 '<|place▁holder▁no▁547|>' is not marked as EOG
llm_load_vocab: control token:      2 '<|▁pad▁|>' is not marked as EOG
llm_load_vocab: control token: 128272 '<|place▁holder▁no▁272|>' is not marked as EOG
llm_load_vocab: control token: 128633 '<|place▁holder▁no▁633|>' is not marked as EOG
llm_load_vocab: control token: 128580 '<|place▁holder▁no▁580|>' is not marked as EOG
llm_load_vocab: control token: 128677 '<|place▁holder▁no▁677|>' is not marked as EOG
llm_load_vocab: control token: 128255 '<|place▁holder▁no▁255|>' is not marked as EOG
llm_load_vocab: control token: 128434 '<|place▁holder▁no▁434|>' is not marked as EOG
llm_load_vocab: control token: 128647 '<|place▁holder▁no▁647|>' is not marked as EOG
llm_load_vocab: control token: 128656 '<|place▁holder▁no▁656|>' is not marked as EOG
llm_load_vocab: control token: 128179 '<|place▁holder▁no▁179|>' is not marked as EOG
llm_load_vocab: control token: 128270 '<|place▁holder▁no▁270|>' is not marked as EOG
llm_load_vocab: control token: 128342 '<|place▁holder▁no▁342|>' is not marked as EOG
llm_load_vocab: control token: 128305 '<|place▁holder▁no▁305|>' is not marked as EOG
llm_load_vocab: control token: 128299 '<|place▁holder▁no▁299|>' is not marked as EOG
llm_load_vocab: control token: 128431 '<|place▁holder▁no▁431|>' is not marked as EOG
llm_load_vocab: control token: 128154 '<|place▁holder▁no▁154|>' is not marked as EOG
llm_load_vocab: control token: 128371 '<|place▁holder▁no▁371|>' is not marked as EOG
llm_load_vocab: control token: 128244 '<|place▁holder▁no▁244|>' is not marked as EOG
llm_load_vocab: control token: 128585 '<|place▁holder▁no▁585|>' is not marked as EOG
llm_load_vocab: control token: 128000 '<|place▁holder▁no▁0|>' is not marked as EOG
llm_load_vocab: control token: 128669 '<|place▁holder▁no▁669|>' is not marked as EOG
llm_load_vocab: control token: 128648 '<|place▁holder▁no▁648|>' is not marked as EOG
llm_load_vocab: control token: 128103 '<|place▁holder▁no▁103|>' is not marked as EOG
llm_load_vocab: control token: 128737 '<|place▁holder▁no▁737|>' is not marked as EOG
llm_load_vocab: control token: 128667 '<|place▁holder▁no▁667|>' is not marked as EOG
llm_load_vocab: control token: 128356 '<|place▁holder▁no▁356|>' is not marked as EOG
llm_load_vocab: control token: 128261 '<|place▁holder▁no▁261|>' is not marked as EOG
llm_load_vocab: control token: 128503 '<|place▁holder▁no▁503|>' is not marked as EOG
llm_load_vocab: control token: 128326 '<|place▁holder▁no▁326|>' is not marked as EOG
llm_load_vocab: control token: 128671 '<|place▁holder▁no▁671|>' is not marked as EOG
llm_load_vocab: control token: 128637 '<|place▁holder▁no▁637|>' is not marked as EOG
llm_load_vocab: control token: 128148 '<|place▁holder▁no▁148|>' is not marked as EOG
llm_load_vocab: control token: 128229 '<|place▁holder▁no▁229|>' is not marked as EOG
llm_load_vocab: control token: 128556 '<|place▁holder▁no▁556|>' is not marked as EOG
llm_load_vocab: control token: 128438 '<|place▁holder▁no▁438|>' is not marked as EOG
llm_load_vocab: control token: 128315 '<|place▁holder▁no▁315|>' is not marked as EOG
llm_load_vocab: control token: 128507 '<|place▁holder▁no▁507|>' is not marked as EOG
llm_load_vocab: control token: 128368 '<|place▁holder▁no▁368|>' is not marked as EOG
llm_load_vocab: control token: 128814 '<|tool▁sep|>' is not marked as EOG
llm_load_vocab: control token: 128117 '<|place▁holder▁no▁117|>' is not marked as EOG
llm_load_vocab: control token: 128277 '<|place▁holder▁no▁277|>' is not marked as EOG
llm_load_vocab: control token: 128660 '<|place▁holder▁no▁660|>' is not marked as EOG
llm_load_vocab: control token: 128310 '<|place▁holder▁no▁310|>' is not marked as EOG
llm_load_vocab: control token: 128707 '<|place▁holder▁no▁707|>' is not marked as EOG
llm_load_vocab: control token: 128433 '<|place▁holder▁no▁433|>' is not marked as EOG
llm_load_vocab: control token: 128177 '<|place▁holder▁no▁177|>' is not marked as EOG
llm_load_vocab: control token: 128500 '<|place▁holder▁no▁500|>' is not marked as EOG
llm_load_vocab: control token: 128437 '<|place▁holder▁no▁437|>' is not marked as EOG
llm_load_vocab: control token: 128031 '<|place▁holder▁no▁31|>' is not marked as EOG
llm_load_vocab: control token: 128698 '<|place▁holder▁no▁698|>' is not marked as EOG
llm_load_vocab: control token: 128254 '<|place▁holder▁no▁254|>' is not marked as EOG
llm_load_vocab: control token: 128445 '<|place▁holder▁no▁445|>' is not marked as EOG
llm_load_vocab: control token: 128526 '<|place▁holder▁no▁526|>' is not marked as EOG
llm_load_vocab: control token: 128011 '<|place▁holder▁no▁11|>' is not marked as EOG
llm_load_vocab: control token: 128304 '<|place▁holder▁no▁304|>' is not marked as EOG
llm_load_vocab: control token: 128586 '<|place▁holder▁no▁586|>' is not marked as EOG
llm_load_vocab: control token: 128454 '<|place▁holder▁no▁454|>' is not marked as EOG
llm_load_vocab: control token: 128189 '<|place▁holder▁no▁189|>' is not marked as EOG
llm_load_vocab: control token: 128679 '<|place▁holder▁no▁679|>' is not marked as EOG
llm_load_vocab: control token: 128062 '<|place▁holder▁no▁62|>' is not marked as EOG
llm_load_vocab: control token: 128318 '<|place▁holder▁no▁318|>' is not marked as EOG
llm_load_vocab: control token: 128455 '<|place▁holder▁no▁455|>' is not marked as EOG
llm_load_vocab: control token: 128705 '<|place▁holder▁no▁705|>' is not marked as EOG
llm_load_vocab: control token: 128747 '<|place▁holder▁no▁747|>' is not marked as EOG
llm_load_vocab: control token: 128620 '<|place▁holder▁no▁620|>' is not marked as EOG
llm_load_vocab: control token: 128289 '<|place▁holder▁no▁289|>' is not marked as EOG
llm_load_vocab: control token: 128802 '<|fim▁end|>' is not marked as EOG
llm_load_vocab: control token: 128331 '<|place▁holder▁no▁331|>' is not marked as EOG
llm_load_vocab: control token: 128610 '<|place▁holder▁no▁610|>' is not marked as EOG
llm_load_vocab: control token: 128025 '<|place▁holder▁no▁25|>' is not marked as EOG
llm_load_vocab: control token: 128568 '<|place▁holder▁no▁568|>' is not marked as EOG
llm_load_vocab: control token: 128390 '<|place▁holder▁no▁390|>' is not marked as EOG
llm_load_vocab: control token: 128066 '<|place▁holder▁no▁66|>' is not marked as EOG
llm_load_vocab: control token: 128350 '<|place▁holder▁no▁350|>' is not marked as EOG
llm_load_vocab: control token: 128153 '<|place▁holder▁no▁153|>' is not marked as EOG
llm_load_vocab: control token: 128629 '<|place▁holder▁no▁629|>' is not marked as EOG
llm_load_vocab: control token: 128722 '<|place▁holder▁no▁722|>' is not marked as EOG
llm_load_vocab: control token: 128191 '<|place▁holder▁no▁191|>' is not marked as EOG
llm_load_vocab: control token: 128607 '<|place▁holder▁no▁607|>' is not marked as EOG
llm_load_vocab: control token: 128598 '<|place▁holder▁no▁598|>' is not marked as EOG
llm_load_vocab: control token: 128253 '<|place▁holder▁no▁253|>' is not marked as EOG
llm_load_vocab: control token: 128100 '<|place▁holder▁no▁100|>' is not marked as EOG
llm_load_vocab: control token: 128121 '<|place▁holder▁no▁121|>' is not marked as EOG
llm_load_vocab: control token: 128302 '<|place▁holder▁no▁302|>' is not marked as EOG
llm_load_vocab: control token: 128175 '<|place▁holder▁no▁175|>' is not marked as EOG
llm_load_vocab: control token: 128661 '<|place▁holder▁no▁661|>' is not marked as EOG
llm_load_vocab: control token: 128519 '<|place▁holder▁no▁519|>' is not marked as EOG
llm_load_vocab: control token: 128250 '<|place▁holder▁no▁250|>' is not marked as EOG
llm_load_vocab: control token: 128165 '<|place▁holder▁no▁165|>' is not marked as EOG
llm_load_vocab: control token: 128788 '<|place▁holder▁no▁788|>' is not marked as EOG
llm_load_vocab: control token: 128489 '<|place▁holder▁no▁489|>' is not marked as EOG
llm_load_vocab: control token: 128604 '<|place▁holder▁no▁604|>' is not marked as EOG
llm_load_vocab: control token: 128093 '<|place▁holder▁no▁93|>' is not marked as EOG
llm_load_vocab: control token: 128730 '<|place▁holder▁no▁730|>' is not marked as EOG
llm_load_vocab: control token:      0 '<|begin▁of▁sentence|>' is not marked as EOG
llm_load_vocab: control token: 128370 '<|place▁holder▁no▁370|>' is not marked as EOG
llm_load_vocab: control token: 128164 '<|place▁holder▁no▁164|>' is not marked as EOG
llm_load_vocab: control token: 128007 '<|place▁holder▁no▁7|>' is not marked as EOG
llm_load_vocab: control token: 128083 '<|place▁holder▁no▁83|>' is not marked as EOG
llm_load_vocab: control token: 128090 '<|place▁holder▁no▁90|>' is not marked as EOG
llm_load_vocab: control token: 128334 '<|place▁holder▁no▁334|>' is not marked as EOG
llm_load_vocab: control token: 128312 '<|place▁holder▁no▁312|>' is not marked as EOG
llm_load_vocab: control token: 128016 '<|place▁holder▁no▁16|>' is not marked as EOG
llm_load_vocab: control token: 128256 '<|place▁holder▁no▁256|>' is not marked as EOG
llm_load_vocab: control token: 128499 '<|place▁holder▁no▁499|>' is not marked as EOG
llm_load_vocab: control token: 128427 '<|place▁holder▁no▁427|>' is not marked as EOG
llm_load_vocab: control token: 128426 '<|place▁holder▁no▁426|>' is not marked as EOG
llm_load_vocab: control token: 128786 '<|place▁holder▁no▁786|>' is not marked as EOG
llm_load_vocab: control token: 128015 '<|place▁holder▁no▁15|>' is not marked as EOG
llm_load_vocab: control token: 128379 '<|place▁holder▁no▁379|>' is not marked as EOG
llm_load_vocab: control token: 128638 '<|place▁holder▁no▁638|>' is not marked as EOG
llm_load_vocab: control token: 128286 '<|place▁holder▁no▁286|>' is not marked as EOG
llm_load_vocab: control token: 128247 '<|place▁holder▁no▁247|>' is not marked as EOG
llm_load_vocab: control token: 128813 '<|tool▁output▁end|>' is not marked as EOG
llm_load_vocab: control token:      1 '<|end▁of▁sentence|>' is not marked as EOG
llm_load_vocab: control token: 128800 '<|fim▁hole|>' is not marked as EOG
llm_load_vocab: control token: 128027 '<|place▁holder▁no▁27|>' is not marked as EOG
llm_load_vocab: control token: 128749 '<|place▁holder▁no▁749|>' is not marked as EOG
llm_load_vocab: control token: 128366 '<|place▁holder▁no▁366|>' is not marked as EOG
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 819
llm_load_vocab: token to piece cache size = 0.8223 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = deepseek2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 129280
llm_load_print_meta: n_merges         = 127741
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 163840
llm_load_print_meta: n_embd           = 7168
llm_load_print_meta: n_layer          = 61
llm_load_print_meta: n_head           = 128
llm_load_print_meta: n_head_kv        = 128
llm_load_print_meta: n_rot            = 64
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 192
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 24576
llm_load_print_meta: n_embd_v_gqa     = 16384
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 18432
llm_load_print_meta: n_expert         = 256
llm_load_print_meta: n_expert_used    = 8
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = yarn
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 0.025
llm_load_print_meta: n_ctx_orig_yarn  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 671B
llm_load_print_meta: model ftype      = IQ1_S - 1.5625 bpw
llm_load_print_meta: model params     = 671.03 B
llm_load_print_meta: model size       = 130.60 GiB (1.67 BPW) 
llm_load_print_meta: general.name     = DeepSeek R1 BF16
llm_load_print_meta: BOS token        = 0 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token        = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token        = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: PAD token        = 128815 '<|PAD▁TOKEN|>'
llm_load_print_meta: LF token         = 131 'Ä'
llm_load_print_meta: FIM PRE token    = 128801 '<|fim▁begin|>'
llm_load_print_meta: FIM SUF token    = 128800 '<|fim▁hole|>'
llm_load_print_meta: FIM MID token    = 128802 '<|fim▁end|>'
llm_load_print_meta: EOG token        = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: max token length = 256
llm_load_print_meta: n_layer_dense_lead   = 3
llm_load_print_meta: n_lora_q             = 1536
llm_load_print_meta: n_lora_kv            = 512
llm_load_print_meta: n_ff_exp             = 2048
llm_load_print_meta: n_expert_shared      = 1
llm_load_print_meta: expert_weights_scale = 2.5
llm_load_print_meta: expert_weights_norm  = 1
llm_load_print_meta: expert_gating_func   = sigmoid
llm_load_print_meta: rope_yarn_log_mul    = 0.1000
llm_load_tensors: tensor 'token_embd.weight' (q4_K) (and 1024 others) cannot be used with preferred buffer type CPU_AARCH64, using CPU instead
llm_load_tensors: offloading 0 repeating layers to GPU
llm_load_tensors: offloaded 0/62 layers to GPU
llm_load_tensors:          CPU model buffer size = 133730.06 MiB
load_all_data: no device found for buffer type CPU for async uploads
time=2025-01-29T06:36:08.616-05:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
time=2025-01-29T06:36:08.868-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.01"
time=2025-01-29T06:36:09.119-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.02"
time=2025-01-29T06:36:09.370-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.04"
time=2025-01-29T06:36:09.622-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.04"
time=2025-01-29T06:36:09.874-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.05"
time=2025-01-29T06:36:10.126-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.07"
time=2025-01-29T06:36:10.377-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.08"
time=2025-01-29T06:36:10.628-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.09"
time=2025-01-29T06:36:10.879-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.10"
time=2025-01-29T06:36:11.131-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.12"
time=2025-01-29T06:36:11.381-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.13"
time=2025-01-29T06:36:11.633-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.14"
time=2025-01-29T06:36:11.884-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.15"
time=2025-01-29T06:36:12.135-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.16"
time=2025-01-29T06:36:12.386-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.17"
time=2025-01-29T06:36:12.637-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.18"
time=2025-01-29T06:36:12.888-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.20"
time=2025-01-29T06:36:13.138-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.21"
time=2025-01-29T06:36:13.389-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.21"
time=2025-01-29T06:36:13.640-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.22"
time=2025-01-29T06:36:13.890-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.23"
time=2025-01-29T06:36:14.141-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.24"
time=2025-01-29T06:36:14.392-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.25"
time=2025-01-29T06:36:14.643-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.25"
time=2025-01-29T06:36:14.894-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.26"
time=2025-01-29T06:36:15.145-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.26"
time=2025-01-29T06:36:15.396-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.27"
time=2025-01-29T06:36:15.648-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.27"
time=2025-01-29T06:36:15.899-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.28"
time=2025-01-29T06:36:16.150-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.28"
time=2025-01-29T06:36:16.401-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.30"
time=2025-01-29T06:36:16.652-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.30"
time=2025-01-29T06:36:16.902-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.30"
time=2025-01-29T06:36:17.153-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.31"
time=2025-01-29T06:36:17.404-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.31"
time=2025-01-29T06:36:17.655-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.32"
time=2025-01-29T06:36:17.906-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.33"
time=2025-01-29T06:36:18.157-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.34"
time=2025-01-29T06:36:18.408-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.34"
time=2025-01-29T06:36:18.659-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.35"
time=2025-01-29T06:36:18.910-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.35"
time=2025-01-29T06:36:19.161-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.36"
time=2025-01-29T06:36:19.413-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.36"
time=2025-01-29T06:36:19.663-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.37"
time=2025-01-29T06:36:19.914-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.38"
time=2025-01-29T06:36:20.165-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.39"
time=2025-01-29T06:36:20.416-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.39"
time=2025-01-29T06:36:20.667-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.40"
time=2025-01-29T06:36:20.919-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.40"
time=2025-01-29T06:36:21.170-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.41"
time=2025-01-29T06:36:21.421-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.41"
time=2025-01-29T06:36:21.672-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.42"
time=2025-01-29T06:36:21.924-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.42"
time=2025-01-29T06:36:22.175-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.43"
time=2025-01-29T06:36:22.426-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.44"
time=2025-01-29T06:36:22.677-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.44"
time=2025-01-29T06:36:22.928-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.45"
time=2025-01-29T06:36:23.180-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.45"
time=2025-01-29T06:36:23.432-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.46"
time=2025-01-29T06:36:23.934-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.46"
time=2025-01-29T06:36:24.185-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.47"
time=2025-01-29T06:36:24.438-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.47"
time=2025-01-29T06:36:24.690-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.48"
time=2025-01-29T06:36:24.940-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.49"
time=2025-01-29T06:36:25.192-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.49"
time=2025-01-29T06:36:25.447-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.50"
time=2025-01-29T06:36:25.949-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.50"
time=2025-01-29T06:36:26.200-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.51"
time=2025-01-29T06:36:26.451-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.51"
time=2025-01-29T06:36:26.701-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.52"
time=2025-01-29T06:36:26.951-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.52"
time=2025-01-29T06:36:27.206-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.53"
time=2025-01-29T06:36:27.456-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.54"
time=2025-01-29T06:36:27.707-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.54"
time=2025-01-29T06:36:27.958-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.55"
time=2025-01-29T06:36:28.208-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.55"
time=2025-01-29T06:36:28.459-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.56"
time=2025-01-29T06:36:28.722-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.56"
time=2025-01-29T06:36:29.224-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.57"
time=2025-01-29T06:36:29.476-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.58"
time=2025-01-29T06:36:29.727-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.58"
time=2025-01-29T06:36:29.978-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.59"
time=2025-01-29T06:36:30.230-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.59"
time=2025-01-29T06:36:30.481-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.60"
time=2025-01-29T06:36:30.732-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.60"
time=2025-01-29T06:36:30.983-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.61"
time=2025-01-29T06:36:31.484-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.62"
time=2025-01-29T06:36:31.986-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.62"
time=2025-01-29T06:36:32.238-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.63"
time=2025-01-29T06:36:32.489-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.64"
time=2025-01-29T06:36:32.740-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.64"
time=2025-01-29T06:36:32.992-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.65"
time=2025-01-29T06:36:33.243-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.65"
time=2025-01-29T06:36:33.744-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.66"
time=2025-01-29T06:36:33.995-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.67"
time=2025-01-29T06:36:34.246-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.67"
time=2025-01-29T06:36:34.497-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.68"
time=2025-01-29T06:36:34.748-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.68"
time=2025-01-29T06:36:34.998-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.69"
time=2025-01-29T06:36:35.249-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.69"
time=2025-01-29T06:36:35.501-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.70"
time=2025-01-29T06:36:35.752-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.70"
time=2025-01-29T06:36:36.204-05:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server not responding"
time=2025-01-29T06:36:38.264-05:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: signal: killed"
time=2025-01-29T06:36:38.265-05:00 level=DEBUG source=sched.go:458 msg="triggering expiration for failed load" model=/Users/fserb/.ollama/models/blobs/sha256-a542caee8df72af41ad48d75b94adacb5fbc61856930460bd599d835400fb3b6
time=2025-01-29T06:36:38.265-05:00 level=DEBUG source=sched.go:360 msg="runner expired event received" modelPath=/Users/fserb/.ollama/models/blobs/sha256-a542caee8df72af41ad48d75b94adacb5fbc61856930460bd599d835400fb3b6
time=2025-01-29T06:36:38.265-05:00 level=DEBUG source=sched.go:375 msg="got lock to unload" modelPath=/Users/fserb/.ollama/models/blobs/sha256-a542caee8df72af41ad48d75b94adacb5fbc61856930460bd599d835400fb3b6
time=2025-01-29T06:36:38.265-05:00 level=DEBUG source=server.go:1079 msg="stopping llama server"
time=2025-01-29T06:36:38.265-05:00 level=DEBUG source=sched.go:380 msg="runner released" modelPath=/Users/fserb/.ollama/models/blobs/sha256-a542caee8df72af41ad48d75b94adacb5fbc61856930460bd599d835400fb3b6
time=2025-01-29T06:36:38.265-05:00 level=DEBUG source=sched.go:384 msg="sending an unloaded event" modelPath=/Users/fserb/.ollama/models/blobs/sha256-a542caee8df72af41ad48d75b94adacb5fbc61856930460bd599d835400fb3b6
time=2025-01-29T06:36:38.265-05:00 level=DEBUG source=sched.go:308 msg="ignoring unload event with no pending requests"
[GIN] 2025/01/29 - 06:36:38 | 500 | 29.935871625s |       127.0.0.1 | POST     "/api/generate"
        
llm_load_vocab: control token: 128169 '<|place▁holder▁no▁169|>' is not marked as EOG llm_load_vocab: control token: 128300 '<|place▁holder▁no▁300|>' is not marked as EOG llm_load_vocab: control token: 128249 '<|place▁holder▁no▁249|>' is not marked as EOG llm_load_vocab: control token: 128003 '<|place▁holder▁no▁3|>' is not marked as EOG llm_load_vocab: control token: 128496 '<|place▁holder▁no▁496|>' is not marked as EOG llm_load_vocab: control token: 128105 '<|place▁holder▁no▁105|>' is not marked as EOG llm_load_vocab: control token: 128590 '<|place▁holder▁no▁590|>' is not marked as EOG llm_load_vocab: control token: 128190 '<|place▁holder▁no▁190|>' is not marked as EOG llm_load_vocab: control token: 128641 '<|place▁holder▁no▁641|>' is not marked as EOG llm_load_vocab: control token: 128324 '<|place▁holder▁no▁324|>' is not marked as EOG llm_load_vocab: control token: 128768 '<|place▁holder▁no▁768|>' is not marked as EOG llm_load_vocab: control token: 128540 '<|place▁holder▁no▁540|>' is not marked as EOG llm_load_vocab: control token: 128423 '<|place▁holder▁no▁423|>' is not marked as EOG llm_load_vocab: control token: 128107 '<|place▁holder▁no▁107|>' is not marked as EOG llm_load_vocab: control token: 128143 '<|place▁holder▁no▁143|>' is not marked as EOG llm_load_vocab: control token: 128421 '<|place▁holder▁no▁421|>' is not marked as EOG llm_load_vocab: control token: 128276 '<|place▁holder▁no▁276|>' is not marked as EOG llm_load_vocab: control token: 128446 '<|place▁holder▁no▁446|>' is not marked as EOG llm_load_vocab: control token: 128773 '<|place▁holder▁no▁773|>' is not marked as EOG llm_load_vocab: control token: 128163 '<|place▁holder▁no▁163|>' is not marked as EOG llm_load_vocab: control token: 128042 '<|place▁holder▁no▁42|>' is not marked as EOG llm_load_vocab: control token: 128157 '<|place▁holder▁no▁157|>' is not marked as EOG llm_load_vocab: control token: 128577 '<|place▁holder▁no▁577|>' is not marked as EOG llm_load_vocab: control token: 128073 '<|place▁holder▁no▁73|>' is not marked as EOG llm_load_vocab: control token: 128386 '<|place▁holder▁no▁386|>' is not marked as EOG llm_load_vocab: control token: 128456 '<|place▁holder▁no▁456|>' is not marked as EOG llm_load_vocab: control token: 128096 '<|place▁holder▁no▁96|>' is not marked as EOG llm_load_vocab: control token: 128214 '<|place▁holder▁no▁214|>' is not marked as EOG llm_load_vocab: control token: 128160 '<|place▁holder▁no▁160|>' is not marked as EOG llm_load_vocab: control token: 128663 '<|place▁holder▁no▁663|>' is not marked as EOG llm_load_vocab: control token: 128608 '<|place▁holder▁no▁608|>' is not marked as EOG llm_load_vocab: control token: 128285 '<|place▁holder▁no▁285|>' is not marked as EOG llm_load_vocab: control token: 128216 '<|place▁holder▁no▁216|>' is not marked as EOG llm_load_vocab: control token: 128029 '<|place▁holder▁no▁29|>' is not marked as EOG llm_load_vocab: control token: 128094 '<|place▁holder▁no▁94|>' is not marked as EOG llm_load_vocab: control token: 128511 '<|place▁holder▁no▁511|>' is not marked as EOG llm_load_vocab: control token: 128018 '<|place▁holder▁no▁18|>' is not marked as EOG llm_load_vocab: control token: 128753 '<|place▁holder▁no▁753|>' is not marked as EOG llm_load_vocab: control token: 128676 '<|place▁holder▁no▁676|>' is not marked as EOG llm_load_vocab: control token: 128752 '<|place▁holder▁no▁752|>' is not marked as EOG llm_load_vocab: control token: 128070 '<|place▁holder▁no▁70|>' is not marked as EOG llm_load_vocab: control token: 128145 '<|place▁holder▁no▁145|>' is not marked 
as EOG llm_load_vocab: control token: 128554 '<|place▁holder▁no▁554|>' is not marked as EOG llm_load_vocab: control token: 128345 '<|place▁holder▁no▁345|>' is not marked as EOG llm_load_vocab: control token: 128223 '<|place▁holder▁no▁223|>' is not marked as EOG llm_load_vocab: control token: 128231 '<|place▁holder▁no▁231|>' is not marked as EOG llm_load_vocab: control token: 128777 '<|place▁holder▁no▁777|>' is not marked as EOG llm_load_vocab: control token: 128635 '<|place▁holder▁no▁635|>' is not marked as EOG llm_load_vocab: control token: 128708 '<|place▁holder▁no▁708|>' is not marked as EOG llm_load_vocab: control token: 128735 '<|place▁holder▁no▁735|>' is not marked as EOG llm_load_vocab: control token: 128776 '<|place▁holder▁no▁776|>' is not marked as EOG llm_load_vocab: control token: 128112 '<|place▁holder▁no▁112|>' is not marked as EOG llm_load_vocab: control token: 128301 '<|place▁holder▁no▁301|>' is not marked as EOG llm_load_vocab: control token: 128675 '<|place▁holder▁no▁675|>' is not marked as EOG llm_load_vocab: control token: 128518 '<|place▁holder▁no▁518|>' is not marked as EOG llm_load_vocab: control token: 128162 '<|place▁holder▁no▁162|>' is not marked as EOG llm_load_vocab: control token: 128767 '<|place▁holder▁no▁767|>' is not marked as EOG llm_load_vocab: control token: 128288 '<|place▁holder▁no▁288|>' is not marked as EOG llm_load_vocab: control token: 128493 '<|place▁holder▁no▁493|>' is not marked as EOG llm_load_vocab: control token: 128161 '<|place▁holder▁no▁161|>' is not marked as EOG llm_load_vocab: control token: 128354 '<|place▁holder▁no▁354|>' is not marked as EOG llm_load_vocab: control token: 128613 '<|place▁holder▁no▁613|>' is not marked as EOG llm_load_vocab: control token: 128230 '<|place▁holder▁no▁230|>' is not marked as EOG llm_load_vocab: control token: 128133 '<|place▁holder▁no▁133|>' is not marked as EOG llm_load_vocab: control token: 128307 '<|place▁holder▁no▁307|>' is not marked as EOG llm_load_vocab: control token: 128599 '<|place▁holder▁no▁599|>' is not marked as EOG llm_load_vocab: control token: 128330 '<|place▁holder▁no▁330|>' is not marked as EOG llm_load_vocab: control token: 128424 '<|place▁holder▁no▁424|>' is not marked as EOG llm_load_vocab: control token: 128336 '<|place▁holder▁no▁336|>' is not marked as EOG llm_load_vocab: control token: 128464 '<|place▁holder▁no▁464|>' is not marked as EOG llm_load_vocab: control token: 128126 '<|place▁holder▁no▁126|>' is not marked as EOG llm_load_vocab: control token: 128807 '<|tool▁calls▁end|>' is not marked as EOG llm_load_vocab: control token: 128245 '<|place▁holder▁no▁245|>' is not marked as EOG llm_load_vocab: control token: 128502 '<|place▁holder▁no▁502|>' is not marked as EOG llm_load_vocab: control token: 128459 '<|place▁holder▁no▁459|>' is not marked as EOG llm_load_vocab: control token: 128040 '<|place▁holder▁no▁40|>' is not marked as EOG llm_load_vocab: control token: 128039 '<|place▁holder▁no▁39|>' is not marked as EOG llm_load_vocab: control token: 128693 '<|place▁holder▁no▁693|>' is not marked as EOG llm_load_vocab: control token: 128645 '<|place▁holder▁no▁645|>' is not marked as EOG llm_load_vocab: control token: 128719 '<|place▁holder▁no▁719|>' is not marked as EOG llm_load_vocab: control token: 128592 '<|place▁holder▁no▁592|>' is not marked as EOG llm_load_vocab: control token: 128078 '<|place▁holder▁no▁78|>' is not marked as EOG llm_load_vocab: control token: 128779 '<|place▁holder▁no▁779|>' is not marked as EOG llm_load_vocab: control token: 128092 '<|place▁holder▁no▁92|>' is not 
marked as EOG llm_load_vocab: control token: 128280 '<|place▁holder▁no▁280|>' is not marked as EOG llm_load_vocab: control token: 128035 '<|place▁holder▁no▁35|>' is not marked as EOG llm_load_vocab: control token: 128684 '<|place▁holder▁no▁684|>' is not marked as EOG llm_load_vocab: control token: 128650 '<|place▁holder▁no▁650|>' is not marked as EOG llm_load_vocab: control token: 128205 '<|place▁holder▁no▁205|>' is not marked as EOG llm_load_vocab: control token: 128341 '<|place▁holder▁no▁341|>' is not marked as EOG llm_load_vocab: control token: 128281 '<|place▁holder▁no▁281|>' is not marked as EOG llm_load_vocab: control token: 128680 '<|place▁holder▁no▁680|>' is not marked as EOG llm_load_vocab: control token: 128199 '<|place▁holder▁no▁199|>' is not marked as EOG llm_load_vocab: control token: 128805 '<|EOT|>' is not marked as EOG llm_load_vocab: control token: 128351 '<|place▁holder▁no▁351|>' is not marked as EOG llm_load_vocab: control token: 128506 '<|place▁holder▁no▁506|>' is not marked as EOG llm_load_vocab: control token: 128612 '<|place▁holder▁no▁612|>' is not marked as EOG llm_load_vocab: control token: 128138 '<|place▁holder▁no▁138|>' is not marked as EOG llm_load_vocab: control token: 128479 '<|place▁holder▁no▁479|>' is not marked as EOG llm_load_vocab: control token: 128428 '<|place▁holder▁no▁428|>' is not marked as EOG llm_load_vocab: control token: 128744 '<|place▁holder▁no▁744|>' is not marked as EOG llm_load_vocab: control token: 128005 '<|place▁holder▁no▁5|>' is not marked as EOG llm_load_vocab: control token: 128711 '<|place▁holder▁no▁711|>' is not marked as EOG llm_load_vocab: control token: 128757 '<|place▁holder▁no▁757|>' is not marked as EOG llm_load_vocab: control token: 128394 '<|place▁holder▁no▁394|>' is not marked as EOG llm_load_vocab: control token: 128203 '<|place▁holder▁no▁203|>' is not marked as EOG llm_load_vocab: control token: 128812 '<|tool▁output▁begin|>' is not marked as EOG llm_load_vocab: control token: 128403 '<|place▁holder▁no▁403|>' is not marked as EOG llm_load_vocab: control token: 128388 '<|place▁holder▁no▁388|>' is not marked as EOG llm_load_vocab: control token: 128570 '<|place▁holder▁no▁570|>' is not marked as EOG llm_load_vocab: control token: 128815 '<|PAD▁TOKEN|>' is not marked as EOG llm_load_vocab: control token: 128766 '<|place▁holder▁no▁766|>' is not marked as EOG llm_load_vocab: control token: 128412 '<|place▁holder▁no▁412|>' is not marked as EOG llm_load_vocab: control token: 128619 '<|place▁holder▁no▁619|>' is not marked as EOG llm_load_vocab: control token: 128409 '<|place▁holder▁no▁409|>' is not marked as EOG llm_load_vocab: control token: 128108 '<|place▁holder▁no▁108|>' is not marked as EOG llm_load_vocab: control token: 128328 '<|place▁holder▁no▁328|>' is not marked as EOG llm_load_vocab: control token: 128477 '<|place▁holder▁no▁477|>' is not marked as EOG llm_load_vocab: control token: 128728 '<|place▁holder▁no▁728|>' is not marked as EOG llm_load_vocab: control token: 128278 '<|place▁holder▁no▁278|>' is not marked as EOG llm_load_vocab: control token: 128111 '<|place▁holder▁no▁111|>' is not marked as EOG llm_load_vocab: control token: 128725 '<|place▁holder▁no▁725|>' is not marked as EOG llm_load_vocab: control token: 128204 '<|place▁holder▁no▁204|>' is not marked as EOG llm_load_vocab: control token: 128561 '<|place▁holder▁no▁561|>' is not marked as EOG llm_load_vocab: control token: 128274 '<|place▁holder▁no▁274|>' is not marked as EOG llm_load_vocab: control token: 128023 '<|place▁holder▁no▁23|>' is not marked as EOG 
llm_load_vocab: control token: 128485 '<|place▁holder▁no▁485|>' is not marked as EOG llm_load_vocab: control token: 128389 '<|place▁holder▁no▁389|>' is not marked as EOG llm_load_vocab: control token: 128367 '<|place▁holder▁no▁367|>' is not marked as EOG llm_load_vocab: control token: 128781 '<|place▁holder▁no▁781|>' is not marked as EOG llm_load_vocab: control token: 128045 '<|place▁holder▁no▁45|>' is not marked as EOG llm_load_vocab: control token: 128467 '<|place▁holder▁no▁467|>' is not marked as EOG llm_load_vocab: control token: 128182 '<|place▁holder▁no▁182|>' is not marked as EOG llm_load_vocab: control token: 128565 '<|place▁holder▁no▁565|>' is not marked as EOG llm_load_vocab: control token: 128741 '<|place▁holder▁no▁741|>' is not marked as EOG llm_load_vocab: control token: 128337 '<|place▁holder▁no▁337|>' is not marked as EOG llm_load_vocab: control token: 128004 '<|place▁holder▁no▁4|>' is not marked as EOG llm_load_vocab: control token: 128482 '<|place▁holder▁no▁482|>' is not marked as EOG llm_load_vocab: control token: 128335 '<|place▁holder▁no▁335|>' is not marked as EOG llm_load_vocab: control token: 128129 '<|place▁holder▁no▁129|>' is not marked as EOG llm_load_vocab: control token: 128495 '<|place▁holder▁no▁495|>' is not marked as EOG llm_load_vocab: control token: 128545 '<|place▁holder▁no▁545|>' is not marked as EOG llm_load_vocab: control token: 128168 '<|place▁holder▁no▁168|>' is not marked as EOG llm_load_vocab: control token: 128780 '<|place▁holder▁no▁780|>' is not marked as EOG llm_load_vocab: control token: 128240 '<|place▁holder▁no▁240|>' is not marked as EOG llm_load_vocab: control token: 128186 '<|place▁holder▁no▁186|>' is not marked as EOG llm_load_vocab: control token: 128640 '<|place▁holder▁no▁640|>' is not marked as EOG llm_load_vocab: control token: 128264 '<|place▁holder▁no▁264|>' is not marked as EOG llm_load_vocab: control token: 128021 '<|place▁holder▁no▁21|>' is not marked as EOG llm_load_vocab: control token: 128571 '<|place▁holder▁no▁571|>' is not marked as EOG llm_load_vocab: control token: 128193 '<|place▁holder▁no▁193|>' is not marked as EOG llm_load_vocab: control token: 128128 '<|place▁holder▁no▁128|>' is not marked as EOG llm_load_vocab: control token: 128695 '<|place▁holder▁no▁695|>' is not marked as EOG llm_load_vocab: control token: 128703 '<|place▁holder▁no▁703|>' is not marked as EOG llm_load_vocab: control token: 128061 '<|place▁holder▁no▁61|>' is not marked as EOG llm_load_vocab: control token: 128611 '<|place▁holder▁no▁611|>' is not marked as EOG llm_load_vocab: control token: 128246 '<|place▁holder▁no▁246|>' is not marked as EOG llm_load_vocab: control token: 128077 '<|place▁holder▁no▁77|>' is not marked as EOG llm_load_vocab: control token: 128217 '<|place▁holder▁no▁217|>' is not marked as EOG llm_load_vocab: control token: 128380 '<|place▁holder▁no▁380|>' is not marked as EOG llm_load_vocab: control token: 128567 '<|place▁holder▁no▁567|>' is not marked as EOG llm_load_vocab: control token: 128365 '<|place▁holder▁no▁365|>' is not marked as EOG llm_load_vocab: control token: 128793 '<|place▁holder▁no▁793|>' is not marked as EOG llm_load_vocab: control token: 128547 '<|place▁holder▁no▁547|>' is not marked as EOG llm_load_vocab: control token: 2 '<|▁pad▁|>' is not marked as EOG llm_load_vocab: control token: 128272 '<|place▁holder▁no▁272|>' is not marked as EOG llm_load_vocab: control token: 128633 '<|place▁holder▁no▁633|>' is not marked as EOG llm_load_vocab: control token: 128580 '<|place▁holder▁no▁580|>' is not marked as EOG 
llm_load_vocab: control token: 128677 '<|place▁holder▁no▁677|>' is not marked as EOG llm_load_vocab: control token: 128255 '<|place▁holder▁no▁255|>' is not marked as EOG llm_load_vocab: control token: 128434 '<|place▁holder▁no▁434|>' is not marked as EOG llm_load_vocab: control token: 128647 '<|place▁holder▁no▁647|>' is not marked as EOG llm_load_vocab: control token: 128656 '<|place▁holder▁no▁656|>' is not marked as EOG llm_load_vocab: control token: 128179 '<|place▁holder▁no▁179|>' is not marked as EOG llm_load_vocab: control token: 128270 '<|place▁holder▁no▁270|>' is not marked as EOG llm_load_vocab: control token: 128342 '<|place▁holder▁no▁342|>' is not marked as EOG llm_load_vocab: control token: 128305 '<|place▁holder▁no▁305|>' is not marked as EOG llm_load_vocab: control token: 128299 '<|place▁holder▁no▁299|>' is not marked as EOG llm_load_vocab: control token: 128431 '<|place▁holder▁no▁431|>' is not marked as EOG llm_load_vocab: control token: 128154 '<|place▁holder▁no▁154|>' is not marked as EOG llm_load_vocab: control token: 128371 '<|place▁holder▁no▁371|>' is not marked as EOG llm_load_vocab: control token: 128244 '<|place▁holder▁no▁244|>' is not marked as EOG llm_load_vocab: control token: 128585 '<|place▁holder▁no▁585|>' is not marked as EOG llm_load_vocab: control token: 128000 '<|place▁holder▁no▁0|>' is not marked as EOG llm_load_vocab: control token: 128669 '<|place▁holder▁no▁669|>' is not marked as EOG llm_load_vocab: control token: 128648 '<|place▁holder▁no▁648|>' is not marked as EOG llm_load_vocab: control token: 128103 '<|place▁holder▁no▁103|>' is not marked as EOG llm_load_vocab: control token: 128737 '<|place▁holder▁no▁737|>' is not marked as EOG llm_load_vocab: control token: 128667 '<|place▁holder▁no▁667|>' is not marked as EOG llm_load_vocab: control token: 128356 '<|place▁holder▁no▁356|>' is not marked as EOG llm_load_vocab: control token: 128261 '<|place▁holder▁no▁261|>' is not marked as EOG llm_load_vocab: control token: 128503 '<|place▁holder▁no▁503|>' is not marked as EOG llm_load_vocab: control token: 128326 '<|place▁holder▁no▁326|>' is not marked as EOG llm_load_vocab: control token: 128671 '<|place▁holder▁no▁671|>' is not marked as EOG llm_load_vocab: control token: 128637 '<|place▁holder▁no▁637|>' is not marked as EOG llm_load_vocab: control token: 128148 '<|place▁holder▁no▁148|>' is not marked as EOG llm_load_vocab: control token: 128229 '<|place▁holder▁no▁229|>' is not marked as EOG llm_load_vocab: control token: 128556 '<|place▁holder▁no▁556|>' is not marked as EOG llm_load_vocab: control token: 128438 '<|place▁holder▁no▁438|>' is not marked as EOG llm_load_vocab: control token: 128315 '<|place▁holder▁no▁315|>' is not marked as EOG llm_load_vocab: control token: 128507 '<|place▁holder▁no▁507|>' is not marked as EOG llm_load_vocab: control token: 128368 '<|place▁holder▁no▁368|>' is not marked as EOG llm_load_vocab: control token: 128814 '<|tool▁sep|>' is not marked as EOG llm_load_vocab: control token: 128117 '<|place▁holder▁no▁117|>' is not marked as EOG llm_load_vocab: control token: 128277 '<|place▁holder▁no▁277|>' is not marked as EOG llm_load_vocab: control token: 128660 '<|place▁holder▁no▁660|>' is not marked as EOG llm_load_vocab: control token: 128310 '<|place▁holder▁no▁310|>' is not marked as EOG llm_load_vocab: control token: 128707 '<|place▁holder▁no▁707|>' is not marked as EOG llm_load_vocab: control token: 128433 '<|place▁holder▁no▁433|>' is not marked as EOG llm_load_vocab: control token: 128177 '<|place▁holder▁no▁177|>' is not marked as 
EOG llm_load_vocab: control token: 128500 '<|place▁holder▁no▁500|>' is not marked as EOG llm_load_vocab: control token: 128437 '<|place▁holder▁no▁437|>' is not marked as EOG llm_load_vocab: control token: 128031 '<|place▁holder▁no▁31|>' is not marked as EOG llm_load_vocab: control token: 128698 '<|place▁holder▁no▁698|>' is not marked as EOG llm_load_vocab: control token: 128254 '<|place▁holder▁no▁254|>' is not marked as EOG llm_load_vocab: control token: 128445 '<|place▁holder▁no▁445|>' is not marked as EOG llm_load_vocab: control token: 128526 '<|place▁holder▁no▁526|>' is not marked as EOG llm_load_vocab: control token: 128011 '<|place▁holder▁no▁11|>' is not marked as EOG llm_load_vocab: control token: 128304 '<|place▁holder▁no▁304|>' is not marked as EOG llm_load_vocab: control token: 128586 '<|place▁holder▁no▁586|>' is not marked as EOG llm_load_vocab: control token: 128454 '<|place▁holder▁no▁454|>' is not marked as EOG llm_load_vocab: control token: 128189 '<|place▁holder▁no▁189|>' is not marked as EOG llm_load_vocab: control token: 128679 '<|place▁holder▁no▁679|>' is not marked as EOG llm_load_vocab: control token: 128062 '<|place▁holder▁no▁62|>' is not marked as EOG llm_load_vocab: control token: 128318 '<|place▁holder▁no▁318|>' is not marked as EOG llm_load_vocab: control token: 128455 '<|place▁holder▁no▁455|>' is not marked as EOG llm_load_vocab: control token: 128705 '<|place▁holder▁no▁705|>' is not marked as EOG llm_load_vocab: control token: 128747 '<|place▁holder▁no▁747|>' is not marked as EOG llm_load_vocab: control token: 128620 '<|place▁holder▁no▁620|>' is not marked as EOG llm_load_vocab: control token: 128289 '<|place▁holder▁no▁289|>' is not marked as EOG llm_load_vocab: control token: 128802 '<|fim▁end|>' is not marked as EOG llm_load_vocab: control token: 128331 '<|place▁holder▁no▁331|>' is not marked as EOG llm_load_vocab: control token: 128610 '<|place▁holder▁no▁610|>' is not marked as EOG llm_load_vocab: control token: 128025 '<|place▁holder▁no▁25|>' is not marked as EOG llm_load_vocab: control token: 128568 '<|place▁holder▁no▁568|>' is not marked as EOG llm_load_vocab: control token: 128390 '<|place▁holder▁no▁390|>' is not marked as EOG llm_load_vocab: control token: 128066 '<|place▁holder▁no▁66|>' is not marked as EOG llm_load_vocab: control token: 128350 '<|place▁holder▁no▁350|>' is not marked as EOG llm_load_vocab: control token: 128153 '<|place▁holder▁no▁153|>' is not marked as EOG llm_load_vocab: control token: 128629 '<|place▁holder▁no▁629|>' is not marked as EOG llm_load_vocab: control token: 128722 '<|place▁holder▁no▁722|>' is not marked as EOG llm_load_vocab: control token: 128191 '<|place▁holder▁no▁191|>' is not marked as EOG llm_load_vocab: control token: 128607 '<|place▁holder▁no▁607|>' is not marked as EOG llm_load_vocab: control token: 128598 '<|place▁holder▁no▁598|>' is not marked as EOG llm_load_vocab: control token: 128253 '<|place▁holder▁no▁253|>' is not marked as EOG llm_load_vocab: control token: 128100 '<|place▁holder▁no▁100|>' is not marked as EOG llm_load_vocab: control token: 128121 '<|place▁holder▁no▁121|>' is not marked as EOG llm_load_vocab: control token: 128302 '<|place▁holder▁no▁302|>' is not marked as EOG llm_load_vocab: control token: 128175 '<|place▁holder▁no▁175|>' is not marked as EOG llm_load_vocab: control token: 128661 '<|place▁holder▁no▁661|>' is not marked as EOG llm_load_vocab: control token: 128519 '<|place▁holder▁no▁519|>' is not marked as EOG llm_load_vocab: control token: 128250 '<|place▁holder▁no▁250|>' is not marked as 
EOG llm_load_vocab: control token: 128165 '<|place▁holder▁no▁165|>' is not marked as EOG llm_load_vocab: control token: 128788 '<|place▁holder▁no▁788|>' is not marked as EOG llm_load_vocab: control token: 128489 '<|place▁holder▁no▁489|>' is not marked as EOG llm_load_vocab: control token: 128604 '<|place▁holder▁no▁604|>' is not marked as EOG llm_load_vocab: control token: 128093 '<|place▁holder▁no▁93|>' is not marked as EOG llm_load_vocab: control token: 128730 '<|place▁holder▁no▁730|>' is not marked as EOG llm_load_vocab: control token: 0 '<|begin▁of▁sentence|>' is not marked as EOG llm_load_vocab: control token: 128370 '<|place▁holder▁no▁370|>' is not marked as EOG llm_load_vocab: control token: 128164 '<|place▁holder▁no▁164|>' is not marked as EOG llm_load_vocab: control token: 128007 '<|place▁holder▁no▁7|>' is not marked as EOG llm_load_vocab: control token: 128083 '<|place▁holder▁no▁83|>' is not marked as EOG llm_load_vocab: control token: 128090 '<|place▁holder▁no▁90|>' is not marked as EOG llm_load_vocab: control token: 128334 '<|place▁holder▁no▁334|>' is not marked as EOG llm_load_vocab: control token: 128312 '<|place▁holder▁no▁312|>' is not marked as EOG llm_load_vocab: control token: 128016 '<|place▁holder▁no▁16|>' is not marked as EOG llm_load_vocab: control token: 128256 '<|place▁holder▁no▁256|>' is not marked as EOG llm_load_vocab: control token: 128499 '<|place▁holder▁no▁499|>' is not marked as EOG llm_load_vocab: control token: 128427 '<|place▁holder▁no▁427|>' is not marked as EOG llm_load_vocab: control token: 128426 '<|place▁holder▁no▁426|>' is not marked as EOG llm_load_vocab: control token: 128786 '<|place▁holder▁no▁786|>' is not marked as EOG llm_load_vocab: control token: 128015 '<|place▁holder▁no▁15|>' is not marked as EOG llm_load_vocab: control token: 128379 '<|place▁holder▁no▁379|>' is not marked as EOG llm_load_vocab: control token: 128638 '<|place▁holder▁no▁638|>' is not marked as EOG llm_load_vocab: control token: 128286 '<|place▁holder▁no▁286|>' is not marked as EOG llm_load_vocab: control token: 128247 '<|place▁holder▁no▁247|>' is not marked as EOG llm_load_vocab: control token: 128813 '<|tool▁output▁end|>' is not marked as EOG llm_load_vocab: control token: 1 '<|end▁of▁sentence|>' is not marked as EOG llm_load_vocab: control token: 128800 '<|fim▁hole|>' is not marked as EOG llm_load_vocab: control token: 128027 '<|place▁holder▁no▁27|>' is not marked as EOG llm_load_vocab: control token: 128749 '<|place▁holder▁no▁749|>' is not marked as EOG llm_load_vocab: control token: 128366 '<|place▁holder▁no▁366|>' is not marked as EOG llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect llm_load_vocab: special tokens cache size = 819 llm_load_vocab: token to piece cache size = 0.8223 MB llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = deepseek2 llm_load_print_meta: vocab type = BPE llm_load_print_meta: n_vocab = 129280 llm_load_print_meta: n_merges = 127741 llm_load_print_meta: vocab_only = 0 llm_load_print_meta: n_ctx_train = 163840 llm_load_print_meta: n_embd = 7168 llm_load_print_meta: n_layer = 61 llm_load_print_meta: n_head = 128 llm_load_print_meta: n_head_kv = 128 llm_load_print_meta: n_rot = 64 llm_load_print_meta: n_swa = 0 llm_load_print_meta: n_embd_head_k = 192 llm_load_print_meta: n_embd_head_v = 128 llm_load_print_meta: n_gqa = 1 llm_load_print_meta: n_embd_k_gqa = 24576 llm_load_print_meta: n_embd_v_gqa = 16384 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps 
= 1.0e-06 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: f_logit_scale = 0.0e+00 llm_load_print_meta: n_ff = 18432 llm_load_print_meta: n_expert = 256 llm_load_print_meta: n_expert_used = 8 llm_load_print_meta: causal attn = 1 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 0 llm_load_print_meta: rope scaling = yarn llm_load_print_meta: freq_base_train = 10000.0 llm_load_print_meta: freq_scale_train = 0.025 llm_load_print_meta: n_ctx_orig_yarn = 4096 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: ssm_d_conv = 0 llm_load_print_meta: ssm_d_inner = 0 llm_load_print_meta: ssm_d_state = 0 llm_load_print_meta: ssm_dt_rank = 0 llm_load_print_meta: ssm_dt_b_c_rms = 0 llm_load_print_meta: model type = 671B llm_load_print_meta: model ftype = IQ1_S - 1.5625 bpw llm_load_print_meta: model params = 671.03 B llm_load_print_meta: model size = 130.60 GiB (1.67 BPW) llm_load_print_meta: general.name = DeepSeek R1 BF16 llm_load_print_meta: BOS token = 0 '<|begin▁of▁sentence|>' llm_load_print_meta: EOS token = 1 '<|end▁of▁sentence|>' llm_load_print_meta: EOT token = 1 '<|end▁of▁sentence|>' llm_load_print_meta: PAD token = 128815 '<|PAD▁TOKEN|>' llm_load_print_meta: LF token = 131 'Ä' llm_load_print_meta: FIM PRE token = 128801 '<|fim▁begin|>' llm_load_print_meta: FIM SUF token = 128800 '<|fim▁hole|>' llm_load_print_meta: FIM MID token = 128802 '<|fim▁end|>' llm_load_print_meta: EOG token = 1 '<|end▁of▁sentence|>' llm_load_print_meta: max token length = 256 llm_load_print_meta: n_layer_dense_lead = 3 llm_load_print_meta: n_lora_q = 1536 llm_load_print_meta: n_lora_kv = 512 llm_load_print_meta: n_ff_exp = 2048 llm_load_print_meta: n_expert_shared = 1 llm_load_print_meta: expert_weights_scale = 2.5 llm_load_print_meta: expert_weights_norm = 1 llm_load_print_meta: expert_gating_func = sigmoid llm_load_print_meta: rope_yarn_log_mul = 0.1000 llm_load_tensors: tensor 'token_embd.weight' (q4_K) (and 1024 others) cannot be used with preferred buffer type CPU_AARCH64, using CPU instead llm_load_tensors: offloading 0 repeating layers to GPU llm_load_tensors: offloaded 0/62 layers to GPU llm_load_tensors: CPU model buffer size = 133730.06 MiB load_all_data: no device found for buffer type CPU for async uploads time=2025-01-29T06:36:08.616-05:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model" time=2025-01-29T06:36:08.868-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.01" time=2025-01-29T06:36:09.119-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.02" time=2025-01-29T06:36:09.370-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.04" time=2025-01-29T06:36:09.622-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.04" time=2025-01-29T06:36:09.874-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.05" time=2025-01-29T06:36:10.126-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.07" time=2025-01-29T06:36:10.377-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.08" time=2025-01-29T06:36:10.628-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.09" time=2025-01-29T06:36:10.879-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.10" time=2025-01-29T06:36:11.131-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.12" time=2025-01-29T06:36:11.381-05:00 level=DEBUG source=server.go:600 msg="model load 
progress 0.13" time=2025-01-29T06:36:11.633-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.14" time=2025-01-29T06:36:11.884-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.15" time=2025-01-29T06:36:12.135-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.16" time=2025-01-29T06:36:12.386-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.17" time=2025-01-29T06:36:12.637-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.18" time=2025-01-29T06:36:12.888-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.20" time=2025-01-29T06:36:13.138-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.21" time=2025-01-29T06:36:13.389-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.21" time=2025-01-29T06:36:13.640-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.22" time=2025-01-29T06:36:13.890-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.23" time=2025-01-29T06:36:14.141-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.24" time=2025-01-29T06:36:14.392-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.25" time=2025-01-29T06:36:14.643-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.25" time=2025-01-29T06:36:14.894-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.26" time=2025-01-29T06:36:15.145-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.26" time=2025-01-29T06:36:15.396-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.27" time=2025-01-29T06:36:15.648-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.27" time=2025-01-29T06:36:15.899-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.28" time=2025-01-29T06:36:16.150-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.28" time=2025-01-29T06:36:16.401-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.30" time=2025-01-29T06:36:16.652-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.30" time=2025-01-29T06:36:16.902-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.30" time=2025-01-29T06:36:17.153-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.31" time=2025-01-29T06:36:17.404-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.31" time=2025-01-29T06:36:17.655-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.32" time=2025-01-29T06:36:17.906-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.33" time=2025-01-29T06:36:18.157-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.34" time=2025-01-29T06:36:18.408-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.34" time=2025-01-29T06:36:18.659-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.35" time=2025-01-29T06:36:18.910-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.35" time=2025-01-29T06:36:19.161-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.36" time=2025-01-29T06:36:19.413-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.36" time=2025-01-29T06:36:19.663-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.37" time=2025-01-29T06:36:19.914-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.38" time=2025-01-29T06:36:20.165-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.39" time=2025-01-29T06:36:20.416-05:00 level=DEBUG source=server.go:600 
msg="model load progress 0.39" time=2025-01-29T06:36:20.667-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.40" time=2025-01-29T06:36:20.919-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.40" time=2025-01-29T06:36:21.170-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.41" time=2025-01-29T06:36:21.421-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.41" time=2025-01-29T06:36:21.672-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.42" time=2025-01-29T06:36:21.924-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.42" time=2025-01-29T06:36:22.175-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.43" time=2025-01-29T06:36:22.426-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.44" time=2025-01-29T06:36:22.677-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.44" time=2025-01-29T06:36:22.928-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.45" time=2025-01-29T06:36:23.180-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.45" time=2025-01-29T06:36:23.432-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.46" time=2025-01-29T06:36:23.934-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.46" time=2025-01-29T06:36:24.185-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.47" time=2025-01-29T06:36:24.438-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.47" time=2025-01-29T06:36:24.690-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.48" time=2025-01-29T06:36:24.940-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.49" time=2025-01-29T06:36:25.192-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.49" time=2025-01-29T06:36:25.447-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.50" time=2025-01-29T06:36:25.949-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.50" time=2025-01-29T06:36:26.200-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.51" time=2025-01-29T06:36:26.451-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.51" time=2025-01-29T06:36:26.701-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.52" time=2025-01-29T06:36:26.951-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.52" time=2025-01-29T06:36:27.206-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.53" time=2025-01-29T06:36:27.456-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.54" time=2025-01-29T06:36:27.707-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.54" time=2025-01-29T06:36:27.958-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.55" time=2025-01-29T06:36:28.208-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.55" time=2025-01-29T06:36:28.459-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.56" time=2025-01-29T06:36:28.722-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.56" time=2025-01-29T06:36:29.224-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.57" time=2025-01-29T06:36:29.476-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.58" time=2025-01-29T06:36:29.727-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.58" time=2025-01-29T06:36:29.978-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.59" time=2025-01-29T06:36:30.230-05:00 level=DEBUG 
source=server.go:600 msg="model load progress 0.59" time=2025-01-29T06:36:30.481-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.60" time=2025-01-29T06:36:30.732-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.60" time=2025-01-29T06:36:30.983-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.61" time=2025-01-29T06:36:31.484-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.62" time=2025-01-29T06:36:31.986-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.62" time=2025-01-29T06:36:32.238-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.63" time=2025-01-29T06:36:32.489-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.64" time=2025-01-29T06:36:32.740-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.64" time=2025-01-29T06:36:32.992-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.65" time=2025-01-29T06:36:33.243-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.65" time=2025-01-29T06:36:33.744-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.66" time=2025-01-29T06:36:33.995-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.67" time=2025-01-29T06:36:34.246-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.67" time=2025-01-29T06:36:34.497-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.68" time=2025-01-29T06:36:34.748-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.68" time=2025-01-29T06:36:34.998-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.69" time=2025-01-29T06:36:35.249-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.69" time=2025-01-29T06:36:35.501-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.70" time=2025-01-29T06:36:35.752-05:00 level=DEBUG source=server.go:600 msg="model load progress 0.70" time=2025-01-29T06:36:36.204-05:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server not responding" time=2025-01-29T06:36:38.264-05:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: signal: killed" time=2025-01-29T06:36:38.265-05:00 level=DEBUG source=sched.go:458 msg="triggering expiration for failed load" model=/Users/fserb/.ollama/models/blobs/sha256-a542caee8df72af41ad48d75b94adacb5fbc61856930460bd599d835400fb3b6 time=2025-01-29T06:36:38.265-05:00 level=DEBUG source=sched.go:360 msg="runner expired event received" modelPath=/Users/fserb/.ollama/models/blobs/sha256-a542caee8df72af41ad48d75b94adacb5fbc61856930460bd599d835400fb3b6 time=2025-01-29T06:36:38.265-05:00 level=DEBUG source=sched.go:375 msg="got lock to unload" modelPath=/Users/fserb/.ollama/models/blobs/sha256-a542caee8df72af41ad48d75b94adacb5fbc61856930460bd599d835400fb3b6 time=2025-01-29T06:36:38.265-05:00 level=DEBUG source=server.go:1079 msg="stopping llama server" time=2025-01-29T06:36:38.265-05:00 level=DEBUG source=sched.go:380 msg="runner released" modelPath=/Users/fserb/.ollama/models/blobs/sha256-a542caee8df72af41ad48d75b94adacb5fbc61856930460bd599d835400fb3b6 time=2025-01-29T06:36:38.265-05:00 level=DEBUG source=sched.go:384 msg="sending an unloaded event" modelPath=/Users/fserb/.ollama/models/blobs/sha256-a542caee8df72af41ad48d75b94adacb5fbc61856930460bd599d835400fb3b6 time=2025-01-29T06:36:38.265-05:00 level=DEBUG source=sched.go:308 msg="ignoring unload event with no pending requests" [GIN] 2025/01/29 - 06:36:38 
| 500 | 29.935871625s | 127.0.0.1 | POST "/api/generate" ``` </details>
Author
Owner

@rick-github commented on GitHub (Jan 29, 2025):

time=2025-01-29T06:36:08.359-05:00 level=INFO source=server.go:104 msg="system memory" total="64.0 GiB" free="50.1 GiB" free_swap="0 B"

time=2025-01-29T06:36:08.360-05:00 level=INFO source=memory.go:356 msg="offload to cpu" layers.requested=0 layers.model=62 layers.offload=0 layers.split="" memory.available="[50.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="172.7 GiB" memory.required.partial="0 B" memory.required.kv="38.1 GiB" memory.required.allocations="[990.7 MiB]" memory.weights.total="167.5 GiB" memory.weights.repeating="166.8 GiB" memory.weights.nonrepeating="725.0 MiB" memory.graph.full="2.2 GiB" memory.graph.partial="3.0 GiB"

The system has 50 GiB of free RAM and 0 B of free swap, and loading the model wants 172.7 GiB. I had a look at the code, and it turns out that macOS explicitly skips the calculation I mentioned earlier, because "Darwin has fully dynamic swap".

The runner didn't log anything about running out of memory, and the server just noted "llama runner process has terminated: signal: killed", i.e. the runner received a SIGKILL. This leads me to think an external actor killed the runner: the OS may have decided the runner was asking for too much memory, or it couldn't expand the dynamic swap fast enough, or some other kernel-level mechanism kicked in and terminated the process. These sorts of events are usually logged somewhere; on macOS they might show up in the Console.
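If it helps narrow that down, here are two quick checks on macOS. These are standard macOS tools rather than ollama commands, and the predicate string is only a guess at useful filter terms:

```
# how much swap macOS has actually grown (the server log above reports free_swap="0 B")
sysctl vm.swapusage

# search the unified log around the failure for memory-pressure / kill events
log show --last 1h --predicate 'eventMessage CONTAINS[c] "memorystatus" OR eventMessage CONTAINS[c] "kill"' | grep -i ollama
```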

Author
Owner

@fserb commented on GitHub (Jan 29, 2025):

Yeah, yeah. That's definitely true. It does get killed by the OS due to memory usage.

But isn't the issue that llama.cpp realizes that and doesn't try to load the whole model into RAM? And that it still uses the KV cache?

Author
Owner

@rick-github commented on GitHub (Jan 29, 2025):

ollama is loading the model with mmap disabled (no-mmap). Try:

curl localhost:11434/api/generate -d '{"model":"deepseek-r1:iq1_s","options":{"num_gpu":0,"use_mmap":true}}'
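For context, a generate request without a prompt (like the one above) just loads the model; to actually generate with the same options, something along these lines should work (the prompt text is arbitrary):

```
curl localhost:11434/api/generate -d '{
  "model": "deepseek-r1:iq1_s",
  "prompt": "Why is the sky blue?",
  "options": {"num_gpu": 0, "use_mmap": true}
}'
```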
Author
Owner

@teekuningas commented on GitHub (Jan 29, 2025):

Just to hop in with my 250G RAM:

> curl http://localhost:11434/api/generate -d '{"model":"DeepSeek-V3-Q2_K_L.gguf","options":{"num_gpu":0,"use_mmap":true}}'

{"error":"model requires more system memory (276.3 GiB) than is available (247.0 GiB)"}

This model and much bigger ones run fine with llama.cpp. I guess it memory-maps the .gguf file and does not load all the weights into RAM at once?

Just to ask: is it by design that ollama has this kind of hard memory limit, or am I just missing something? I really love ollama for its ability to juggle different models, and since llama.cpp is used underneath, it seems like ollama could use the same mmap mechanism llama.cpp relies on when RAM is limited.
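For reference, this is roughly how the same GGUF runs directly under llama.cpp with its default mmap behaviour. The binary name and flags are taken from recent llama.cpp builds (older builds call the binary main), so adjust to your install:

```
# mmap is on by default; -ngl 0 keeps every layer on the CPU
./llama-cli -m DeepSeek-V3-Q2_K_L.gguf -ngl 0 -c 8192 -p "Why is the sky blue?"
```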

"@ duttaoindril what outcome do your expect? You have way to little memory for this huge model. Running it with using swap works in theory, but inference will be slow to the extreme, completely unusable. You are wasting your (and everyone else's) time."

@neuhaus: generating with a model that does not fit into RAM is not useless. It might only manage ~2 tokens/s, but not all tasks are like code completion, where you need lightning speed. For some tasks you may be happy to wait a few minutes for a good result from a big model instead of getting a quick but worse result from a lesser model.

Author
Owner

@fserb commented on GitHub (Jan 29, 2025):

$ ollama create deepseek-r1:iq1_s
$ curl localhost:11434/api/generate -d '{"model":"deepseek-r1:iq1_s","options":{"num_gpu":0,"use_mmap":true}}'
{"model":"deepseek-r1:iq1_s","created_at":"2025-01-29T15:49:05.942618Z","response":"","done":true,"done_reason":"load"}

and I could see in the logs:

llama_kv_cache_init:        CPU KV buffer size = 39040.00 MiB
llama_new_context_with_model: KV self size  = 39040.00 MiB, K (f16): 23424.00 MiB, V (f16): 15616.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     2.08 MiB
llama_new_context_with_model:        CPU compute buffer size =  2218.01 MiB
llama_new_context_with_model: graph nodes  = 5025
llama_new_context_with_model: graph splits = 1
time=2025-01-29T10:49:05.942-05:00 level=INFO source=server.go:594 msg="llama runner started in 3.02 seconds"

After that, ollama run deepseek-r1:iq1_s worked (as in, it executed without crashing). mmap only seems to be used with num_gpu:0; is that WAI (working as intended)?
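For what it's worth, the 39040 MiB KV buffer in that log matches the model metadata from the load log, assuming an f16 cache (2 bytes per element) and an effective context of 8192 tokens: 61 layers × 8192 ctx × (24576 + 16384) elements × 2 bytes ≈ 40.9 GB = 39040 MiB, split into the K (23424 MiB) and V (15616 MiB) figures shown. That is also where the memory.required.kv="38.1 GiB" in the earlier server log comes from.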

Author
Owner

@rick-github commented on GitHub (Jan 29, 2025):

> The mmap only works for num_gpu:0, is that WAI?

As in, with num_gpu:1 the server uses mmap (i.e., no --no-mmap on the command line) but is still killed by the OS? I don't think that is WAI, but I'd have to check.

The way that ollama uses mmap has always bugged me and I've been meaning to go through the code for ages, but never got around to it. The arrival of these ginormous models has given me extra incentive.
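One way to confirm whether a given load used mmap is to look at the runner invocation in the server log (path per the ollama troubleshooting docs for macOS; the exact flag and message format may vary by version):

```
# the runner's full command line is logged when it starts; --no-mmap means mmap was disabled
grep -n "no-mmap" ~/.ollama/logs/server.log | tail -5
```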

Author
Owner

@fserb commented on GitHub (Jan 29, 2025):

It doesn't seem to use mmap at all. It goes back to llm_load_tensors: CPU model buffer size = 133730.06 MiB and something small on the GPU. Then starts loading and crashes the same way as before (at "model load progress 0.70").

Author
Owner

@fserb commented on GitHub (Jan 29, 2025):

Also, maybe I'm being weird, but the output of the model seems, hmmm, a bit off?

ollama run deepseek-r1:iq1_s
>>> hello
: hello.cpp
	clang++ -std=c++17 hello.cpp -o hello

clean:
	rm -f hello

or:

ollama run deepseek-r1:iq1_s
>>> how many letter 'r' are in the word 'strawberry'?
 - english

Answers: 2 Show answers Another question on English. The answer to How many letters in "Answer" riddle is Four. In 
this instance, the word "answer" contains four letters. Answer has six letters; we just need to count them and^C

Author
Owner

@rick-github commented on GitHub (Jan 29, 2025):

iq1_s is a lot of quantization, so I wouldn't be surprised by random output. However, the examples you give are indicative of a missing template: the prompt being sent to the model doesn't have the <|User|> and <|Assistant|> tokens that guide its output. Earlier you showed ollama create deepseek-r1:iq1_s, so it looks like you have a custom Modelfile. Does it have a TEMPLATE field, and if so, what's in it?

For comparison, I ran iq1_s out of swap and it worked fine, if slowly.

FROM /root/.ollama/models/blobs/sha256-a542caee8df72af41ad48d75b94adacb5fbc61856930460bd599d835400fb3b6
TEMPLATE """{{- if .System }}{{ .System }}{{ end }}
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1}}
{{- if eq .Role "user" }}<|User|>{{ .Content }}
{{- else if eq .Role "assistant" }}<|Assistant|>{{ .Content }}{{- if not $last }}<|end▁of▁sentence|>{{- end }}
{{- end }}
{{- if and $last (ne .Role "assistant") }}<|Assistant|>{{- end }}
{{- end }}"""
PARAMETER num_ctx 8192
PARAMETER num_gpu 4
PARAMETER stop <|begin▁of▁sentence|>
PARAMETER stop <|end▁of▁sentence|>
PARAMETER stop <|User|>
PARAMETER stop <|Assistant|>
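
If it helps, once the above is saved as a Modelfile, the template can be checked after creation with ollama show (a sketch; flag names per the current ollama CLI):

$ ollama create deepseek-r1:iq1_s -f Modelfile
$ ollama show deepseek-r1:iq1_s --template      # should print the TEMPLATE block above
$ ollama show deepseek-r1:iq1_s --parameters    # should list num_ctx and the stop tokens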
Author
Owner

@sharpe5 commented on GitHub (Feb 1, 2025):

Your best bet is to run DeepSeek R1 Dynamic 1.58-bit as it will fit into 128GB of RAM on a Mac.

This model was released 4 days ago. It selectively quantises some layers to 1.58 bit, producing a 131GB model, roughly an 80% reduction in size, while leaving some layers at 6 bit to avoid model collapse. You are probably having problems because the full model is too large to fit into RAM. Instructions to run on Mac:

Running on Mac / Apple devices
For Apple Metal devices, be careful of --n-gpu-layers. If you find the machine going out of memory, reduce it. For a 128GB unified memory machine, you should be able to offload 59 layers or so:

./llama.cpp/llama-cli \
    --model DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
    --cache-type-k q4_0 \
    --threads 16 \
    --prio 2 \
    --temp 0.6 \
    --ctx-size 8192 \
    --seed 3407 \
    --n-gpu-layers 59 \
    -no-cnv \
    --prompt "<|User|>Create a Flappy Bird game in Python.<|Assistant|>"

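If you don't already have the GGUF shards locally, one way to fetch just that quant (assuming the unsloth/DeepSeek-R1-GGUF repo layout referenced in the blog post) is:

$ pip install -U "huggingface_hub[cli]"
$ huggingface-cli download unsloth/DeepSeek-R1-GGUF \
    --include "DeepSeek-R1-UD-IQ1_S/*" \
    --local-dir DeepSeek-R1-GGUF
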
I just ran this same model using LM Studio on Windows 10 on an E5-2695v3 Xeon with 256GB RAM. There is a version of LM Studio for Mac as well. It worked nicely and used the advertised amount of RAM, running at 1.01 tokens/sec. Offloading 9 layers to my RTX 3090 with 24GB of VRAM increased this to 1.10 tokens/sec. I prefer the free version of ChatBox as a front end.

How many tokens/sec do you get on your Mac with 128GB RAM, and what is the exact spec?

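In case it helps with reporting numbers, ollama prints timing statistics when run with --verbose (a sketch; the output wording may differ slightly between versions):

$ ollama run deepseek-r1:iq1_s --verbose
# after each response, the "eval rate" line is the generation speed in tokens/s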
Author
Owner

@ZJL0111 commented on GitHub (Feb 8, 2025):

Have you solved it? I encountered the same error; my device is 8×A800.

Image
Author
Owner

@rick-github commented on GitHub (Feb 8, 2025):

https://github.com/ollama/ollama/issues/5975

Author
Owner

@afsara-ben commented on GitHub (Mar 17, 2025):

I am still getting Error: POST predict: Post "http://127.0.0.1:49185/completion": EOF on my 192GB Mac Ultra. How do I fix this?

Author
Owner

@rick-github commented on GitHub (Mar 17, 2025):

https://github.com/ollama/ollama/issues/5975#issuecomment-2306851184

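For what it's worth, a workaround that often comes up for Apple Silicon machines where the model only barely fits is to raise the GPU wired-memory limit so Metal can use more of the unified memory (an assumption that it applies here; the sysctl key differs on older macOS releases and the setting resets on reboot):

# example: allow the GPU to wire up to ~176 GB on a 192 GB machine
$ sudo sysctl iogpu.wired_limit_mb=180224
# on older macOS versions the key is debug.iogpu.wired_limit instead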
Reference: github-starred/ollama#67592