[GH-ISSUE #9029] Low GPU Utilization on Multi-GPU NVLink Setup with Ollama #5873

Open
opened 2026-04-12 17:12:42 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @HelloaZelda on GitHub (Feb 12, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9029

Originally assigned to: @mxyng on GitHub.

What is the issue?

I am running Ollama on a system with four A40 GPUs, configured in pairs using NVLink. When loading models, the memory is well distributed across all four GPUs. However, I am observing low GPU utilization during inference, which significantly impacts performance.
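
For reference, this is how I am measuring the per-GPU load during inference (a small polling helper on top of the nvidia-ml-py/pynvml bindings; nvidia-smi dmon shows the same picture):

import time
import pynvml

pynvml.nvmlInit()
handles = [pynvml.nvmlDeviceGetHandleByIndex(i)
           for i in range(pynvml.nvmlDeviceGetCount())]
for _ in range(10):  # sample once per second for ~10 s while a prompt runs
    util = [pynvml.nvmlDeviceGetUtilizationRates(h).gpu for h in handles]
    print(" | ".join(f"GPU{i}: {u:3d}%" for i, u in enumerate(util)))
    time.sleep(1)
pynvml.nvmlShutdown()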

Additionally, I have noticed that when running maryasov/qwen2.5-coder-cline:32b (a model configured with a very large context length), the model is successfully spread across all four GPUs. However, when running deepseek-r1:70b, the model loads onto a single GPU only, which limits performance further.
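
If I am reading the logs right, the difference comes down to memory: the cline build of qwen2.5-coder is loaded with n_ctx = 131072, so its KV cache alone is 32 GiB and the model cannot fit on one A40, while deepseek-r1:70b at the default context (8192 in total across 4 parallel slots) needs about 43.6 GiB, and the scheduler prefers a single GPU whenever the model fits. A rough sketch of the KV-cache arithmetic (my own back-of-envelope Python, not Ollama code):

def kv_cache_mib(n_layer, kv_size, n_head_kv, head_dim, bytes_per_elt=2):
    # K and V each hold n_layer * kv_size * (n_head_kv * head_dim) f16 values
    return 2 * n_layer * kv_size * n_head_kv * head_dim * bytes_per_elt / 2**20

print(kv_cache_mib(64, 131072, 8, 128))  # qwen2.5-coder 32B -> 32768.0 MiB, matches the log
print(kv_cache_mib(80, 8192, 8, 128))    # deepseek-r1 70B   ->  2560.0 MiB, matches the log

If single-GPU placement is the intended behavior here, is there a supported way to force a split (OLLAMA_SCHED_SPREAD=1, if I understand the FAQ correctly), and would splitting actually improve utilization in this case?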

[Image: GPU utilization while running maryasov/qwen2.5-coder-cline:32b]

[Image: GPU utilization while running deepseek-r1:70b]
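
The scheduler's single-GPU decision for deepseek-r1:70b shows up in the 11:08:30 log line below; paraphrasing its numbers as a plain fit check (my sketch, not the actual sched.go logic):

available = 47326625792             # bytes free on GPU-8eb0da23, from the log
required = 43.6 * 2**30             # "required=43.6 GiB" reported by the scheduler
print(round(available / 2**30, 1))  # 44.1 GiB, as logged
print(required <= available)        # True -> the model is loaded on a single GPU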

Relevant log output

2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: general.name     = Qwen2.5 Coder 32B Instruct
2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOS token        = 151645 '<|im_end|>'
2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOT token        = 151645 '<|im_end|>'
2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: LF token         = 148848 'ÄĬ'
2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: FIM PRE token    = 151659 '<|fim_prefix|>'
2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: FIM SUF token    = 151661 '<|fim_suffix|>'
2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: FIM MID token    = 151660 '<|fim_middle|>'
2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: FIM PAD token    = 151662 '<|fim_pad|>'
2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: FIM REP token    = 151663 '<|repo_name|>'
2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: FIM SEP token    = 151664 '<|file_sep|>'
2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token        = 151643 '<|endoftext|>'
2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token        = 151645 '<|im_end|>'
2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token        = 151662 '<|fim_pad|>'
2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token        = 151663 '<|repo_name|>'
2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token        = 151664 '<|file_sep|>'
2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: max token length = 256
2月 12 11:06:27 husteic-virtual-machine ollama[13754]: llm_load_tensors: offloading 64 repeating layers to GPU
2月 12 11:06:27 husteic-virtual-machine ollama[13754]: llm_load_tensors: offloading output layer to GPU
2月 12 11:06:27 husteic-virtual-machine ollama[13754]: llm_load_tensors: offloaded 65/65 layers to GPU
2月 12 11:06:27 husteic-virtual-machine ollama[13754]: llm_load_tensors:   CPU_Mapped model buffer size =   417.66 MiB
2月 12 11:06:27 husteic-virtual-machine ollama[13754]: llm_load_tensors:        CUDA0 model buffer size =  4844.72 MiB
2月 12 11:06:27 husteic-virtual-machine ollama[13754]: llm_load_tensors:        CUDA1 model buffer size =  4366.53 MiB
2月 12 11:06:27 husteic-virtual-machine ollama[13754]: llm_load_tensors:        CUDA2 model buffer size =  4366.53 MiB
2月 12 11:06:27 husteic-virtual-machine ollama[13754]: llm_load_tensors:        CUDA3 model buffer size =  4930.57 MiB
2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_seq_max     = 4
2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_ctx         = 131072
2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_ctx_per_seq = 32768
2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_batch       = 2048
2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_ubatch      = 512
2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: flash_attn    = 0
2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: freq_base     = 1000000.0
2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: freq_scale    = 1
2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_kv_cache_init: kv_size = 131072, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 64, can_shift = 1
2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_kv_cache_init:      CUDA0 KV buffer size =  8704.00 MiB
2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_kv_cache_init:      CUDA1 KV buffer size =  8192.00 MiB
2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_kv_cache_init:      CUDA2 KV buffer size =  8192.00 MiB
2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_kv_cache_init:      CUDA3 KV buffer size =  7680.00 MiB
2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: KV self size  = 32768.00 MiB, K (f16): 16384.00 MiB, V (f16): 16384.00 MiB
2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_new_context_with_model:  CUDA_Host  output buffer size =     2.40 MiB
2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
2月 12 11:06:31 husteic-virtual-machine ollama[13754]: llama_new_context_with_model:      CUDA0 compute buffer size = 11344.01 MiB
2月 12 11:06:31 husteic-virtual-machine ollama[13754]: llama_new_context_with_model:      CUDA1 compute buffer size = 11344.01 MiB
2月 12 11:06:31 husteic-virtual-machine ollama[13754]: llama_new_context_with_model:      CUDA2 compute buffer size = 11344.01 MiB
2月 12 11:06:31 husteic-virtual-machine ollama[13754]: llama_new_context_with_model:      CUDA3 compute buffer size = 11344.02 MiB
2月 12 11:06:31 husteic-virtual-machine ollama[13754]: llama_new_context_with_model:  CUDA_Host compute buffer size =  1034.02 MiB
2月 12 11:06:31 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: graph nodes  = 2246
2月 12 11:06:31 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: graph splits = 5
2月 12 11:06:31 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:06:31.065+08:00 level=INFO source=server.go:594 msg="llama runner started in 5.52 seconds"
2月 12 11:06:31 husteic-virtual-machine ollama[13754]: [GIN] 2025/02/12 - 11:06:31 | 200 |   7.36286695s |       127.0.0.1 | POST     "/api/generate"
2月 12 11:06:40 husteic-virtual-machine ollama[13754]: [GIN] 2025/02/12 - 11:06:40 | 200 |  765.561979ms |       127.0.0.1 | POST     "/api/chat"
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: loaded meta data with 34 key-value pairs and 771 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-ac3d1ba8aa77755dab3806d9024e9c385ea0d5b412d6bdf9157f8a4a7e9fc0d9 (version GGUF V3 (latest))
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   0:                       general.architecture str              = qwen2
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   1:                               general.type str              = model
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   2:                               general.name str              = Qwen2.5 Coder 32B Instruct
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   3:                           general.finetune str              = Instruct
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   4:                           general.basename str              = Qwen2.5-Coder
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   5:                         general.size_label str              = 32B
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   6:                            general.license str              = apache-2.0
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   7:                       general.license.link str              = https://huggingface.co/Qwen/Qwen2.5-C...
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   8:                   general.base_model.count u32              = 1
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   9:                  general.base_model.0.name str              = Qwen2.5 Coder 32B
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  10:          general.base_model.0.organization str              = Qwen
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/Qwen/Qwen2.5-C...
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  12:                               general.tags arr[str,6]       = ["code", "codeqwen", "chat", "qwen", ...
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  13:                          general.languages arr[str,1]       = ["en"]
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  14:                          qwen2.block_count u32              = 64
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  15:                       qwen2.context_length u32              = 32768
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  16:                     qwen2.embedding_length u32              = 5120
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  17:                  qwen2.feed_forward_length u32              = 27648
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  18:                 qwen2.attention.head_count u32              = 40
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  19:              qwen2.attention.head_count_kv u32              = 8
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  20:                       qwen2.rope.freq_base f32              = 1000000.000000
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  21:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  22:                          general.file_type u32              = 15
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = qwen2
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  28:                tokenizer.ggml.eos_token_id u32              = 151645
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  29:            tokenizer.ggml.padding_token_id u32              = 151643
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  30:                tokenizer.ggml.bos_token_id u32              = 151643
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  31:               tokenizer.ggml.add_bos_token bool             = false
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  32:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  33:               general.quantization_version u32              = 2
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - type  f32:  321 tensors
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - type q4_K:  385 tensors
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - type q6_K:   65 tensors
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_vocab: special tokens cache size = 22
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_vocab: token to piece cache size = 0.9310 MB
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: format           = GGUF V3 (latest)
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: arch             = qwen2
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: vocab type       = BPE
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_vocab          = 152064
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_merges         = 151387
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: vocab_only       = 1
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model type       = ?B
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model ftype      = all F32
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model params     = 32.76 B
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model size       = 18.48 GiB (4.85 BPW)
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: general.name     = Qwen2.5 Coder 32B Instruct
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOS token        = 151645 '<|im_end|>'
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOT token        = 151645 '<|im_end|>'
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: LF token         = 148848 'ÄĬ'
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: FIM PRE token    = 151659 '<|fim_prefix|>'
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: FIM SUF token    = 151661 '<|fim_suffix|>'
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: FIM MID token    = 151660 '<|fim_middle|>'
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: FIM PAD token    = 151662 '<|fim_pad|>'
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: FIM REP token    = 151663 '<|repo_name|>'
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: FIM SEP token    = 151664 '<|file_sep|>'
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token        = 151643 '<|endoftext|>'
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token        = 151645 '<|im_end|>'
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token        = 151662 '<|fim_pad|>'
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token        = 151663 '<|repo_name|>'
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token        = 151664 '<|file_sep|>'
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: max token length = 256
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_load: vocab only - skipping tensors
2月 12 11:07:05 husteic-virtual-machine ollama[13754]: [GIN] 2025/02/12 - 11:07:05 | 200 | 12.605399631s |       127.0.0.1 | POST     "/api/chat"
2月 12 11:08:00 husteic-virtual-machine ollama[13754]: [GIN] 2025/02/12 - 11:08:00 | 200 | 35.550752915s |       127.0.0.1 | POST     "/api/chat"
2月 12 11:08:18 husteic-virtual-machine ollama[13754]: [GIN] 2025/02/12 - 11:08:18 | 200 |     185.987µs |       127.0.0.1 | HEAD     "/"
2月 12 11:08:18 husteic-virtual-machine ollama[13754]: [GIN] 2025/02/12 - 11:08:18 | 200 |     878.022µs |       127.0.0.1 | POST     "/api/generate"
2月 12 11:08:22 husteic-virtual-machine ollama[13754]: [GIN] 2025/02/12 - 11:08:22 | 200 |      47.048µs |       127.0.0.1 | HEAD     "/"
2月 12 11:08:22 husteic-virtual-machine ollama[13754]: [GIN] 2025/02/12 - 11:08:22 | 200 |     705.726µs |       127.0.0.1 | GET      "/api/tags"
2月 12 11:08:30 husteic-virtual-machine ollama[13754]: [GIN] 2025/02/12 - 11:08:30 | 200 |      27.343µs |       127.0.0.1 | HEAD     "/"
2月 12 11:08:30 husteic-virtual-machine ollama[13754]: [GIN] 2025/02/12 - 11:08:30 | 200 |   23.472096ms |       127.0.0.1 | POST     "/api/show"
2月 12 11:08:30 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:08:30.985+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 gpu=GPU-8eb0da23-25f0-0802-4d9f-0e1bab4c58d2 parallel=4 available=47326625792 required="43.6 GiB"
2月 12 11:08:31 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:08:31.841+08:00 level=INFO source=server.go:104 msg="system memory" total="125.9 GiB" free="122.7 GiB" free_swap="2.0 GiB"
2月 12 11:08:31 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:08:31.842+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split="" memory.available="[44.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="43.6 GiB" memory.required.partial="43.6 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[43.6 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
2月 12 11:08:31 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:08:31.843+08:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server runner --model /usr/share/ollama/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 32 --parallel 4 --port 46587"
2月 12 11:08:31 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:08:31.843+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
2月 12 11:08:31 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:08:31.843+08:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
2月 12 11:08:31 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:08:31.843+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
2月 12 11:08:31 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:08:31.881+08:00 level=INFO source=runner.go:936 msg="starting go runner"
2月 12 11:08:31 husteic-virtual-machine ollama[13754]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
2月 12 11:08:31 husteic-virtual-machine ollama[13754]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
2月 12 11:08:31 husteic-virtual-machine ollama[13754]: ggml_cuda_init: found 1 CUDA devices:
2月 12 11:08:31 husteic-virtual-machine ollama[13754]:   Device 0: NVIDIA A40, compute capability 8.6, VMM: yes
2月 12 11:08:31 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:08:31.938+08:00 level=INFO source=runner.go:937 msg=system info="CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=32
2月 12 11:08:31 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:08:31.939+08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:46587"
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_load_model_from_file: using device CUDA0 (NVIDIA A40) - 45134 MiB free
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:08:32.094+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   0:                       general.architecture str              = llama
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   1:                               general.type str              = model
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Llama 70B
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Llama
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   4:                         general.size_label str              = 70B
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   5:                          llama.block_count u32              = 80
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   6:                       llama.context_length u32              = 131072
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   7:                     llama.embedding_length u32              = 8192
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   8:                  llama.feed_forward_length u32              = 28672
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   9:                 llama.attention.head_count u32              = 64
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  10:              llama.attention.head_count_kv u32              = 8
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  11:                       llama.rope.freq_base f32              = 500000.000000
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  12:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  13:                 llama.attention.key_length u32              = 128
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  14:               llama.attention.value_length u32              = 128
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  15:                          general.file_type u32              = 15
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  16:                           llama.vocab_size u32              = 128256
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  17:                 llama.rope.dimension_count u32              = 128
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  18:                       tokenizer.ggml.model str              = gpt2
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  19:                         tokenizer.ggml.pre str              = llama-bpe
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  20:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  21:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  22:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  23:                tokenizer.ggml.bos_token_id u32              = 128000
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  24:                tokenizer.ggml.eos_token_id u32              = 128001
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  25:            tokenizer.ggml.padding_token_id u32              = 128001
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  26:               tokenizer.ggml.add_bos_token bool             = true
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  27:               tokenizer.ggml.add_eos_token bool             = false
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  28:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  29:               general.quantization_version u32              = 2
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - type  f32:  162 tensors
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - type q4_K:  441 tensors
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - type q5_K:   40 tensors
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - type q6_K:   81 tensors
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_vocab: special tokens cache size = 256
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_vocab: token to piece cache size = 0.7999 MB
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: format           = GGUF V3 (latest)
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: arch             = llama
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: vocab type       = BPE
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_vocab          = 128256
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_merges         = 280147
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: vocab_only       = 0
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_ctx_train      = 131072
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_embd           = 8192
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_layer          = 80
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_head           = 64
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_head_kv        = 8
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_rot            = 128
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_swa            = 0
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_embd_head_k    = 128
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_embd_head_v    = 128
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_gqa            = 8
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_embd_k_gqa     = 1024
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_embd_v_gqa     = 1024
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: f_norm_eps       = 0.0e+00
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: f_logit_scale    = 0.0e+00
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_ff             = 28672
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_expert         = 0
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_expert_used    = 0
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: causal attn      = 1
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: pooling type     = 0
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: rope type        = 0
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: rope scaling     = linear
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: freq_base_train  = 500000.0
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: freq_scale_train = 1
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_ctx_orig_yarn  = 131072
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: rope_finetuned   = unknown
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: ssm_d_conv       = 0
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: ssm_d_inner      = 0
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: ssm_d_state      = 0
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: ssm_dt_rank      = 0
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: ssm_dt_b_c_rms   = 0
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model type       = 70B
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model ftype      = Q4_K - Medium
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model params     = 70.55 B
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model size       = 39.59 GiB (4.82 BPW)
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: general.name     = DeepSeek R1 Distill Llama 70B
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: BOS token        = 128000 '<|begin▁of▁sentence|>'
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOS token        = 128001 '<|end▁of▁sentence|>'
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOM token        = 128008 '<|eom_id|>'
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: PAD token        = 128001 '<|end▁of▁sentence|>'
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: LF token         = 128 'Ä'
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token        = 128001 '<|end▁of▁sentence|>'
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token        = 128008 '<|eom_id|>'
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token        = 128009 '<|eot_id|>'
2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: max token length = 256
2月 12 11:08:34 husteic-virtual-machine ollama[13754]: llm_load_tensors: offloading 80 repeating layers to GPU
2月 12 11:08:34 husteic-virtual-machine ollama[13754]: llm_load_tensors: offloading output layer to GPU
2月 12 11:08:34 husteic-virtual-machine ollama[13754]: llm_load_tensors: offloaded 81/81 layers to GPU
2月 12 11:08:34 husteic-virtual-machine ollama[13754]: llm_load_tensors:   CPU_Mapped model buffer size =   563.62 MiB
2月 12 11:08:34 husteic-virtual-machine ollama[13754]: llm_load_tensors:        CUDA0 model buffer size = 39979.48 MiB
2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_seq_max     = 4
2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_ctx         = 8192
2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_ctx_per_seq = 2048
2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_batch       = 2048
2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_ubatch      = 512
2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: flash_attn    = 0
2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: freq_base     = 500000.0
2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: freq_scale    = 1
2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_kv_cache_init:      CUDA0 KV buffer size =  2560.00 MiB
2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: KV self size  = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model:  CUDA_Host  output buffer size =     2.08 MiB
2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model:      CUDA0 compute buffer size =  1104.00 MiB
2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model:  CUDA_Host compute buffer size =    32.01 MiB
2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: graph nodes  = 2566
2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: graph splits = 2
2月 12 11:08:39 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:08:39.617+08:00 level=INFO source=server.go:594 msg="llama runner started in 7.77 seconds"
2月 12 11:08:39 husteic-virtual-machine ollama[13754]: [GIN] 2025/02/12 - 11:08:39 | 200 |  9.565055444s |       127.0.0.1 | POST     "/api/generate"
2月 12 11:10:37 husteic-virtual-machine ollama[13754]: [GIN] 2025/02/12 - 11:10:37 | 200 |         1m28s |       127.0.0.1 | POST     "/api/chat"
2月 12 11:17:41 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:17:41.644+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 gpu=GPU-8eb0da23-25f0-0802-4d9f-0e1bab4c58d2 parallel=4 available=47326625792 required="43.6 GiB"
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:17:42.506+08:00 level=INFO source=server.go:104 msg="system memory" total="125.9 GiB" free="122.6 GiB" free_swap="2.0 GiB"
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:17:42.507+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split="" memory.available="[44.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="43.6 GiB" memory.required.partial="43.6 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[43.6 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:17:42.507+08:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server runner --model /usr/share/ollama/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 32 --parallel 4 --port 40317"
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:17:42.507+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:17:42.507+08:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:17:42.508+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:17:42.545+08:00 level=INFO source=runner.go:936 msg="starting go runner"
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: ggml_cuda_init: found 1 CUDA devices:
2月 12 11:17:42 husteic-virtual-machine ollama[13754]:   Device 0: NVIDIA A40, compute capability 8.6, VMM: yes
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:17:42.609+08:00 level=INFO source=runner.go:937 msg=system info="CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=32
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:17:42.610+08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:40317"
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_load_model_from_file: using device CUDA0 (NVIDIA A40) - 45134 MiB free
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:17:42.759+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   0:                       general.architecture str              = llama
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   1:                               general.type str              = model
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Llama 70B
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Llama
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   4:                         general.size_label str              = 70B
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   5:                          llama.block_count u32              = 80
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   6:                       llama.context_length u32              = 131072
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   7:                     llama.embedding_length u32              = 8192
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   8:                  llama.feed_forward_length u32              = 28672
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   9:                 llama.attention.head_count u32              = 64
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  10:              llama.attention.head_count_kv u32              = 8
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  11:                       llama.rope.freq_base f32              = 500000.000000
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  12:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  13:                 llama.attention.key_length u32              = 128
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  14:               llama.attention.value_length u32              = 128
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  15:                          general.file_type u32              = 15
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  16:                           llama.vocab_size u32              = 128256
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  17:                 llama.rope.dimension_count u32              = 128
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  18:                       tokenizer.ggml.model str              = gpt2
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  19:                         tokenizer.ggml.pre str              = llama-bpe
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  20:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  21:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  22:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  23:                tokenizer.ggml.bos_token_id u32              = 128000
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  24:                tokenizer.ggml.eos_token_id u32              = 128001
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  25:            tokenizer.ggml.padding_token_id u32              = 128001
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  26:               tokenizer.ggml.add_bos_token bool             = true
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  27:               tokenizer.ggml.add_eos_token bool             = false
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  28:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  29:               general.quantization_version u32              = 2
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - type  f32:  162 tensors
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - type q4_K:  441 tensors
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - type q5_K:   40 tensors
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - type q6_K:   81 tensors
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_vocab: special tokens cache size = 256
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_vocab: token to piece cache size = 0.7999 MB
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: format           = GGUF V3 (latest)
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: arch             = llama
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: vocab type       = BPE
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_vocab          = 128256
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_merges         = 280147
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: vocab_only       = 0
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_ctx_train      = 131072
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_embd           = 8192
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_layer          = 80
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_head           = 64
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_head_kv        = 8
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_rot            = 128
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_swa            = 0
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_embd_head_k    = 128
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_embd_head_v    = 128
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_gqa            = 8
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_embd_k_gqa     = 1024
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_embd_v_gqa     = 1024
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: f_norm_eps       = 0.0e+00
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: f_logit_scale    = 0.0e+00
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_ff             = 28672
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_expert         = 0
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_expert_used    = 0
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: causal attn      = 1
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: pooling type     = 0
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: rope type        = 0
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: rope scaling     = linear
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: freq_base_train  = 500000.0
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: freq_scale_train = 1
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_ctx_orig_yarn  = 131072
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: rope_finetuned   = unknown
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: ssm_d_conv       = 0
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: ssm_d_inner      = 0
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: ssm_d_state      = 0
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: ssm_dt_rank      = 0
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: ssm_dt_b_c_rms   = 0
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model type       = 70B
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model ftype      = Q4_K - Medium
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model params     = 70.55 B
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model size       = 39.59 GiB (4.82 BPW)
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: general.name     = DeepSeek R1 Distill Llama 70B
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: BOS token        = 128000 '<|begin▁of▁sentence|>'
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOS token        = 128001 '<|end▁of▁sentence|>'
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOM token        = 128008 '<|eom_id|>'
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: PAD token        = 128001 '<|end▁of▁sentence|>'
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: LF token         = 128 'Ä'
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token        = 128001 '<|end▁of▁sentence|>'
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token        = 128008 '<|eom_id|>'
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token        = 128009 '<|eot_id|>'
2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: max token length = 256
2月 12 11:17:45 husteic-virtual-machine ollama[13754]: llm_load_tensors: offloading 80 repeating layers to GPU
2月 12 11:17:45 husteic-virtual-machine ollama[13754]: llm_load_tensors: offloading output layer to GPU
2月 12 11:17:45 husteic-virtual-machine ollama[13754]: llm_load_tensors: offloaded 81/81 layers to GPU
2月 12 11:17:45 husteic-virtual-machine ollama[13754]: llm_load_tensors:   CPU_Mapped model buffer size =   563.62 MiB
2月 12 11:17:45 husteic-virtual-machine ollama[13754]: llm_load_tensors:        CUDA0 model buffer size = 39979.48 MiB
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_seq_max     = 4
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_ctx         = 8192
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_ctx_per_seq = 2048
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_batch       = 2048
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_ubatch      = 512
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: flash_attn    = 0
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: freq_base     = 500000.0
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: freq_scale    = 1
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_kv_cache_init:      CUDA0 KV buffer size =  2560.00 MiB
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: KV self size  = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model:  CUDA_Host  output buffer size =     2.08 MiB
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model:      CUDA0 compute buffer size =  1104.00 MiB
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model:  CUDA_Host compute buffer size =    32.01 MiB
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: graph nodes  = 2566
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: graph splits = 2
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:17:51.285+08:00 level=INFO source=server.go:594 msg="llama runner started in 8.78 seconds"
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   0:                       general.architecture str              = llama
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   1:                               general.type str              = model
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Llama 70B
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Llama
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   4:                         general.size_label str              = 70B
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   5:                          llama.block_count u32              = 80
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   6:                       llama.context_length u32              = 131072
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   7:                     llama.embedding_length u32              = 8192
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   8:                  llama.feed_forward_length u32              = 28672
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv   9:                 llama.attention.head_count u32              = 64
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  10:              llama.attention.head_count_kv u32              = 8
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  11:                       llama.rope.freq_base f32              = 500000.000000
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  12:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  13:                 llama.attention.key_length u32              = 128
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  14:               llama.attention.value_length u32              = 128
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  15:                          general.file_type u32              = 15
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  16:                           llama.vocab_size u32              = 128256
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  17:                 llama.rope.dimension_count u32              = 128
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  18:                       tokenizer.ggml.model str              = gpt2
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  19:                         tokenizer.ggml.pre str              = llama-bpe
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  20:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  21:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  22:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  23:                tokenizer.ggml.bos_token_id u32              = 128000
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  24:                tokenizer.ggml.eos_token_id u32              = 128001
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  25:            tokenizer.ggml.padding_token_id u32              = 128001
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  26:               tokenizer.ggml.add_bos_token bool             = true
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  27:               tokenizer.ggml.add_eos_token bool             = false
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  28:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv  29:               general.quantization_version u32              = 2
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - type  f32:  162 tensors
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - type q4_K:  441 tensors
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - type q5_K:   40 tensors
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - type q6_K:   81 tensors
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_vocab: special tokens cache size = 256
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_vocab: token to piece cache size = 0.7999 MB
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: format           = GGUF V3 (latest)
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: arch             = llama
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: vocab type       = BPE
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_vocab          = 128256
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_merges         = 280147
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: vocab_only       = 1
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model type       = ?B
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model ftype      = all F32
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model params     = 70.55 B
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model size       = 39.59 GiB (4.82 BPW)
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: general.name     = DeepSeek R1 Distill Llama 70B
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: BOS token        = 128000 '<|begin▁of▁sentence|>'
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOS token        = 128001 '<|end▁of▁sentence|>'
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOM token        = 128008 '<|eom_id|>'
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: PAD token        = 128001 '<|end▁of▁sentence|>'
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: LF token         = 128 'Ä'
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token        = 128001 '<|end▁of▁sentence|>'
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token        = 128008 '<|eom_id|>'
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token        = 128009 '<|eot_id|>'
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: max token length = 256
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_load: vocab only - skipping tensors
2月 12 11:21:41 husteic-virtual-machine ollama[13754]: [GIN] 2025/02/12 - 11:21:41 | 200 |          4m0s |       127.0.0.1 | POST     "/api/chat"
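
The scheduler message earlier in this log ("new model will fit in available VRAM in single GPU, loading", required="43.6 GiB" vs. 44.1 GiB available) explains the single-GPU behavior: because the Q4_K_M 70B weights fit on one A40, Ollama deliberately loads deepseek-r1:70b onto a single device, and the 4m0s /api/chat response above is consistent with that. If splitting across GPUs is preferred anyway, one possible workaround (assuming the OLLAMA_SCHED_SPREAD environment variable from recent Ollama releases applies to 0.5.7) is to force the scheduler to spread models across all GPUs, e.g. for the systemd service shown in these logs:

sudo systemctl edit ollama
# add under [Service] (assumed drop-in; adjust to your install):
#   Environment="OLLAMA_SCHED_SPREAD=1"
sudo systemctl restart ollama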

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.5.7
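
Since the A40s are paired over NVLink, it may also be worth confirming the links are actually active before comparing multi-GPU numbers; standard nvidia-smi invocations for this (exact output varies by driver) are:

nvidia-smi topo -m     # GPU-to-GPU matrix; NVLink-paired GPUs show NV# entries, PCIe-only paths show PIX/PXB/SYS
nvidia-smi nvlink -s   # per-link NVLink status for each GPU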

Originally created by @HelloaZelda on GitHub (Feb 12, 2025). Original GitHub issue: https://github.com/ollama/ollama/issues/9029 Originally assigned to: @mxyng on GitHub. ### What is the issue? I am running Ollama on a system with four A40 GPUs, configured in pairs using NVLink. When loading models, the memory is well distributed across all four GPUs. However, I am observing low GPU utilization during inference, which significantly impacts performance. Additionally, I have noticed that when running **maryasov/qwen2.5-coder-cline:32b**(This model has a large context length), the model successfully utilizes all four GPUs. However, when running **deepseek-r1:70b**, the model only loads onto **a single GPU**, which further limits performance. <img width="1382" alt="Image" src="https://github.com/user-attachments/assets/bc2e86a7-87ec-4938-9874-7f21dcd584ad" />**** This one is about when I run maryasov/qwen2.5-coder-cline:32b <img width="1365" alt="Image" src="https://github.com/user-attachments/assets/dbc958b2-d452-48b4-8888-35c46ab3ed35" /> This one is about when I run deepseek-r1:70b ### Relevant log output ```shell 2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: general.name = Qwen2.5 Coder 32B Instruct 2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: BOS token = 151643 '<|endoftext|>' 2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOS token = 151645 '<|im_end|>' 2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOT token = 151645 '<|im_end|>' 2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: PAD token = 151643 '<|endoftext|>' 2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: LF token = 148848 'ÄĬ' 2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: FIM PRE token = 151659 '<|fim_prefix|>' 2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: FIM SUF token = 151661 '<|fim_suffix|>' 2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: FIM MID token = 151660 '<|fim_middle|>' 2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: FIM PAD token = 151662 '<|fim_pad|>' 2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: FIM REP token = 151663 '<|repo_name|>' 2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: FIM SEP token = 151664 '<|file_sep|>' 2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token = 151643 '<|endoftext|>' 2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token = 151645 '<|im_end|>' 2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token = 151662 '<|fim_pad|>' 2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token = 151663 '<|repo_name|>' 2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token = 151664 '<|file_sep|>' 2月 12 11:06:26 husteic-virtual-machine ollama[13754]: llm_load_print_meta: max token length = 256 2月 12 11:06:27 husteic-virtual-machine ollama[13754]: llm_load_tensors: offloading 64 repeating layers to GPU 2月 12 11:06:27 husteic-virtual-machine ollama[13754]: llm_load_tensors: offloading output layer to GPU 2月 12 11:06:27 husteic-virtual-machine ollama[13754]: llm_load_tensors: offloaded 65/65 layers to GPU 2月 12 11:06:27 husteic-virtual-machine ollama[13754]: llm_load_tensors: CPU_Mapped model buffer size = 417.66 MiB 2月 12 11:06:27 
husteic-virtual-machine ollama[13754]: llm_load_tensors: CUDA0 model buffer size = 4844.72 MiB 2月 12 11:06:27 husteic-virtual-machine ollama[13754]: llm_load_tensors: CUDA1 model buffer size = 4366.53 MiB 2月 12 11:06:27 husteic-virtual-machine ollama[13754]: llm_load_tensors: CUDA2 model buffer size = 4366.53 MiB 2月 12 11:06:27 husteic-virtual-machine ollama[13754]: llm_load_tensors: CUDA3 model buffer size = 4930.57 MiB 2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_seq_max = 4 2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_ctx = 131072 2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_ctx_per_seq = 32768 2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_batch = 2048 2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_ubatch = 512 2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: flash_attn = 0 2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: freq_base = 1000000.0 2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: freq_scale = 1 2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_kv_cache_init: kv_size = 131072, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 64, can_shift = 1 2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_kv_cache_init: CUDA0 KV buffer size = 8704.00 MiB 2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_kv_cache_init: CUDA1 KV buffer size = 8192.00 MiB 2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_kv_cache_init: CUDA2 KV buffer size = 8192.00 MiB 2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_kv_cache_init: CUDA3 KV buffer size = 7680.00 MiB 2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: KV self size = 32768.00 MiB, K (f16): 16384.00 MiB, V (f16): 16384.00 MiB 2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: CUDA_Host output buffer size = 2.40 MiB 2月 12 11:06:30 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: pipeline parallelism enabled (n_copies=4) 2月 12 11:06:31 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: CUDA0 compute buffer size = 11344.01 MiB 2月 12 11:06:31 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: CUDA1 compute buffer size = 11344.01 MiB 2月 12 11:06:31 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: CUDA2 compute buffer size = 11344.01 MiB 2月 12 11:06:31 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: CUDA3 compute buffer size = 11344.02 MiB 2月 12 11:06:31 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: CUDA_Host compute buffer size = 1034.02 MiB 2月 12 11:06:31 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: graph nodes = 2246 2月 12 11:06:31 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: graph splits = 5 2月 12 11:06:31 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:06:31.065+08:00 level=INFO source=server.go:594 msg="llama runner started in 5.52 seconds" 2月 12 11:06:31 husteic-virtual-machine ollama[13754]: [GIN] 2025/02/12 - 11:06:31 | 200 | 7.36286695s | 127.0.0.1 | POST "/api/generate" 2月 12 11:06:40 husteic-virtual-machine ollama[13754]: [GIN] 2025/02/12 - 11:06:40 | 200 | 765.561979ms | 127.0.0.1 | POST "/api/chat" 2月 12 11:06:53 husteic-virtual-machine 
ollama[13754]: llama_model_loader: loaded meta data with 34 key-value pairs and 771 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-ac3d1ba8aa77755dab3806d9024e9c385ea0d5b412d6bdf9157f8a4a7e9fc0d9 (version GGUF V3 (latest)) 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 0: general.architecture str = qwen2 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 1: general.type str = model 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 2: general.name str = Qwen2.5 Coder 32B Instruct 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 3: general.finetune str = Instruct 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 4: general.basename str = Qwen2.5-Coder 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 5: general.size_label str = 32B 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 6: general.license str = apache-2.0 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 7: general.license.link str = https://huggingface.co/Qwen/Qwen2.5-C... 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 8: general.base_model.count u32 = 1 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 Coder 32B 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-C... 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 12: general.tags arr[str,6] = ["code", "codeqwen", "chat", "qwen", ... 
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"] 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 14: qwen2.block_count u32 = 64 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 15: qwen2.context_length u32 = 32768 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 16: qwen2.embedding_length u32 = 5120 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 27648 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 40 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 8 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 1000000.000000 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 22: general.file_type u32 = 15 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 24: tokenizer.ggml.pre str = qwen2 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ... 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",... 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 151645 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 151643 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 30: tokenizer.ggml.bos_token_id u32 = 151643 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = false 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>... 
2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 33: general.quantization_version u32 = 2 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - type f32: 321 tensors 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - type q4_K: 385 tensors 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_loader: - type q6_K: 65 tensors 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_vocab: special tokens cache size = 22 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_vocab: token to piece cache size = 0.9310 MB 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: format = GGUF V3 (latest) 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: arch = qwen2 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: vocab type = BPE 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_vocab = 152064 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_merges = 151387 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: vocab_only = 1 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model type = ?B 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model ftype = all F32 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model params = 32.76 B 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model size = 18.48 GiB (4.85 BPW) 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: general.name = Qwen2.5 Coder 32B Instruct 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: BOS token = 151643 '<|endoftext|>' 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOS token = 151645 '<|im_end|>' 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOT token = 151645 '<|im_end|>' 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: PAD token = 151643 '<|endoftext|>' 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: LF token = 148848 'ÄĬ' 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: FIM PRE token = 151659 '<|fim_prefix|>' 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: FIM SUF token = 151661 '<|fim_suffix|>' 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: FIM MID token = 151660 '<|fim_middle|>' 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: FIM PAD token = 151662 '<|fim_pad|>' 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: FIM REP token = 151663 '<|repo_name|>' 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: FIM SEP token = 151664 '<|file_sep|>' 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token = 151643 '<|endoftext|>' 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token = 151645 '<|im_end|>' 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token = 151662 '<|fim_pad|>' 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token = 151663 '<|repo_name|>' 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token = 151664 '<|file_sep|>' 2月 12 11:06:53 husteic-virtual-machine ollama[13754]: llm_load_print_meta: max token length = 256 2月 
12 11:06:53 husteic-virtual-machine ollama[13754]: llama_model_load: vocab only - skipping tensors 2月 12 11:07:05 husteic-virtual-machine ollama[13754]: [GIN] 2025/02/12 - 11:07:05 | 200 | 12.605399631s | 127.0.0.1 | POST "/api/chat" 2月 12 11:08:00 husteic-virtual-machine ollama[13754]: [GIN] 2025/02/12 - 11:08:00 | 200 | 35.550752915s | 127.0.0.1 | POST "/api/chat" 2月 12 11:08:18 husteic-virtual-machine ollama[13754]: [GIN] 2025/02/12 - 11:08:18 | 200 | 185.987µs | 127.0.0.1 | HEAD "/" 2月 12 11:08:18 husteic-virtual-machine ollama[13754]: [GIN] 2025/02/12 - 11:08:18 | 200 | 878.022µs | 127.0.0.1 | POST "/api/generate" 2月 12 11:08:22 husteic-virtual-machine ollama[13754]: [GIN] 2025/02/12 - 11:08:22 | 200 | 47.048µs | 127.0.0.1 | HEAD "/" 2月 12 11:08:22 husteic-virtual-machine ollama[13754]: [GIN] 2025/02/12 - 11:08:22 | 200 | 705.726µs | 127.0.0.1 | GET "/api/tags" 2月 12 11:08:30 husteic-virtual-machine ollama[13754]: [GIN] 2025/02/12 - 11:08:30 | 200 | 27.343µs | 127.0.0.1 | HEAD "/" 2月 12 11:08:30 husteic-virtual-machine ollama[13754]: [GIN] 2025/02/12 - 11:08:30 | 200 | 23.472096ms | 127.0.0.1 | POST "/api/show" 2月 12 11:08:30 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:08:30.985+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 gpu=GPU-8eb0da23-25f0-0802-4d9f-0e1bab4c58d2 parallel=4 available=47326625792 required="43.6 GiB" 2月 12 11:08:31 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:08:31.841+08:00 level=INFO source=server.go:104 msg="system memory" total="125.9 GiB" free="122.7 GiB" free_swap="2.0 GiB" 2月 12 11:08:31 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:08:31.842+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split="" memory.available="[44.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="43.6 GiB" memory.required.partial="43.6 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[43.6 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB" 2月 12 11:08:31 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:08:31.843+08:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server runner --model /usr/share/ollama/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 32 --parallel 4 --port 46587" 2月 12 11:08:31 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:08:31.843+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1 2月 12 11:08:31 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:08:31.843+08:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding" 2月 12 11:08:31 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:08:31.843+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error" 2月 12 11:08:31 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:08:31.881+08:00 level=INFO source=runner.go:936 msg="starting go runner" 2月 12 11:08:31 husteic-virtual-machine ollama[13754]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no 2月 12 11:08:31 husteic-virtual-machine ollama[13754]: 
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no 2月 12 11:08:31 husteic-virtual-machine ollama[13754]: ggml_cuda_init: found 1 CUDA devices: 2月 12 11:08:31 husteic-virtual-machine ollama[13754]: Device 0: NVIDIA A40, compute capability 8.6, VMM: yes 2月 12 11:08:31 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:08:31.938+08:00 level=INFO source=runner.go:937 msg=system info="CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=32 2月 12 11:08:31 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:08:31.939+08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:46587" 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_load_model_from_file: using device CUDA0 (NVIDIA A40) - 45134 MiB free 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:08:32.094+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model" 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest)) 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 0: general.architecture str = llama 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 1: general.type str = model 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 4: general.size_label str = 70B 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 5: llama.block_count u32 = 80 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 6: llama.context_length u32 = 131072 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 7: llama.embedding_length u32 = 8192 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 9: llama.attention.head_count u32 = 64 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 13: llama.attention.key_length u32 = 128 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 14: llama.attention.value_length u32 = 128 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 15: general.file_type u32 = 15 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 16: llama.vocab_size u32 = 128256 2月 12 11:08:32 husteic-virtual-machine 
ollama[13754]: llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ... 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "... 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de... 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 29: general.quantization_version u32 = 2 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - type f32: 162 tensors 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - type q4_K: 441 tensors 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - type q5_K: 40 tensors 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llama_model_loader: - type q6_K: 81 tensors 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_vocab: special tokens cache size = 256 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_vocab: token to piece cache size = 0.7999 MB 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: format = GGUF V3 (latest) 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: arch = llama 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: vocab type = BPE 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_vocab = 128256 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_merges = 280147 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: vocab_only = 0 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_ctx_train = 131072 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_embd = 8192 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_layer = 80 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_head = 64 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_head_kv = 8 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_rot = 128 2月 12 
11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_swa = 0 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_embd_head_k = 128 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_embd_head_v = 128 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_gqa = 8 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_embd_k_gqa = 1024 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_embd_v_gqa = 1024 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: f_norm_eps = 0.0e+00 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: f_norm_rms_eps = 1.0e-05 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: f_clamp_kqv = 0.0e+00 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: f_logit_scale = 0.0e+00 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_ff = 28672 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_expert = 0 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_expert_used = 0 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: causal attn = 1 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: pooling type = 0 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: rope type = 0 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: rope scaling = linear 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: freq_base_train = 500000.0 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: freq_scale_train = 1 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_ctx_orig_yarn = 131072 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: rope_finetuned = unknown 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: ssm_d_conv = 0 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: ssm_d_inner = 0 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: ssm_d_state = 0 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: ssm_dt_rank = 0 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: ssm_dt_b_c_rms = 0 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model type = 70B 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model ftype = Q4_K - Medium 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model params = 70.55 B 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model size = 39.59 GiB (4.82 BPW) 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>' 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>' 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOT token = 128009 '<|eot_id|>' 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOM token = 128008 '<|eom_id|>' 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: 
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>' 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: LF token = 128 'Ä' 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>' 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token = 128008 '<|eom_id|>' 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token = 128009 '<|eot_id|>' 2月 12 11:08:32 husteic-virtual-machine ollama[13754]: llm_load_print_meta: max token length = 256 2月 12 11:08:34 husteic-virtual-machine ollama[13754]: llm_load_tensors: offloading 80 repeating layers to GPU 2月 12 11:08:34 husteic-virtual-machine ollama[13754]: llm_load_tensors: offloading output layer to GPU 2月 12 11:08:34 husteic-virtual-machine ollama[13754]: llm_load_tensors: offloaded 81/81 layers to GPU 2月 12 11:08:34 husteic-virtual-machine ollama[13754]: llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB 2月 12 11:08:34 husteic-virtual-machine ollama[13754]: llm_load_tensors: CUDA0 model buffer size = 39979.48 MiB 2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_seq_max = 4 2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_ctx = 8192 2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_ctx_per_seq = 2048 2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_batch = 2048 2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_ubatch = 512 2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: flash_attn = 0 2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: freq_base = 500000.0 2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: freq_scale = 1 2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized 2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1 2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_kv_cache_init: CUDA0 KV buffer size = 2560.00 MiB 2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB 2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB 2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: CUDA0 compute buffer size = 1104.00 MiB 2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: CUDA_Host compute buffer size = 32.01 MiB 2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: graph nodes = 2566 2月 12 11:08:39 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: graph splits = 2 2月 12 11:08:39 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:08:39.617+08:00 level=INFO source=server.go:594 msg="llama runner started in 7.77 seconds" 2月 12 11:08:39 husteic-virtual-machine ollama[13754]: [GIN] 2025/02/12 - 11:08:39 | 200 | 9.565055444s | 127.0.0.1 | POST "/api/generate" 2月 12 11:10:37 husteic-virtual-machine ollama[13754]: [GIN] 2025/02/12 - 11:10:37 | 200 | 1m28s | 127.0.0.1 | POST "/api/chat" 2月 12 
11:17:41 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:17:41.644+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 gpu=GPU-8eb0da23-25f0-0802-4d9f-0e1bab4c58d2 parallel=4 available=47326625792 required="43.6 GiB" 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:17:42.506+08:00 level=INFO source=server.go:104 msg="system memory" total="125.9 GiB" free="122.6 GiB" free_swap="2.0 GiB" 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:17:42.507+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split="" memory.available="[44.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="43.6 GiB" memory.required.partial="43.6 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[43.6 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB" 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:17:42.507+08:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server runner --model /usr/share/ollama/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 32 --parallel 4 --port 40317" 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:17:42.507+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:17:42.507+08:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding" 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:17:42.508+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error" 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:17:42.545+08:00 level=INFO source=runner.go:936 msg="starting go runner" 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: ggml_cuda_init: found 1 CUDA devices: 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: Device 0: NVIDIA A40, compute capability 8.6, VMM: yes 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:17:42.609+08:00 level=INFO source=runner.go:937 msg=system info="CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=32 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:17:42.610+08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:40317" 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_load_model_from_file: using device CUDA0 (NVIDIA A40) - 45134 MiB free 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:17:42.759+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model" 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: loaded 
meta data with 30 key-value pairs and 724 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest)) 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 0: general.architecture str = llama 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 1: general.type str = model 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 4: general.size_label str = 70B 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 5: llama.block_count u32 = 80 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 6: llama.context_length u32 = 131072 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 7: llama.embedding_length u32 = 8192 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 9: llama.attention.head_count u32 = 64 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 13: llama.attention.key_length u32 = 128 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 14: llama.attention.value_length u32 = 128 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 15: general.file_type u32 = 15 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 16: llama.vocab_size u32 = 128256 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ... 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "... 
2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de... 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 29: general.quantization_version u32 = 2 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - type f32: 162 tensors 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - type q4_K: 441 tensors 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - type q5_K: 40 tensors 2月 12 11:17:42 husteic-virtual-machine ollama[13754]: llama_model_loader: - type q6_K: 81 tensors 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_vocab: special tokens cache size = 256 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_vocab: token to piece cache size = 0.7999 MB 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: format = GGUF V3 (latest) 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: arch = llama 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: vocab type = BPE 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_vocab = 128256 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_merges = 280147 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: vocab_only = 0 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_ctx_train = 131072 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_embd = 8192 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_layer = 80 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_head = 64 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_head_kv = 8 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_rot = 128 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_swa = 0 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_embd_head_k = 128 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_embd_head_v = 128 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_gqa = 8 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_embd_k_gqa = 1024 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_embd_v_gqa = 1024 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: f_norm_eps = 0.0e+00 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: f_norm_rms_eps = 1.0e-05 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: 
llm_load_print_meta: f_clamp_kqv = 0.0e+00 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: f_logit_scale = 0.0e+00 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_ff = 28672 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_expert = 0 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_expert_used = 0 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: causal attn = 1 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: pooling type = 0 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: rope type = 0 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: rope scaling = linear 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: freq_base_train = 500000.0 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: freq_scale_train = 1 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_ctx_orig_yarn = 131072 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: rope_finetuned = unknown 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: ssm_d_conv = 0 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: ssm_d_inner = 0 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: ssm_d_state = 0 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: ssm_dt_rank = 0 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: ssm_dt_b_c_rms = 0 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model type = 70B 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model ftype = Q4_K - Medium 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model params = 70.55 B 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model size = 39.59 GiB (4.82 BPW) 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>' 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>' 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOT token = 128009 '<|eot_id|>' 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOM token = 128008 '<|eom_id|>' 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>' 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: LF token = 128 'Ä' 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>' 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token = 128008 '<|eom_id|>' 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token = 128009 '<|eot_id|>' 2月 12 11:17:43 husteic-virtual-machine ollama[13754]: llm_load_print_meta: max token length = 256 2月 12 11:17:45 husteic-virtual-machine ollama[13754]: llm_load_tensors: offloading 80 repeating layers to GPU 2月 12 11:17:45 husteic-virtual-machine ollama[13754]: llm_load_tensors: offloading output layer to 
GPU
2月 12 11:17:45 husteic-virtual-machine ollama[13754]: llm_load_tensors: offloaded 81/81 layers to GPU
2月 12 11:17:45 husteic-virtual-machine ollama[13754]: llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB
2月 12 11:17:45 husteic-virtual-machine ollama[13754]: llm_load_tensors: CUDA0 model buffer size = 39979.48 MiB
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_seq_max = 4
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_ctx = 8192
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_ctx_per_seq = 2048
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_batch = 2048
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_ubatch = 512
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: flash_attn = 0
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: freq_base = 500000.0
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: freq_scale = 1
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_kv_cache_init: CUDA0 KV buffer size = 2560.00 MiB
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: CUDA0 compute buffer size = 1104.00 MiB
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: CUDA_Host compute buffer size = 32.01 MiB
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: graph nodes = 2566
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_new_context_with_model: graph splits = 2
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: time=2025-02-12T11:17:51.285+08:00 level=INFO source=server.go:594 msg="llama runner started in 8.78 seconds"
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 0: general.architecture str = llama
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 1: general.type str = model
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 4: general.size_label str = 70B
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 5: llama.block_count u32 = 80
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 6: llama.context_length u32 = 131072
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 15: general.file_type u32 = 15
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - kv 29: general.quantization_version u32 = 2
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - type f32: 162 tensors
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - type q4_K: 441 tensors
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - type q5_K: 40 tensors
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_loader: - type q6_K: 81 tensors
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_vocab: special tokens cache size = 256
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_vocab: token to piece cache size = 0.7999 MB
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: format = GGUF V3 (latest)
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: arch = llama
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: vocab type = BPE
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_vocab = 128256
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: n_merges = 280147
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: vocab_only = 1
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model type = ?B
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model ftype = all F32
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model params = 70.55 B
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: LF token = 128 'Ä'
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llm_load_print_meta: max token length = 256
2月 12 11:17:51 husteic-virtual-machine ollama[13754]: llama_model_load: vocab only - skipping tensors
2月 12 11:21:41 husteic-virtual-machine ollama[13754]: [GIN] 2025/02/12 - 11:21:41 | 200 | 4m0s | 127.0.0.1 | POST "/api/chat"
```

### OS

Linux

### GPU

Nvidia

### CPU

Intel

### Ollama version

0.5.7
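A side note on the context numbers in the log: the default `n_ctx = 8192` is divided across `n_seq_max = 4` parallel request slots, so each request gets only `n_ctx_per_seq = 2048` tokens, hence the `n_ctx_per_seq (2048) < n_ctx_train (131072)` warning. The logged KV-cache size is also consistent with the model metadata. Below is a minimal sketch that checks that arithmetic and raises the per-request context through the documented `num_ctx` option; the endpoint and model name are assumed from this report, and a larger `num_ctx` grows the KV cache (and VRAM use) proportionally:

```python
import requests

# KV-cache arithmetic from the log above: 80 layers x 8192 kv slots
# x (8 KV heads x 128 head dim) x 2 bytes (f16), once for K, once for V.
kv_bytes = 80 * 8192 * (8 * 128) * 2
print(kv_bytes / 2**20, "MiB per K or V")  # -> 1280.0, matching the log
print(2 * kv_bytes / 2**20, "MiB total")   # -> 2560.0 ("KV self size")

# Request a larger context window via the documented num_ctx option.
resp = requests.post(
    "http://127.0.0.1:11434/api/chat",  # default Ollama endpoint (assumed)
    json={
        "model": "deepseek-r1:70b",
        "messages": [{"role": "user", "content": "hello"}],
        "options": {"num_ctx": 32768},  # per-request context length
        "stream": False,
    },
    timeout=600,
)
print(resp.json()["message"]["content"])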
GiteaMirror added the bug label 2026-04-12 17:12:42 -05:00
Author
Owner

@rick-github commented on GitHub (Feb 12, 2025):

https://github.com/ollama/ollama/issues/7648#issuecomment-2473561990
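
For readers without access to the linked thread: one setting often suggested for models landing on a single GPU (named here as an assumption, not a quote of the linked comment) is `OLLAMA_SCHED_SPREAD=1`, which asks the scheduler to spread a model across all visible GPUs even when it would fit on one. For a systemd-managed install like the one in the journal output above, a drop-in might look like:

```ini
# Hypothetical drop-in: /etc/systemd/system/ollama.service.d/override.conf
# Apply with: systemctl daemon-reload && systemctl restart ollama
[Service]
Environment="OLLAMA_SCHED_SPREAD=1"
```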
