[GH-ISSUE #9708] Why can't my ollama run on GPU? #6341

Closed
opened 2026-04-12 17:51:18 -05:00 by GiteaMirror · 2 comments

Originally created by @liaoyu-qing on GitHub (Mar 13, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9708

What is the issue?

The log shows that the GPU can be found, but inference is actually running on the CPU.
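
A quick way to double-check which device a loaded model is actually using, independent of the server log below, is to query `ollama ps` and watch `nvidia-smi` while a prompt is running. The commands below are a sketch, not part of the original report, and the model tag is only an example:

```shell
# In one terminal, keep the model loaded and busy with a prompt.
ollama run deepseek-r1:14b "hello"

# In another terminal, check where the scheduler placed the model;
# the PROCESSOR column reports e.g. "100% GPU" or "100% CPU".
ollama ps

# Watch VRAM usage; per the log below, an offloaded 14B Q4_K_M model
# should occupy roughly 9-11 GiB of GPU memory while loaded.
watch -n 1 nvidia-smi
```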

root@sdt-B550MXC-PRO:/home/sdt# ollama serve
2025/03/13 10:32:18 routes.go:1225: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-03-13T10:32:18.467+08:00 level=INFO source=images.go:432 msg="total blobs: 5"
time=2025-03-13T10:32:18.467+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-03-13T10:32:18.467+08:00 level=INFO source=routes.go:1292 msg="Listening on 127.0.0.1:11434 (version 0.6.0)"
time=2025-03-13T10:32:18.467+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-03-13T10:32:18.729+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-a2d60931-8dc3-3728-0b58-ab97a460bc76 library=cuda variant=v12 compute=8.9 driver=12.8 name="NVIDIA GeForce RTX 4070" total="11.6 GiB" available="11.0 GiB"
[GIN] 2025/03/13 - 10:32:21 | 200 | 46.831µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/03/13 - 10:32:21 | 200 | 275.688µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/03/13 - 10:32:34 | 200 | 22.574µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/03/13 - 10:32:34 | 200 | 17.581856ms | 127.0.0.1 | POST "/api/show"
time=2025-03-13T10:32:34.950+08:00 level=WARN source=ggml.go:149 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-03-13T10:32:34.950+08:00 level=WARN source=ggml.go:149 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-03-13T10:32:34.950+08:00 level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e gpu=GPU-a2d60931-8dc3-3728-0b58-ab97a460bc76 parallel=4 available=11816665088 required="10.8 GiB"
time=2025-03-13T10:32:35.083+08:00 level=INFO source=server.go:105 msg="system memory" total="31.3 GiB" free="13.2 GiB" free_swap="1.9 GiB"
time=2025-03-13T10:32:35.083+08:00 level=WARN source=ggml.go:149 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-03-13T10:32:35.083+08:00 level=WARN source=ggml.go:149 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-03-13T10:32:35.083+08:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[11.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="10.8 GiB" memory.required.partial="10.8 GiB" memory.required.kv="1.5 GiB" memory.required.allocations="[10.8 GiB]" memory.weights.total="8.9 GiB" memory.weights.repeating="8.3 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="676.0 MiB" memory.graph.partial="916.1 MiB"
llama_model_loader: loaded meta data with 26 key-value pairs and 579 tensors from /root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Qwen 14B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv 4: general.size_label str = 14B
llama_model_loader: - kv 5: qwen2.block_count u32 = 48
llama_model_loader: - kv 6: qwen2.context_length u32 = 131072
llama_model_loader: - kv 7: qwen2.embedding_length u32 = 5120
llama_model_loader: - kv 8: qwen2.feed_forward_length u32 = 13824
llama_model_loader: - kv 9: qwen2.attention.head_count u32 = 40
llama_model_loader: - kv 10: qwen2.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 12: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: general.file_type u32 = 15
llama_model_loader: - kv 14: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 15: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 151646
llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151643
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 24: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 25: general.quantization_version u32 = 2
llama_model_loader: - type f32: 241 tensors
llama_model_loader: - type q4_K: 289 tensors
llama_model_loader: - type q6_K: 49 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 8.37 GiB (4.87 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch = qwen2
print_info: vocab_only = 1
print_info: model type = ?B
print_info: model params = 14.77 B
print_info: general.name = DeepSeek R1 Distill Qwen 14B
print_info: vocab type = BPE
print_info: n_vocab = 152064
print_info: n_merges = 151387
print_info: BOS token = 151646 '<|begin▁of▁sentence|>'
print_info: EOS token = 151643 '<|end▁of▁sentence|>'
print_info: EOT token = 151643 '<|end▁of▁sentence|>'
print_info: PAD token = 151643 '<|end▁of▁sentence|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|end▁of▁sentence|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-03-13T10:32:35.249+08:00 level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e --ctx-size 8192 --batch-size 512 --n-gpu-layers 49 --threads 8 --parallel 4 --port 40895"
time=2025-03-13T10:32:35.249+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-13T10:32:35.249+08:00 level=INFO source=server.go:585 msg="waiting for llama runner to start responding"
time=2025-03-13T10:32:35.249+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error"
time=2025-03-13T10:32:35.262+08:00 level=INFO source=runner.go:931 msg="starting go runner"
time=2025-03-13T10:32:35.262+08:00 level=INFO source=ggml.go:109 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-03-13T10:32:35.284+08:00 level=INFO source=runner.go:991 msg="Server listening on 127.0.0.1:40895"
llama_model_loader: loaded meta data with 26 key-value pairs and 579 tensors from /root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Qwen 14B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv 4: general.size_label str = 14B
llama_model_loader: - kv 5: qwen2.block_count u32 = 48
llama_model_loader: - kv 6: qwen2.context_length u32 = 131072
llama_model_loader: - kv 7: qwen2.embedding_length u32 = 5120
llama_model_loader: - kv 8: qwen2.feed_forward_length u32 = 13824
llama_model_loader: - kv 9: qwen2.attention.head_count u32 = 40
llama_model_loader: - kv 10: qwen2.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 12: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: general.file_type u32 = 15
llama_model_loader: - kv 14: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 15: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 151646
llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151643
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 24: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 25: general.quantization_version u32 = 2
llama_model_loader: - type f32: 241 tensors
llama_model_loader: - type q4_K: 289 tensors
llama_model_loader: - type q6_K: 49 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 8.37 GiB (4.87 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch = qwen2
print_info: vocab_only = 0
print_info: n_ctx_train = 131072
print_info: n_embd = 5120
print_info: n_layer = 48
print_info: n_head = 40
print_info: n_head_kv = 8
print_info: n_rot = 128
print_info: n_swa = 0
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 5
print_info: n_embd_k_gqa = 1024
print_info: n_embd_v_gqa = 1024
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-05
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: n_ff = 13824
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 2
print_info: rope scaling = linear
print_info: freq_base_train = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 131072
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 0
print_info: ssm_d_inner = 0
print_info: ssm_d_state = 0
print_info: ssm_dt_rank = 0
print_info: ssm_dt_b_c_rms = 0
print_info: model type = 14B
print_info: model params = 14.77 B
print_info: general.name = DeepSeek R1 Distill Qwen 14B
print_info: vocab type = BPE
print_info: n_vocab = 152064
print_info: n_merges = 151387
print_info: BOS token = 151646 '<|begin▁of▁sentence|>'
print_info: EOS token = 151643 '<|end▁of▁sentence|>'
print_info: EOT token = 151643 '<|end▁of▁sentence|>'
print_info: PAD token = 151643 '<|end▁of▁sentence|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|end▁of▁sentence|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
time=2025-03-13T10:32:35.500+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server loading model"
load_tensors: CPU_Mapped model buffer size = 8566.04 MiB
llama_init_from_model: n_seq_max = 4
llama_init_from_model: n_ctx = 8192
llama_init_from_model: n_ctx_per_seq = 2048
llama_init_from_model: n_batch = 2048
llama_init_from_model: n_ubatch = 512
llama_init_from_model: flash_attn = 0
llama_init_from_model: freq_base = 1000000.0
llama_init_from_model: freq_scale = 1
llama_init_from_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 48, can_shift = 1
llama_kv_cache_init: CPU KV buffer size = 1536.00 MiB
llama_init_from_model: KV self size = 1536.00 MiB, K (f16): 768.00 MiB, V (f16): 768.00 MiB
llama_init_from_model: CPU output buffer size = 2.40 MiB
llama_init_from_model: CPU compute buffer size = 696.01 MiB
llama_init_from_model: graph nodes = 1686
llama_init_from_model: graph splits = 1
time=2025-03-13T10:32:37.004+08:00 level=INFO source=server.go:624 msg="llama runner started in 1.76 seconds"
[GIN] 2025/03/13 - 10:32:37 | 200 | 2.23785455s | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/03/13 - 10:33:13 | 200 | 18.663991257s | 127.0.0.1 | POST "/api/chat"

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-12 17:51:18 -05:00

@liaoyu-qing commented on GitHub (Mar 13, 2025):

DEBUG log:

root@sdt-B550MXC-PRO:/home/sdt# ollama serve
2025/03/13 10:51:52 routes.go:1225: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-03-13T10:51:52.727+08:00 level=INFO source=images.go:432 msg="total blobs: 5"
time=2025-03-13T10:51:52.727+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-03-13T10:51:52.727+08:00 level=INFO source=routes.go:1292 msg="Listening on 127.0.0.1:11434 (version 0.6.0)"
time=2025-03-13T10:51:52.728+08:00 level=DEBUG source=sched.go:106 msg="starting llm scheduler"
time=2025-03-13T10:51:52.728+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-03-13T10:51:52.752+08:00 level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-03-13T10:51:52.752+08:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcuda.so*
time=2025-03-13T10:51:52.752+08:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/usr/local/lib/ollama/libcuda.so* /usr/local/cuda/lib64/libcuda.so* /home/sdt/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2025-03-13T10:51:52.759+08:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths="[/usr/lib/x86_64-linux-gnu/libcuda.so.570.124.04 /usr/lib32/libcuda.so.570.124.04]"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.124.04
dlsym: cuInit - 0x7f2dfbe76e60
dlsym: cuDriverGetVersion - 0x7f2dfbe76e80
dlsym: cuDeviceGetCount - 0x7f2dfbe76ec0
dlsym: cuDeviceGet - 0x7f2dfbe76ea0
dlsym: cuDeviceGetAttribute - 0x7f2dfbe76fa0
dlsym: cuDeviceGetUuid - 0x7f2dfbe76f00
dlsym: cuDeviceGetName - 0x7f2dfbe76ee0
dlsym: cuCtxCreate_v3 - 0x7f2dfbe77180
dlsym: cuMemGetInfo_v2 - 0x7f2dfbe77900
dlsym: cuCtxDestroy - 0x7f2dfbed5a80
calling cuInit
calling cuDriverGetVersion
raw version 0x2f30
CUDA driver version: 12.8
calling cuDeviceGetCount
device count 1
time=2025-03-13T10:51:52.846+08:00 level=DEBUG source=gpu.go:125 msg="detected GPUs" count=1 library=/usr/lib/x86_64-linux-gnu/libcuda.so.570.124.04
[GPU-a2d60931-8dc3-3728-0b58-ab97a460bc76] CUDA totalMem 11882 mb
[GPU-a2d60931-8dc3-3728-0b58-ab97a460bc76] CUDA freeMem 11171 mb
[GPU-a2d60931-8dc3-3728-0b58-ab97a460bc76] Compute Capability 8.9
time=2025-03-13T10:51:52.984+08:00 level=DEBUG source=amd_linux.go:419 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
time=2025-03-13T10:51:52.984+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-a2d60931-8dc3-3728-0b58-ab97a460bc76 library=cuda variant=v12 compute=8.9 driver=12.8 name="NVIDIA GeForce RTX 4070" total="11.6 GiB" available="10.9 GiB"
time=2025-03-13T10:52:05.304+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="13.4 GiB" before.free_swap="1.8 GiB" now.total="31.3 GiB" now.free="13.3 GiB" now.free_swap="1.8 GiB"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.124.04
dlsym: cuInit - 0x7f2dfbe76e60
dlsym: cuDriverGetVersion - 0x7f2dfbe76e80
dlsym: cuDeviceGetCount - 0x7f2dfbe76ec0
dlsym: cuDeviceGet - 0x7f2dfbe76ea0
dlsym: cuDeviceGetAttribute - 0x7f2dfbe76fa0
dlsym: cuDeviceGetUuid - 0x7f2dfbe76f00
dlsym: cuDeviceGetName - 0x7f2dfbe76ee0
dlsym: cuCtxCreate_v3 - 0x7f2dfbe77180
dlsym: cuMemGetInfo_v2 - 0x7f2dfbe77900
dlsym: cuCtxDestroy - 0x7f2dfbed5a80
calling cuInit
calling cuDriverGetVersion
raw version 0x2f30
CUDA driver version: 12.8
calling cuDeviceGetCount
device count 1
time=2025-03-13T10:52:05.453+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-a2d60931-8dc3-3728-0b58-ab97a460bc76 name="NVIDIA GeForce RTX 4070" overhead="0 B" before.total="11.6 GiB" before.free="10.9 GiB" now.total="11.6 GiB" now.free="10.9 GiB" now.used="719.2 MiB"
releasing cuda driver library
time=2025-03-13T10:52:05.453+08:00 level=DEBUG source=sched.go:182 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
time=2025-03-13T10:52:05.476+08:00 level=DEBUG source=sched.go:225 msg="loading first model" model=/root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e
time=2025-03-13T10:52:05.476+08:00 level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[10.9 GiB]"
time=2025-03-13T10:52:05.476+08:00 level=WARN source=ggml.go:149 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-03-13T10:52:05.476+08:00 level=WARN source=ggml.go:149 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-03-13T10:52:05.477+08:00 level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[10.9 GiB]"
time=2025-03-13T10:52:05.477+08:00 level=WARN source=ggml.go:149 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-03-13T10:52:05.477+08:00 level=WARN source=ggml.go:149 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-03-13T10:52:05.477+08:00 level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e gpu=GPU-a2d60931-8dc3-3728-0b58-ab97a460bc76 parallel=1 available=11705319424 required="9.2 GiB"
time=2025-03-13T10:52:05.477+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="13.3 GiB" before.free_swap="1.8 GiB" now.total="31.3 GiB" now.free="13.3 GiB" now.free_swap="1.8 GiB"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.124.04
dlsym: cuInit - 0x7f2dfbe76e60
dlsym: cuDriverGetVersion - 0x7f2dfbe76e80
dlsym: cuDeviceGetCount - 0x7f2dfbe76ec0
dlsym: cuDeviceGet - 0x7f2dfbe76ea0
dlsym: cuDeviceGetAttribute - 0x7f2dfbe76fa0
dlsym: cuDeviceGetUuid - 0x7f2dfbe76f00
dlsym: cuDeviceGetName - 0x7f2dfbe76ee0
dlsym: cuCtxCreate_v3 - 0x7f2dfbe77180
dlsym: cuMemGetInfo_v2 - 0x7f2dfbe77900
dlsym: cuCtxDestroy - 0x7f2dfbed5a80
calling cuInit
calling cuDriverGetVersion
raw version 0x2f30
CUDA driver version: 12.8
calling cuDeviceGetCount
device count 1
time=2025-03-13T10:52:05.618+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-a2d60931-8dc3-3728-0b58-ab97a460bc76 name="NVIDIA GeForce RTX 4070" overhead="0 B" before.total="11.6 GiB" before.free="10.9 GiB" now.total="11.6 GiB" now.free="10.9 GiB" now.used="711.2 MiB"
releasing cuda driver library
time=2025-03-13T10:52:05.618+08:00 level=INFO source=server.go:105 msg="system memory" total="31.3 GiB" free="13.3 GiB" free_swap="1.8 GiB"
time=2025-03-13T10:52:05.618+08:00 level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[10.9 GiB]"
time=2025-03-13T10:52:05.618+08:00 level=WARN source=ggml.go:149 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-03-13T10:52:05.618+08:00 level=WARN source=ggml.go:149 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-03-13T10:52:05.618+08:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[10.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="9.2 GiB" memory.required.partial="9.2 GiB" memory.required.kv="384.0 MiB" memory.required.allocations="[9.2 GiB]" memory.weights.total="7.7 GiB" memory.weights.repeating="7.1 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="307.0 MiB" memory.graph.partial="916.1 MiB"
time=2025-03-13T10:52:05.618+08:00 level=DEBUG source=server.go:262 msg="compatible gpu libraries" compatible=[]
llama_model_loader: loaded meta data with 26 key-value pairs and 579 tensors from /root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Qwen 14B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv 4: general.size_label str = 14B
llama_model_loader: - kv 5: qwen2.block_count u32 = 48
llama_model_loader: - kv 6: qwen2.context_length u32 = 131072
llama_model_loader: - kv 7: qwen2.embedding_length u32 = 5120
llama_model_loader: - kv 8: qwen2.feed_forward_length u32 = 13824
llama_model_loader: - kv 9: qwen2.attention.head_count u32 = 40
llama_model_loader: - kv 10: qwen2.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 12: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: general.file_type u32 = 15
llama_model_loader: - kv 14: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 15: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 151646
llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151643
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 24: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 25: general.quantization_version u32 = 2
llama_model_loader: - type f32: 241 tensors
llama_model_loader: - type q4_K: 289 tensors
llama_model_loader: - type q6_K: 49 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 8.37 GiB (4.87 BPW)
init_tokenizer: initializing tokenizer for type 2
load: control token: 151660 '<|fim_middle|>' is not marked as EOG
load: control token: 151659 '<|fim_prefix|>' is not marked as EOG
load: control token: 151653 '<|vision_end|>' is not marked as EOG
load: control token: 151645 '<|Assistant|>' is not marked as EOG
load: control token: 151644 '<|User|>' is not marked as EOG
load: control token: 151655 '<|image_pad|>' is not marked as EOG
load: control token: 151651 '<|quad_end|>' is not marked as EOG
load: control token: 151646 '<|begin▁of▁sentence|>' is not marked as EOG
load: control token: 151643 '<|end▁of▁sentence|>' is not marked as EOG
load: control token: 151652 '<|vision_start|>' is not marked as EOG
load: control token: 151647 '<|EOT|>' is not marked as EOG
load: control token: 151654 '<|vision_pad|>' is not marked as EOG
load: control token: 151656 '<|video_pad|>' is not marked as EOG
load: control token: 151661 '<|fim_suffix|>' is not marked as EOG
load: control token: 151650 '<|quad_start|>' is not marked as EOG
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch = qwen2
print_info: vocab_only = 1
print_info: model type = ?B
print_info: model params = 14.77 B
print_info: general.name = DeepSeek R1 Distill Qwen 14B
print_info: vocab type = BPE
print_info: n_vocab = 152064
print_info: n_merges = 151387
print_info: BOS token = 151646 '<|begin▁of▁sentence|>'
print_info: EOS token = 151643 '<|end▁of▁sentence|>'
print_info: EOT token = 151643 '<|end▁of▁sentence|>'
print_info: PAD token = 151643 '<|end▁of▁sentence|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|end▁of▁sentence|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-03-13T10:52:05.797+08:00 level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e --ctx-size 2048 --batch-size 512 --n-gpu-layers 49 --verbose --threads 8 --parallel 1 --port 45501"
time=2025-03-13T10:52:05.797+08:00 level=DEBUG source=server.go:423 msg=subprocess environment="[LD_LIBRARY_PATH=/usr/local/cuda/lib64::/usr/local/lib/ollama PATH=/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin CUDA_VISIBLE_DEVICES=GPU-a2d60931-8dc3-3728-0b58-ab97a460bc76]"
time=2025-03-13T10:52:05.797+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-13T10:52:05.797+08:00 level=INFO source=server.go:585 msg="waiting for llama runner to start responding"
time=2025-03-13T10:52:05.798+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error"
time=2025-03-13T10:52:05.810+08:00 level=INFO source=runner.go:931 msg="starting go runner"
time=2025-03-13T10:52:05.810+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=/usr/local/cuda/lib64
time=2025-03-13T10:52:05.810+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=/home/sdt
time=2025-03-13T10:52:05.810+08:00 level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=/usr/local/lib/ollama
time=2025-03-13T10:52:05.810+08:00 level=INFO source=ggml.go:109 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-03-13T10:52:05.835+08:00 level=INFO source=runner.go:991 msg="Server listening on 127.0.0.1:45501"
llama_model_loader: loaded meta data with 26 key-value pairs and 579 tensors from /root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Qwen 14B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv 4: general.size_label str = 14B
llama_model_loader: - kv 5: qwen2.block_count u32 = 48
llama_model_loader: - kv 6: qwen2.context_length u32 = 131072
llama_model_loader: - kv 7: qwen2.embedding_length u32 = 5120
llama_model_loader: - kv 8: qwen2.feed_forward_length u32 = 13824
llama_model_loader: - kv 9: qwen2.attention.head_count u32 = 40
llama_model_loader: - kv 10: qwen2.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 12: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: general.file_type u32 = 15
llama_model_loader: - kv 14: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 15: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 151646
llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151643
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 24: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 25: general.quantization_version u32 = 2
llama_model_loader: - type f32: 241 tensors
llama_model_loader: - type q4_K: 289 tensors
llama_model_loader: - type q6_K: 49 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 8.37 GiB (4.87 BPW)
init_tokenizer: initializing tokenizer for type 2
load: control token: 151660 '<|fim_middle|>' is not marked as EOG
load: control token: 151659 '<|fim_prefix|>' is not marked as EOG
load: control token: 151653 '<|vision_end|>' is not marked as EOG
load: control token: 151645 '<|Assistant|>' is not marked as EOG
load: control token: 151644 '<|User|>' is not marked as EOG
load: control token: 151655 '<|image_pad|>' is not marked as EOG
load: control token: 151651 '<|quad_end|>' is not marked as EOG
load: control token: 151646 '<|begin▁of▁sentence|>' is not marked as EOG
load: control token: 151643 '<|end▁of▁sentence|>' is not marked as EOG
load: control token: 151652 '<|vision_start|>' is not marked as EOG
load: control token: 151647 '<|EOT|>' is not marked as EOG
load: control token: 151654 '<|vision_pad|>' is not marked as EOG
load: control token: 151656 '<|video_pad|>' is not marked as EOG
load: control token: 151661 '<|fim_suffix|>' is not marked as EOG
load: control token: 151650 '<|quad_start|>' is not marked as EOG
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch = qwen2
print_info: vocab_only = 0
print_info: n_ctx_train = 131072
print_info: n_embd = 5120
print_info: n_layer = 48
print_info: n_head = 40
print_info: n_head_kv = 8
print_info: n_rot = 128
print_info: n_swa = 0
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 5
print_info: n_embd_k_gqa = 1024
print_info: n_embd_v_gqa = 1024
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-05
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: n_ff = 13824
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 2
print_info: rope scaling = linear
print_info: freq_base_train = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 131072
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 0
print_info: ssm_d_inner = 0
print_info: ssm_d_state = 0
print_info: ssm_dt_rank = 0
print_info: ssm_dt_b_c_rms = 0
print_info: model type = 14B
print_info: model params = 14.77 B
print_info: general.name = DeepSeek R1 Distill Qwen 14B
print_info: vocab type = BPE
print_info: n_vocab = 152064
print_info: n_merges = 151387
print_info: BOS token = 151646 '<|begin▁of▁sentence|>'
print_info: EOS token = 151643 '<|end▁of▁sentence|>'
print_info: EOT token = 151643 '<|end▁of▁sentence|>'
print_info: PAD token = 151643 '<|end▁of▁sentence|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|end▁of▁sentence|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: layer 0 assigned to device CPU
load_tensors: layer 1 assigned to device CPU
load_tensors: layer 2 assigned to device CPU
load_tensors: layer 3 assigned to device CPU
load_tensors: layer 4 assigned to device CPU
load_tensors: layer 5 assigned to device CPU
load_tensors: layer 6 assigned to device CPU
load_tensors: layer 7 assigned to device CPU
load_tensors: layer 8 assigned to device CPU
load_tensors: layer 9 assigned to device CPU
load_tensors: layer 10 assigned to device CPU
load_tensors: layer 11 assigned to device CPU
load_tensors: layer 12 assigned to device CPU
load_tensors: layer 13 assigned to device CPU
load_tensors: layer 14 assigned to device CPU
load_tensors: layer 15 assigned to device CPU
load_tensors: layer 16 assigned to device CPU
load_tensors: layer 17 assigned to device CPU
load_tensors: layer 18 assigned to device CPU
load_tensors: layer 19 assigned to device CPU
load_tensors: layer 20 assigned to device CPU
load_tensors: layer 21 assigned to device CPU
load_tensors: layer 22 assigned to device CPU
load_tensors: layer 23 assigned to device CPU
load_tensors: layer 24 assigned to device CPU
load_tensors: layer 25 assigned to device CPU
load_tensors: layer 26 assigned to device CPU
load_tensors: layer 27 assigned to device CPU
load_tensors: layer 28 assigned to device CPU
load_tensors: layer 29 assigned to device CPU
load_tensors: layer 30 assigned to device CPU
load_tensors: layer 31 assigned to device CPU
load_tensors: layer 32 assigned to device CPU
load_tensors: layer 33 assigned to device CPU
load_tensors: layer 34 assigned to device CPU
load_tensors: layer 35 assigned to device CPU
load_tensors: layer 36 assigned to device CPU
load_tensors: layer 37 assigned to device CPU
load_tensors: layer 38 assigned to device CPU
load_tensors: layer 39 assigned to device CPU
load_tensors: layer 40 assigned to device CPU
load_tensors: layer 41 assigned to device CPU
load_tensors: layer 42 assigned to device CPU
load_tensors: layer 43 assigned to device CPU
load_tensors: layer 44 assigned to device CPU
load_tensors: layer 45 assigned to device CPU
load_tensors: layer 46 assigned to device CPU
load_tensors: layer 47 assigned to device CPU
load_tensors: layer 48 assigned to device CPU
time=2025-03-13T10:52:06.049+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server loading model"
load_tensors: CPU_Mapped model buffer size = 8566.04 MiB
llama_init_from_model: n_seq_max = 1
llama_init_from_model: n_ctx = 2048
llama_init_from_model: n_ctx_per_seq = 2048
llama_init_from_model: n_batch = 512
llama_init_from_model: n_ubatch = 512
llama_init_from_model: flash_attn = 0
llama_init_from_model: freq_base = 1000000.0
llama_init_from_model: freq_scale = 1
llama_init_from_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 2048, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 48, can_shift = 1
llama_kv_cache_init: layer 0: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 1: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 2: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 3: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 4: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 5: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 6: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 7: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 8: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 9: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 10: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 11: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 12: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 13: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 14: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 15: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 16: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 17: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 18: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 19: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 20: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 21: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 22: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 23: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 24: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 25: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 26: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 27: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 28: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 29: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 30: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 31: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 32: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 33: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 34: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 35: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 36: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 37: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 38: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 39: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 40: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 41: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 42: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 43: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 44: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 45: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 46: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 47: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
time=2025-03-13T10:52:06.550+08:00 level=DEBUG source=server.go:630 msg="model load progress 1.00"
llama_kv_cache_init: CPU KV buffer size = 384.00 MiB
llama_init_from_model: KV self size = 384.00 MiB, K (f16): 192.00 MiB, V (f16): 192.00 MiB
llama_init_from_model: CPU output buffer size = 0.60 MiB
llama_init_from_model: CPU compute buffer size = 307.00 MiB
llama_init_from_model: graph nodes = 1686
llama_init_from_model: graph splits = 1
time=2025-03-13T10:52:06.801+08:00 level=INFO source=server.go:624 msg="llama runner started in 1.00 seconds"
time=2025-03-13T10:52:06.801+08:00 level=DEBUG source=sched.go:463 msg="finished setting up runner" model=/root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e
time=2025-03-13T10:52:06.817+08:00 level=DEBUG source=routes.go:1516 msg="chat request" images=0 prompt="<|User|>你好<|Assistant|><think>\n\n</think>\n\n你好!很高兴见到你,有什么我可以帮忙的吗?<|end▁of▁sentence|><|User|>你好<|Assistant|><think>\n今天,我收到一条用户的消息:“你好”。看起来和之前的“你好”一样。我要分析一下为什么用户会再次发送相同的问候。\n\n首先,用户之前已经发过一次“你好”,我回应了问候并询问是否有帮助的需求。现在用户又发了一次“你好”,可能有几种情况:\n\n1. **重复问候**:有些用户可能会多次发送同样的问候,可能是习惯性动作或者是想确认对方是否在线。\n\n2. **测试反应**:用户可能在测试我的反应,看看我会不会每次都回复相同的问候。\n\n3. **开始对话**:有时候,用户可能只是想通过再次问候来开启新的对话,或者表达某种情感。\n\n考虑到这些可能性,我需要决定如何回应。保持友好和开放是关键,所以我应该再次欢迎用户,并询问是否有具体的问题或帮助需求。这样不仅回应了问候,还引导用户进一步交流。\n\n同时,我要确保我的回复简洁明了,不会让用户感到被忽视或不耐烦。因此,我会用中文表示欢迎,并提供进一步的帮助。\n</think>\n\n你好!很高兴见到你,有什么我可以帮忙的吗?<|end▁of▁sentence|><|User|>你好<|Assistant|>"
time=2025-03-13T10:52:06.825+08:00 level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=257 used=0 remaining=257

@Jay021 commented on GitHub (Mar 16, 2025):

When I run `ollama ps`, it reports 100% GPU usage, but while the model is actually generating, the CPU runs at full speed, nvidia-smi shows 0% GPU utilization, and no VRAM is allocated. I tried uninstalling and reinstalling Ollama, but the problem persisted, until I came across this post:

#9266
by https://github.com/Hsq12138

I added the following directory to the PATH environment variable (replace `<YourUsername>` with your actual Windows username):
`C:\Users\<YourUsername>\AppData\Local\Programs\Ollama\lib\ollama`

This completely resolved the issue; you can give it a try.
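
For reference, a minimal sketch of that workaround, assuming the default per-user install location quoted above (run in PowerShell as the user that runs Ollama; the $ollamaLib variable name is only for illustration, and the path may differ on your machine):

# Append Ollama's bundled library directory to the user PATH
$ollamaLib = "$env:LOCALAPPDATA\Programs\Ollama\lib\ollama"
[Environment]::SetEnvironmentVariable("Path", [Environment]::GetEnvironmentVariable("Path", "User") + ";" + $ollamaLib, "User")

# Open a new terminal (or restart Ollama) so the updated PATH takes effect, then check whether the GPU is actually used:
ollama ps
nvidia-smi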

Reference: github-starred/ollama#6341