[GH-ISSUE #9248] When running DeepSeek with Ollama, model crashes occur #52538

Closed
opened 2026-04-28 23:37:39 -05:00 by GiteaMirror · 9 comments

Originally created by @mjdp168 on GitHub (Feb 20, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9248

What is the issue?

During the execution of DeepSeek, the following error occurred:
Error: An error was encountered while running the model: read tcp 127.0.0.1:55564->127.0.0.1:54784: wsarecv: An existing connection was forcibly closed by the remote host.
I also observed that memory usage dropped to 0, which indicates the model crashed. Could you suggest possible solutions?
Thanks.

Relevant log output

time=2025-02-20T19:36:37.412+08:00 level=INFO source=logging.go:50 msg="ollama app started"
time=2025-02-20T19:36:37.424+08:00 level=INFO source=lifecycle.go:19 msg="app config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\mj\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-02-20T19:36:37.455+08:00 level=INFO source=server.go:182 msg="unable to connect to server"
time=2025-02-20T19:36:37.455+08:00 level=INFO source=server.go:141 msg="starting server..."
time=2025-02-20T19:36:37.461+08:00 level=INFO source=server.go:127 msg="started ollama server with pid 16424"
time=2025-02-20T19:36:37.461+08:00 level=INFO source=server.go:129 msg="ollama server logs C:\\Users\\mj\\AppData\\Local\\Ollama\\server.log"

OS

Windows

GPU

AMD

CPU

AMD

Ollama version

0.5.11

GiteaMirror added the bug label 2026-04-28 23:37:39 -05:00

@rick-github commented on GitHub (Feb 20, 2025):

More logs would show the problem, but if it's not a distilled deepseek, it's probably https://github.com/ollama/ollama/issues/5975

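The app log in the issue body only covers startup; the runner's crash output lands in the server log whose path the app prints on launch (C:\Users\mj\AppData\Local\Ollama\server.log above). A minimal sketch for pulling the relevant lines out of it on Windows, assuming the default log location:

```shell
REM Search the server log for the K-shift message and error lines (cmd.exe).
REM %LOCALAPPDATA% resolves to C:\Users\<user>\AppData\Local by default.
findstr /C:"K-shift" /C:"error" "%LOCALAPPDATA%\Ollama\server.log"
```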

@mjdp168 commented on GitHub (Feb 20, 2025):

2025/02/20 19:36:37 routes.go:1186: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\mj\.ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-02-20T19:36:37.533+08:00 level=INFO source=images.go:432 msg="total blobs: 7"
time=2025-02-20T19:36:37.534+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-20T19:36:37.534+08:00 level=INFO source=routes.go:1237 msg="Listening on [::]:11434 (version 0.5.11)"
time=2025-02-20T19:36:37.534+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-02-20T19:36:37.535+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-02-20T19:36:37.535+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=64 efficiency=0 threads=64
time=2025-02-20T19:36:37.547+08:00 level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
time=2025-02-20T19:36:37.547+08:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="511.9 GiB" available="496.6 GiB"
[GIN] 2025/02/20 - 19:36:37 | 200 | 724.5µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/02/20 - 19:36:37 | 200 | 19.1974ms | 127.0.0.1 | POST "/api/show"
time=2025-02-20T19:36:37.957+08:00 level=INFO source=server.go:100 msg="system memory" total="511.9 GiB" free="496.6 GiB" free_swap="528.0 GiB"
time=2025-02-20T19:36:37.959+08:00 level=INFO source=memory.go:356 msg="offload to cpu" layers.requested=-1 layers.model=62 layers.offload=0 layers.split="" memory.available="[496.6 GiB]" memory.gpu_overhead="0 B" memory.required.full="417.4 GiB" memory.required.partial="0 B" memory.required.kv="38.1 GiB" memory.required.allocations="[417.4 GiB]" memory.weights.total="413.6 GiB" memory.weights.repeating="412.9 GiB" memory.weights.nonrepeating="725.0 MiB" memory.graph.full="2.2 GiB" memory.graph.partial="3.0 GiB"
time=2025-02-20T19:36:37.967+08:00 level=INFO source=server.go:380 msg="starting llama server" cmd="C:\Users\mj\AppData\Local\Programs\Ollama\ollama.exe runner --model C:\Users\mj\.ollama\models\blobs\sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 --ctx-size 8192 --batch-size 512 --threads 64 --no-mmap --parallel 4 --port 50738"
time=2025-02-20T19:36:37.972+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-20T19:36:37.972+08:00 level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
time=2025-02-20T19:36:37.973+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
time=2025-02-20T19:36:38.004+08:00 level=INFO source=runner.go:936 msg="starting go runner"
time=2025-02-20T19:36:38.012+08:00 level=INFO source=runner.go:937 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | cgo(clang)" threads=64
time=2025-02-20T19:36:38.013+08:00 level=INFO source=runner.go:995 msg="Server listening on 127.0.0.1:50738"
load_backend: loaded CPU backend from C:\Users\mj\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
llama_model_loader: loaded meta data with 42 key-value pairs and 1025 tensors from C:\Users\mj\.ollama\models\blobs\sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = deepseek2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.size_label str = 256x20B
llama_model_loader: - kv 3: deepseek2.block_count u32 = 61
llama_model_loader: - kv 4: deepseek2.context_length u32 = 163840
llama_model_loader: - kv 5: deepseek2.embedding_length u32 = 7168
llama_model_loader: - kv 6: deepseek2.feed_forward_length u32 = 18432
llama_model_loader: - kv 7: deepseek2.attention.head_count u32 = 128
llama_model_loader: - kv 8: deepseek2.attention.head_count_kv u32 = 128
llama_model_loader: - kv 9: deepseek2.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 10: deepseek2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 11: deepseek2.expert_used_count u32 = 8
llama_model_loader: - kv 12: deepseek2.leading_dense_block_count u32 = 3
llama_model_loader: - kv 13: deepseek2.vocab_size u32 = 129280
llama_model_loader: - kv 14: deepseek2.attention.q_lora_rank u32 = 1536
llama_model_loader: - kv 15: deepseek2.attention.kv_lora_rank u32 = 512
llama_model_loader: - kv 16: deepseek2.attention.key_length u32 = 192
llama_model_loader: - kv 17: deepseek2.attention.value_length u32 = 128
llama_model_loader: - kv 18: deepseek2.expert_feed_forward_length u32 = 2048
llama_model_loader: - kv 19: deepseek2.expert_count u32 = 256
llama_model_loader: - kv 20: deepseek2.expert_shared_count u32 = 1
llama_model_loader: - kv 21: deepseek2.expert_weights_scale f32 = 2.500000
llama_model_loader: - kv 22: deepseek2.expert_weights_norm bool = true
llama_model_loader: - kv 23: deepseek2.expert_gating_func u32 = 2
llama_model_loader: - kv 24: deepseek2.rope.dimension_count u32 = 64
llama_model_loader: - kv 25: deepseek2.rope.scaling.type str = yarn
llama_model_loader: - kv 26: deepseek2.rope.scaling.factor f32 = 40.000000
llama_model_loader: - kv 27: deepseek2.rope.scaling.original_context_length u32 = 4096
llama_model_loader: - kv 28: deepseek2.rope.scaling.yarn_log_multiplier f32 = 0.100000
llama_model_loader: - kv 29: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 30: tokenizer.ggml.pre str = deepseek-v3
llama_model_loader: - kv 31: tokenizer.ggml.tokens arr[str,129280] = ["<|begin▁of▁sentence|>", "< ...
llama_model_loader: - kv 32: tokenizer.ggml.token_type arr[i32,129280] = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 33: tokenizer.ggml.merges arr[str,127741] = ["Ġ t", "Ġ a", "i n", "Ġ Ġ", "h e...
llama_model_loader: - kv 34: tokenizer.ggml.bos_token_id u32 = 0
llama_model_loader: - kv 35: tokenizer.ggml.eos_token_id u32 = 1
llama_model_loader: - kv 36: tokenizer.ggml.padding_token_id u32 = 1
llama_model_loader: - kv 37: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 38: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 39: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 40: general.quantization_version u32 = 2
llama_model_loader: - kv 41: general.file_type u32 = 15
llama_model_loader: - type f32: 361 tensors
llama_model_loader: - type q4_K: 606 tensors
llama_model_loader: - type q6_K: 58 tensors
time=2025-02-20T19:36:38.224+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 818
llm_load_vocab: token to piece cache size = 0.8223 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = deepseek2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 129280
llm_load_print_meta: n_merges = 127741
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 163840
llm_load_print_meta: n_embd = 7168
llm_load_print_meta: n_layer = 61
llm_load_print_meta: n_head = 128
llm_load_print_meta: n_head_kv = 128
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 192
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 24576
llm_load_print_meta: n_embd_v_gqa = 16384
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 18432
llm_load_print_meta: n_expert = 256
llm_load_print_meta: n_expert_used = 8
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = yarn
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 0.025
llm_load_print_meta: n_ctx_orig_yarn = 4096
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 671B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 671.03 B
llm_load_print_meta: model size = 376.65 GiB (4.82 BPW)
llm_load_print_meta: general.name = n/a
llm_load_print_meta: BOS token = 0 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: PAD token = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 131 'Ä'
llm_load_print_meta: FIM PRE token = 128801 '<|fim▁begin|>'
llm_load_print_meta: FIM SUF token = 128800 '<|fim▁hole|>'
llm_load_print_meta: FIM MID token = 128802 '<|fim▁end|>'
llm_load_print_meta: EOG token = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: max token length = 256
llm_load_print_meta: n_layer_dense_lead = 3
llm_load_print_meta: n_lora_q = 1536
llm_load_print_meta: n_lora_kv = 512
llm_load_print_meta: n_ff_exp = 2048
llm_load_print_meta: n_expert_shared = 1
llm_load_print_meta: expert_weights_scale = 2.5
llm_load_print_meta: expert_weights_norm = 1
llm_load_print_meta: expert_gating_func = sigmoid
llm_load_print_meta: rope_yarn_log_mul = 0.1000
llm_load_tensors: CPU model buffer size = 385689.62 MiB
[GIN] 2025/02/20 - 19:37:21 | 200 | 9.9941ms | 192.168.192.82 | GET "/api/tags"
[GIN] 2025/02/20 - 19:37:23 | 200 | 2.0603ms | 192.168.192.82 | GET "/api/tags"
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 0.025
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (163840) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 61, can_shift = 0
llama_kv_cache_init: CPU KV buffer size = 39040.00 MiB
llama_new_context_with_model: KV self size = 39040.00 MiB, K (f16): 23424.00 MiB, V (f16): 15616.00 MiB
llama_new_context_with_model: CPU output buffer size = 2.08 MiB
llama_new_context_with_model: CPU compute buffer size = 2218.01 MiB
llama_new_context_with_model: graph nodes = 5025
llama_new_context_with_model: graph splits = 1
time=2025-02-20T19:42:05.348+08:00 level=INFO source=server.go:596 msg="llama runner started in 327.38 seconds"
[GIN] 2025/02/20 - 19:42:05 | 200 | 5m27s | 127.0.0.1 | POST "/api/generate"
llama.cpp:11942: The current context does not support K-shift
[GIN] 2025/02/20 - 20:03:44 | 200 | 20m58s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/02/20 - 20:12:27 | 200 | 536.8µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/02/20 - 20:12:27 | 200 | 19.9133ms | 127.0.0.1 | POST "/api/show"
time=2025-02-20T20:12:27.344+08:00 level=INFO source=server.go:100 msg="system memory" total="511.9 GiB" free="496.6 GiB" free_swap="527.6 GiB"
time=2025-02-20T20:12:27.346+08:00 level=INFO source=memory.go:356 msg="offload to cpu" layers.requested=-1 layers.model=62 layers.offload=0 layers.split="" memory.available="[496.6 GiB]" memory.gpu_overhead="0 B" memory.required.full="417.4 GiB" memory.required.partial="0 B" memory.required.kv="38.1 GiB" memory.required.allocations="[417.4 GiB]" memory.weights.total="413.6 GiB" memory.weights.repeating="412.9 GiB" memory.weights.nonrepeating="725.0 MiB" memory.graph.full="2.2 GiB" memory.graph.partial="3.0 GiB"
time=2025-02-20T20:12:27.351+08:00 level=INFO source=server.go:380 msg="starting llama server" cmd="C:\Users\mj\AppData\Local\Programs\Ollama\ollama.exe runner --model C:\Users\mj\.ollama\models\blobs\sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 --ctx-size 8192 --batch-size 512 --threads 64 --no-mmap --parallel 4 --port 52444"
time=2025-02-20T20:12:27.355+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-20T20:12:27.355+08:00 level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
time=2025-02-20T20:12:27.356+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
time=2025-02-20T20:12:27.385+08:00 level=INFO source=runner.go:936 msg="starting go runner"
time=2025-02-20T20:12:27.395+08:00 level=INFO source=runner.go:937 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | cgo(clang)" threads=64
time=2025-02-20T20:12:27.396+08:00 level=INFO source=runner.go:995 msg="Server listening on 127.0.0.1:52444"
load_backend: loaded CPU backend from C:\Users\mj\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
llama_model_loader: loaded meta data with 42 key-value pairs and 1025 tensors from C:\Users\mj\.ollama\models\blobs\sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = deepseek2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.size_label str = 256x20B
llama_model_loader: - kv 3: deepseek2.block_count u32 = 61
llama_model_loader: - kv 4: deepseek2.context_length u32 = 163840
llama_model_loader: - kv 5: deepseek2.embedding_length u32 = 7168
llama_model_loader: - kv 6: deepseek2.feed_forward_length u32 = 18432
llama_model_loader: - kv 7: deepseek2.attention.head_count u32 = 128
llama_model_loader: - kv 8: deepseek2.attention.head_count_kv u32 = 128
llama_model_loader: - kv 9: deepseek2.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 10: deepseek2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 11: deepseek2.expert_used_count u32 = 8
llama_model_loader: - kv 12: deepseek2.leading_dense_block_count u32 = 3
llama_model_loader: - kv 13: deepseek2.vocab_size u32 = 129280
llama_model_loader: - kv 14: deepseek2.attention.q_lora_rank u32 = 1536
llama_model_loader: - kv 15: deepseek2.attention.kv_lora_rank u32 = 512
llama_model_loader: - kv 16: deepseek2.attention.key_length u32 = 192
llama_model_loader: - kv 17: deepseek2.attention.value_length u32 = 128
llama_model_loader: - kv 18: deepseek2.expert_feed_forward_length u32 = 2048
llama_model_loader: - kv 19: deepseek2.expert_count u32 = 256
llama_model_loader: - kv 20: deepseek2.expert_shared_count u32 = 1
llama_model_loader: - kv 21: deepseek2.expert_weights_scale f32 = 2.500000
llama_model_loader: - kv 22: deepseek2.expert_weights_norm bool = true
llama_model_loader: - kv 23: deepseek2.expert_gating_func u32 = 2
llama_model_loader: - kv 24: deepseek2.rope.dimension_count u32 = 64
llama_model_loader: - kv 25: deepseek2.rope.scaling.type str = yarn
llama_model_loader: - kv 26: deepseek2.rope.scaling.factor f32 = 40.000000
llama_model_loader: - kv 27: deepseek2.rope.scaling.original_context_length u32 = 4096
llama_model_loader: - kv 28: deepseek2.rope.scaling.yarn_log_multiplier f32 = 0.100000
llama_model_loader: - kv 29: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 30: tokenizer.ggml.pre str = deepseek-v3
llama_model_loader: - kv 31: tokenizer.ggml.tokens arr[str,129280] = ["<|begin▁of▁sentence|>", "< ...
llama_model_loader: - kv 32: tokenizer.ggml.token_type arr[i32,129280] = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 33: tokenizer.ggml.merges arr[str,127741] = ["Ġ t", "Ġ a", "i n", "Ġ Ġ", "h e...
llama_model_loader: - kv 34: tokenizer.ggml.bos_token_id u32 = 0
llama_model_loader: - kv 35: tokenizer.ggml.eos_token_id u32 = 1
llama_model_loader: - kv 36: tokenizer.ggml.padding_token_id u32 = 1
llama_model_loader: - kv 37: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 38: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 39: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 40: general.quantization_version u32 = 2
llama_model_loader: - kv 41: general.file_type u32 = 15
llama_model_loader: - type f32: 361 tensors
llama_model_loader: - type q4_K: 606 tensors
llama_model_loader: - type q6_K: 58 tensors
time=2025-02-20T20:12:27.607+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 818
llm_load_vocab: token to piece cache size = 0.8223 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = deepseek2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 129280
llm_load_print_meta: n_merges = 127741
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 163840
llm_load_print_meta: n_embd = 7168
llm_load_print_meta: n_layer = 61
llm_load_print_meta: n_head = 128
llm_load_print_meta: n_head_kv = 128
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 192
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 24576
llm_load_print_meta: n_embd_v_gqa = 16384
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 18432
llm_load_print_meta: n_expert = 256
llm_load_print_meta: n_expert_used = 8
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = yarn
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 0.025
llm_load_print_meta: n_ctx_orig_yarn = 4096
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 671B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 671.03 B
llm_load_print_meta: model size = 376.65 GiB (4.82 BPW)
llm_load_print_meta: general.name = n/a
llm_load_print_meta: BOS token = 0 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: PAD token = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 131 'Ä'
llm_load_print_meta: FIM PRE token = 128801 '<|fim▁begin|>'
llm_load_print_meta: FIM SUF token = 128800 '<|fim▁hole|>'
llm_load_print_meta: FIM MID token = 128802 '<|fim▁end|>'
llm_load_print_meta: EOG token = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: max token length = 256
llm_load_print_meta: n_layer_dense_lead = 3
llm_load_print_meta: n_lora_q = 1536
llm_load_print_meta: n_lora_kv = 512
llm_load_print_meta: n_ff_exp = 2048
llm_load_print_meta: n_expert_shared = 1
llm_load_print_meta: expert_weights_scale = 2.5
llm_load_print_meta: expert_weights_norm = 1
llm_load_print_meta: expert_gating_func = sigmoid
llm_load_print_meta: rope_yarn_log_mul = 0.1000
llm_load_tensors: CPU model buffer size = 385689.62 MiB
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 0.025
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (163840) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 61, can_shift = 0
llama_kv_cache_init: CPU KV buffer size = 39040.00 MiB
llama_new_context_with_model: KV self size = 39040.00 MiB, K (f16): 23424.00 MiB, V (f16): 15616.00 MiB
llama_new_context_with_model: CPU output buffer size = 2.08 MiB
llama_new_context_with_model: CPU compute buffer size = 2218.01 MiB
llama_new_context_with_model: graph nodes = 5025
llama_new_context_with_model: graph splits = 1
time=2025-02-20T20:17:59.268+08:00 level=INFO source=server.go:596 msg="llama runner started in 331.91 seconds"
[GIN] 2025/02/20 - 20:17:59 | 200 | 5m31s | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/02/20 - 20:19:03 | 200 | 4.2787302s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/02/20 - 20:28:19 | 200 | 0s | 127.0.0.1 | GET "/api/version"


@mjdp168 commented on GitHub (Feb 20, 2025):

> More logs will show the problem, but if it's not a distilled deepseek, it's probably #5975

Logs have been attached. Could you kindly assist with this issue?
Thanks.


@rick-github commented on GitHub (Feb 20, 2025):

The logs don't show a failure, but the model is not a distilled deepseek, so it's probably https://github.com/ollama/ollama/issues/5975


@mjdp168 commented on GitHub (Feb 20, 2025):

Thanks, I cry.


@XIONGPEILIN commented on GitHub (Feb 20, 2025):

Is there any other solution that does not modify the model?


@rick-github commented on GitHub (Feb 20, 2025):

Keep (length of input tokens) + (length of output tokens) < (size of context).

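In practice that means either trimming the input or raising the context window. One way to raise it for an interactive session, as a sketch (the model tag is illustrative; note the KV cache grows with num_ctx, and is already ~39 GiB at 8192 in the logs above):

```shell
# Raise the per-session context window from the Ollama REPL.
ollama run deepseek-r1
>>> /set parameter num_ctx 16384
```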

@XIONGPEILIN commented on GitHub (Feb 21, 2025):

Cry, I want to use deepseek to process documents


@rick-github commented on GitHub (Feb 21, 2025):

> I want to use deepseek to process documents

You can, just keep (length of input tokens) + (length of output tokens) < (size of context).

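The same budget applies per API request; num_ctx can be passed through the options field on the standard endpoints. A sketch against /api/chat (model tag and sizes are illustrative):

```shell
# Request a 16k context for this call; keep prompt + expected output
# tokens below it so llama.cpp never needs the unsupported K-shift.
curl http://localhost:11434/api/chat -d '{
  "model": "deepseek-r1",
  "messages": [{ "role": "user", "content": "Summarize the attached document." }],
  "options": { "num_ctx": 16384 }
}'
```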