[GH-ISSUE #8930] Deepseek-R1:671b no response #67850

Closed
opened 2026-05-04 11:52:16 -05:00 by GiteaMirror · 6 comments

Originally created by @chthub on GitHub (Feb 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8930

What is the issue?

I deployed Deepseek-R1:671b with Ollama on a Linux server and connect to it locally with Chatbox AI. The problem is that DeepSeek gives no response after I send a message.
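To rule out the client, the server can also be queried directly over the documented REST API (a minimal sketch; the prompt is just a placeholder):

```shell
# Send one chat message straight to the local Ollama server; a healthy
# deployment streams JSON response chunks back immediately.
curl http://localhost:11434/api/chat -d '{
  "model": "deepseek-r1:671b",
  "messages": [{ "role": "user", "content": "Hello" }]
}'
```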
Here is the ollama log:

Feb 07 09:41:01 ollama[2202418]: [GIN] 2025/02/07 - 09:41:01 | 200 |  1.675384074s |       127.0.0.1 | POST     "/api/chat"
Feb 07 09:43:21 ollama[2202418]: [GIN] 2025/02/07 - 09:43:21 | 200 |      51.896µs |       127.0.0.1 | HEAD     "/"
Feb 07 09:43:22 ollama[2202418]: [GIN] 2025/02/07 - 09:43:22 | 200 |  684.650527ms |       127.0.0.1 | POST     "/api/generate"
Feb 07 09:43:41 ollama[2202418]: [GIN] 2025/02/07 - 09:43:41 | 200 |      39.503µs |       127.0.0.1 | HEAD     "/"
Feb 07 09:43:41 ollama[2202418]: [GIN] 2025/02/07 - 09:43:41 | 200 |      24.045µs |       127.0.0.1 | GET      "/api/ps"
Feb 07 09:44:12 ollama[2202418]: [GIN] 2025/02/07 - 09:44:12 | 200 |   41.599354ms |       127.0.0.1 | GET      "/api/tags"
Feb 07 09:44:12 ollama[2202418]: [GIN] 2025/02/07 - 09:44:12 | 200 |    68.69749ms |       127.0.0.1 | GET      "/api/tags"
Feb 07 09:44:30 ollama[2202418]: [GIN] 2025/02/07 - 09:44:30 | 200 |      35.716µs |       127.0.0.1 | HEAD     "/"
Feb 07 09:44:30 ollama[2202418]: [GIN] 2025/02/07 - 09:44:30 | 200 |      14.417µs |       127.0.0.1 | GET      "/api/ps"
Feb 07 09:44:40 ollama[2202418]: [GIN] 2025/02/07 - 09:44:40 | 200 |  131.079764ms |       127.0.0.1 | GET      "/api/tags"
Feb 07 09:44:41 ollama[2202418]: [GIN] 2025/02/07 - 09:44:41 | 200 |   98.437715ms |       127.0.0.1 | GET      "/api/tags"
Feb 07 09:45:24 ollama[2202418]: time=2025-02-07T09:45:24.367-05:00 level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=deepseek/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 library=cuda parallel=4 required="449.4 GiB"
Feb 07 09:45:26 ollama[2202418]: time=2025-02-07T09:45:26.542-05:00 level=INFO source=server.go:104 msg="system memory" total="1007.1 GiB" free="589.7 GiB" free_swap="0 B"
Feb 07 09:45:28 ollama[2202418]: time=2025-02-07T09:45:28.743-05:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=62 layers.offload=62 layers.split=8,8,8,8,8,8,7,7 memory.available="[78.8 GiB 78.8 GiB 78.8 GiB 78.8 GiB 78.8 GiB 78.8 GiB 76.0 GiB 75.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="449.4 GiB" memory.required.partial="449.4 GiB" memory.required.kv="38.1 GiB" memory.required.allocations="[54.7 GiB 54.7 GiB 54.7 GiB 61.3 GiB 61.3 GiB 55.3 GiB 53.7 GiB 53.7 GiB]" memory.weights.total="413.6 GiB" memory.weights.repeating="412.9 GiB" memory.weights.nonrepeating="725.0 MiB" memory.graph.full="3.0 GiB" memory.graph.partial="3.0 GiB"
Feb 07 09:45:28 ollama[2202418]: time=2025-02-07T09:45:28.743-05:00 level=WARN source=server.go:216 msg="flash attention enabled but not supported by model"
Feb 07 09:45:28 ollama[2202418]: time=2025-02-07T09:45:28.744-05:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server runner --model deepseek/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 --ctx-size 8192 --batch-size 512 --n-gpu-layers 62 --threads 128 --parallel 4 --tensor-split 8,8,8,8,8,8,7,7 --port 36293"
Feb 07 09:45:28 ollama[2202418]: time=2025-02-07T09:45:28.750-05:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
Feb 07 09:45:28 ollama[2202418]: time=2025-02-07T09:45:28.751-05:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
Feb 07 09:45:28 ollama[2202418]: time=2025-02-07T09:45:28.756-05:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
Feb 07 09:45:28 ollama[2202418]: time=2025-02-07T09:45:28.804-05:00 level=INFO source=runner.go:936 msg="starting go runner"
Feb 07 09:45:29 ollama[2202418]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Feb 07 09:45:29 ollama[2202418]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Feb 07 09:45:29 ollama[2202418]: ggml_cuda_init: found 8 CUDA devices:
Feb 07 09:45:29 ollama[2202418]:   Device 0: NVIDIA A100-SXM4-80GB, compute capability 8.0, VMM: yes
Feb 07 09:45:29 ollama[2202418]:   Device 1: NVIDIA A100-SXM4-80GB, compute capability 8.0, VMM: yes
Feb 07 09:45:29 ollama[2202418]:   Device 2: NVIDIA A100-SXM4-80GB, compute capability 8.0, VMM: yes
Feb 07 09:45:29 ollama[2202418]:   Device 3: NVIDIA A100-SXM4-80GB, compute capability 8.0, VMM: yes
Feb 07 09:45:29 ollama[2202418]:   Device 4: NVIDIA A100-SXM4-80GB, compute capability 8.0, VMM: yes
Feb 07 09:45:29 ollama[2202418]:   Device 5: NVIDIA A100-SXM4-80GB, compute capability 8.0, VMM: yes
Feb 07 09:45:29 ollama[2202418]:   Device 6: NVIDIA A100-SXM4-80GB, compute capability 8.0, VMM: yes
Feb 07 09:45:29 ollama[2202418]:   Device 7: NVIDIA A100-SXM4-80GB, compute capability 8.0, VMM: yes
Feb 07 09:45:31 ollama[2202418]: time=2025-02-07T09:45:31.217-05:00 level=INFO source=runner.go:937 msg=system info="CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=128
Feb 07 09:45:31 ollama[2202418]: time=2025-02-07T09:45:31.217-05:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:36293"
Feb 07 09:45:31 ollama[2202418]: time=2025-02-07T09:45:31.302-05:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
Feb 07 09:45:31 ollama[2202418]: llama_load_model_from_file: using device CUDA0 (NVIDIA A100-SXM4-80GB) - 80567 MiB free
Feb 07 09:45:31 ollama[2202418]: llama_load_model_from_file: using device CUDA1 (NVIDIA A100-SXM4-80GB) - 80567 MiB free
Feb 07 09:45:31 ollama[2202418]: llama_load_model_from_file: using device CUDA2 (NVIDIA A100-SXM4-80GB) - 80567 MiB free
Feb 07 09:45:31 ollama[2202418]: llama_load_model_from_file: using device CUDA3 (NVIDIA A100-SXM4-80GB) - 80567 MiB free
Feb 07 09:45:31 ollama[2202418]: llama_load_model_from_file: using device CUDA4 (NVIDIA A100-SXM4-80GB) - 80567 MiB free
Feb 07 09:45:31 ollama[2202418]: llama_load_model_from_file: using device CUDA5 (NVIDIA A100-SXM4-80GB) - 80567 MiB free
Feb 07 09:45:31 ollama[2202418]: llama_load_model_from_file: using device CUDA6 (NVIDIA A100-SXM4-80GB) - 77757 MiB free
Feb 07 09:45:31 ollama[2202418]: llama_load_model_from_file: using device CUDA7 (NVIDIA A100-SXM4-80GB) - 76687 MiB free
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: loaded meta data with 42 key-value pairs and 1025 tensors from deepseek/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 (version GGUF V3 (latest))
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv   0:                       general.architecture str              = deepseek2
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv   1:                               general.type str              = model
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv   2:                         general.size_label str              = 256x20B
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv   3:                      deepseek2.block_count u32              = 61
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv   4:                   deepseek2.context_length u32              = 163840
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv   5:                 deepseek2.embedding_length u32              = 7168
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv   6:              deepseek2.feed_forward_length u32              = 18432
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv   7:             deepseek2.attention.head_count u32              = 128
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv   8:          deepseek2.attention.head_count_kv u32              = 128
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv   9:                   deepseek2.rope.freq_base f32              = 10000.000000
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  10: deepseek2.attention.layer_norm_rms_epsilon f32              = 0.000001
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  11:                deepseek2.expert_used_count u32              = 8
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  12:        deepseek2.leading_dense_block_count u32              = 3
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  13:                       deepseek2.vocab_size u32              = 129280
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  14:            deepseek2.attention.q_lora_rank u32              = 1536
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  15:           deepseek2.attention.kv_lora_rank u32              = 512
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  16:             deepseek2.attention.key_length u32              = 192
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  17:           deepseek2.attention.value_length u32              = 128
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  18:       deepseek2.expert_feed_forward_length u32              = 2048
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  19:                     deepseek2.expert_count u32              = 256
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  20:              deepseek2.expert_shared_count u32              = 1
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  21:             deepseek2.expert_weights_scale f32              = 2.500000
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  22:              deepseek2.expert_weights_norm bool             = true
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  23:               deepseek2.expert_gating_func u32              = 2
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  24:             deepseek2.rope.dimension_count u32              = 64
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  25:                deepseek2.rope.scaling.type str              = yarn
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  26:              deepseek2.rope.scaling.factor f32              = 40.000000
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  27: deepseek2.rope.scaling.original_context_length u32              = 4096
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  28: deepseek2.rope.scaling.yarn_log_multiplier f32              = 0.100000
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  29:                       tokenizer.ggml.model str              = gpt2
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  30:                         tokenizer.ggml.pre str              = deepseek-v3
Feb 07 09:45:31 ollama[2202418]: [132B blob data]
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  32:                  tokenizer.ggml.token_type arr[i32,129280]  = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  33:                      tokenizer.ggml.merges arr[str,127741]  = ["Ġ t", "Ġ a", "i n", "Ġ Ġ", "h e...
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  34:                tokenizer.ggml.bos_token_id u32              = 0
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  35:                tokenizer.ggml.eos_token_id u32              = 1
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  36:            tokenizer.ggml.padding_token_id u32              = 1
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  37:               tokenizer.ggml.add_bos_token bool             = true
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  38:               tokenizer.ggml.add_eos_token bool             = false
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  39:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  40:               general.quantization_version u32              = 2
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - kv  41:                          general.file_type u32              = 15
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - type  f32:  361 tensors
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - type q4_K:  606 tensors
Feb 07 09:45:31 ollama[2202418]: llama_model_loader: - type q6_K:   58 tensors
Feb 07 09:45:31 ollama[2202418]: llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
Feb 07 09:45:31 ollama[2202418]: llm_load_vocab: special tokens cache size = 818
Feb 07 09:45:32 ollama[2202418]: llm_load_vocab: token to piece cache size = 0.8223 MB
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: format           = GGUF V3 (latest)
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: arch             = deepseek2
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: vocab type       = BPE
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: n_vocab          = 129280
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: n_merges         = 127741
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: vocab_only       = 0
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: n_ctx_train      = 163840
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: n_embd           = 7168
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: n_layer          = 61
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: n_head           = 128
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: n_head_kv        = 128
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: n_rot            = 64
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: n_swa            = 0
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: n_embd_head_k    = 192
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: n_embd_head_v    = 128
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: n_gqa            = 1
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: n_embd_k_gqa     = 24576
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: n_embd_v_gqa     = 16384
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: f_norm_eps       = 0.0e+00
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: f_logit_scale    = 0.0e+00
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: n_ff             = 18432
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: n_expert         = 256
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: n_expert_used    = 8
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: causal attn      = 1
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: pooling type     = 0
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: rope type        = 0
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: rope scaling     = yarn
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: freq_base_train  = 10000.0
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: freq_scale_train = 0.025
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: n_ctx_orig_yarn  = 4096
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: rope_finetuned   = unknown
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: ssm_d_conv       = 0
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: ssm_d_inner      = 0
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: ssm_d_state      = 0
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: ssm_dt_rank      = 0
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: ssm_dt_b_c_rms   = 0
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: model type       = 671B
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: model ftype      = Q4_K - Medium
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: model params     = 671.03 B
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: model size       = 376.65 GiB (4.82 BPW)
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: general.name     = n/a
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: BOS token        = 0 '<|begin▁of▁sentence|>'
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: EOS token        = 1 '<|end▁of▁sentence|>'
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: EOT token        = 1 '<|end▁of▁sentence|>'
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: PAD token        = 1 '<|end▁of▁sentence|>'
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: LF token         = 131 'Ä'
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: FIM PRE token    = 128801 '<|fim▁begin|>'
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: FIM SUF token    = 128800 '<|fim▁hole|>'
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: FIM MID token    = 128802 '<|fim▁end|>'
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: EOG token        = 1 '<|end▁of▁sentence|>'
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: max token length = 256
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: n_layer_dense_lead   = 3
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: n_lora_q             = 1536
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: n_lora_kv            = 512
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: n_ff_exp             = 2048
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: n_expert_shared      = 1
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: expert_weights_scale = 2.5
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: expert_weights_norm  = 1
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: expert_gating_func   = sigmoid
Feb 07 09:45:32 ollama[2202418]: llm_load_print_meta: rope_yarn_log_mul    = 0.1000
It is stuck here, with no further output.

Running `ollama ps`:

NAME                ID              SIZE      PROCESSOR    UNTIL
deepseek-r1:671b    739e1b229ad7    482 GB    100% GPU     Forever

Running `nvidia-smi`:

Fri Feb  7 09:50:24 2025
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 530.30.02              Driver Version: 530.30.02    CUDA Version: 12.1     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                  Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf            Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA A100-SXM4-80GB           On | 00000000:07:00.0 Off |                  Off |
| N/A   31C    P0               67W / 400W|   3334MiB / 81920MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   1  NVIDIA A100-SXM4-80GB           On | 00000000:0B:00.0 Off |                  Off |
| N/A   30C    P0               65W / 400W|    524MiB / 81920MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   2  NVIDIA A100-SXM4-80GB           On | 00000000:48:00.0 Off |                  Off |
| N/A   28C    P0               68W / 400W|    524MiB / 81920MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   3  NVIDIA A100-SXM4-80GB           On | 00000000:4C:00.0 Off |                  Off |
| N/A   30C    P0               66W / 400W|    524MiB / 81920MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   4  NVIDIA A100-SXM4-80GB           On | 00000000:88:00.0 Off |                    0 |
| N/A   29C    P0               67W / 400W|   4404MiB / 81920MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   5  NVIDIA A100-SXM4-80GB           On | 00000000:8B:00.0 Off |                    0 |
| N/A   31C    P0               67W / 400W|    524MiB / 81920MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   6  NVIDIA A100-SXM4-80GB           On | 00000000:C8:00.0 Off |                  Off |
| N/A   29C    P0               65W / 400W|    524MiB / 81920MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   7  NVIDIA A100-SXM4-80GB           On | 00000000:CB:00.0 Off |                  Off |
| N/A   29C    P0               66W / 400W|    524MiB / 81920MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A   3394606      C   python                                     2810MiB |
|    0   N/A  N/A   3658045      C   ...rs/cuda_v12_avx/ollama_llama_server      522MiB |
|    1   N/A  N/A   3658045      C   ...rs/cuda_v12_avx/ollama_llama_server      522MiB |
|    2   N/A  N/A   3658045      C   ...rs/cuda_v12_avx/ollama_llama_server      522MiB |
|    3   N/A  N/A   3658045      C   ...rs/cuda_v12_avx/ollama_llama_server      522MiB |
|    4   N/A  N/A   1455918      C   python                                     1888MiB |
|    4   N/A  N/A   2523747      C   python                                     1990MiB |
|    4   N/A  N/A   3658045      C   ...rs/cuda_v12_avx/ollama_llama_server      522MiB |
|    5   N/A  N/A   3658045      C   ...rs/cuda_v12_avx/ollama_llama_server      522MiB |
|    6   N/A  N/A   3658045      C   ...rs/cuda_v12_avx/ollama_llama_server      522MiB |
|    7   N/A  N/A   3658045      C   ...rs/cuda_v12_avx/ollama_llama_server      522MiB |
+---------------------------------------------------------------------------------------+

Here is the config:

[Service]
Environment="OLLAMA_MODELS=/deepseek"
Environment="OLLAMA_FLASH_ATTENTION=1"
Environment="OLLAMA_KEEP_ALIVE=-1"
Environment="OLLAMA_LOAD_TIMEOUT=30m"

Relevant log output


OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

ollama version is 0.5.7

GiteaMirror added the bug label 2026-05-04 11:52:16 -05:00

@chthub commented on GitHub (Feb 7, 2025):

The strange thing is that sometimes I can get a response from deepseek-r1:671b and it works well, but sometimes there is no response. I don't know why.


@rick-github commented on GitHub (Feb 7, 2025):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) with `OLLAMA_DEBUG=1` may aid in debugging.

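On a systemd install that amounts to something like the following (a sketch based on the linked troubleshooting doc):

```shell
# Turn on debug logging, restart, and follow the server logs.
sudo systemctl edit ollama.service    # add: Environment="OLLAMA_DEBUG=1" under [Service]
sudo systemctl restart ollama
journalctl -u ollama -f
```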

@chthub commented on GitHub (Feb 8, 2025):

> [Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) with `OLLAMA_DEBUG=1` may aid in debugging.

Thank you. It just takes a long time to load.


@ice6 commented on GitHub (Feb 26, 2025):

@chthub do you know how to keep it in the memory?


@chthub commented on GitHub (Feb 26, 2025):

> @chthub do you know how to keep it in the memory?

@ice6 You can use the `keep_alive` parameter; see the FAQ: https://github.com/ollama/ollama/blob/main/docs/faq.md
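As a sketch, `keep_alive` can also be set per request (`-1` keeps the model loaded indefinitely), in addition to the server-wide `OLLAMA_KEEP_ALIVE` environment variable shown in the config above:

```shell
# Preload the model and keep it resident until explicitly unloaded.
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:671b",
  "keep_alive": -1
}'
```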


@ice6 commented on GitHub (Feb 27, 2025):

@chthub thanks for your reply. In my situation, the root of the problem is: `llama.cpp:11942: The current context does not support K-shift`.

Feb 27 09:10:34 web ollama[195429]: llama.cpp:11942: The current context does not support K-shift
Feb 27 09:10:46 web ollama[195429]: /usr/bin/ollama(+0xfe4c88)[0x555ef95b6c88]
Feb 27 09:10:46 web ollama[195429]: /usr/bin/ollama(+0xfe534d)[0x555ef95b734d]
Feb 27 09:10:46 web ollama[195429]: /usr/bin/ollama(+0xf5fcb4)[0x555ef9531cb4]
Feb 27 09:10:46 web ollama[195429]: /usr/bin/ollama(+0xf62b61)[0x555ef9534b61]
Feb 27 09:10:46 web ollama[195429]: /usr/bin/ollama(+0xf63777)[0x555ef9535777]
Feb 27 09:10:46 web ollama[195429]: /usr/bin/ollama(+0xe8d3e2)[0x555ef945f3e2]
Feb 27 09:10:46 web ollama[195429]: /usr/bin/ollama(+0x2f4381)[0x555ef88c6381]
Feb 27 09:10:46 web ollama[195429]: SIGABRT: abort
...
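For context, this abort fires when generation fills the context window and the runner attempts to shift the KV cache, which this model's cache layout does not support. A common mitigation (an assumption here, not something confirmed in this thread) is to request a larger context so the shift path is never reached, e.g. via the documented `num_ctx` option:

```shell
# Raise the per-request context window so long generations are less likely
# to hit the unsupported context-shift path; 16384 is a hypothetical value
# to be sized to the workload and available VRAM.
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:671b",
  "prompt": "Hello",
  "options": { "num_ctx": 16384 }
}'
```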
Reference: github-starred/ollama#67850