[GH-ISSUE #7548] Deepseek2 does not support K-shift #4803

Closed
opened 2026-04-12 15:46:45 -05:00 by GiteaMirror · 7 comments

Originally created by @CROprogrammer on GitHub (Nov 7, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7548

What is the issue?

Hi,
after running my tests that send requests to Ollama's /api/chat API for a while, I'm getting a /go/src/github.com/ollama/ollama/llm/llama.cpp/src/llama.cpp:17994: Deepseek2 does not support K-shift error, and after that error the llama runner process is no longer running.
Also, while Ollama is working, my tokens-per-second metric is not very fast, around 5 tokens per second. Could someone explain how I can make it faster?
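
For reference, this throughput number can be read directly off the final /api/chat response, which reports eval_count (generated tokens) and eval_duration (time spent generating, in nanoseconds). A minimal measurement sketch, assuming the default localhost endpoint (the prompt is only a placeholder):

```python
import requests

# Illustrative only: measure generation speed for a single /api/chat request.
# Assumes Ollama is listening on the default http://localhost:11434.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "deepseek-coder-v2:236b-instruct-fp16",
        "messages": [{"role": "user", "content": "Write a short docstring for a quicksort function."}],
        "stream": False,
    },
    timeout=600,
)
resp.raise_for_status()
data = resp.json()

# eval_count = number of generated tokens, eval_duration = nanoseconds spent generating.
tokens_per_second = data["eval_count"] / (data["eval_duration"] / 1e9)
print(f"{tokens_per_second:.2f} tokens/s")
```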

I'm using the model deepseek-coder-v2:236b-instruct-fp16.
Also, when I run ollama ps I can see that it is 100% loaded on the GPU.

My logs:

Nov 05 13:50:04 164-152-104-213 ollama[10076]: [GIN] 2024/11/05 - 13:50:04 | 200 | 18m17s | 127.0.0.1 | POST "/api/pull"
Nov 05 13:50:04 164-152-104-213 ollama[10076]: [GIN] 2024/11/05 - 13:50:04 | 200 | 14.473964ms | 127.0.0.1 | POST "/api/show"
Nov 05 13:50:06 164-152-104-213 ollama[10076]: time=2024-11-05T13:50:06.626Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-eef1e52aabb10215c155b0e5191bf8dd85dbd8e6fb0a54d75ff4c2fd4ab2fb71 library=cuda parallel=4 required="512.9 GiB"
Nov 05 13:50:08 164-152-104-213 ollama[10076]: time=2024-11-05T13:50:08.925Z level=INFO source=server.go:105 msg="system memory" total="1771.7 GiB" free="1757.2 GiB" free_swap="0 B"
Nov 05 13:50:08 164-152-104-213 ollama[10076]: time=2024-11-05T13:50:08.927Z level=INFO source=memory.go:326 msg="offload to cuda" layers.requested=-1 layers.model=61 layers.offload=61 layers.split=8,8,8,8,8,7,7,7 memory.available="[78.7 GiB 78.7 GiB 78.7 GiB 78.7 GiB 78.7 GiB 78.7 GiB 78.7 GiB 78.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="512.9 GiB" memory.required.partial="512.9 GiB" memory.required.kv="37.5 GiB" memory.required.allocations="[62.1 GiB 68.8 GiB 68.8 GiB 68.8 GiB 61.8 GiB 60.8 GiB 60.8 GiB 60.8 GiB]" memory.weights.total="474.7 GiB" memory.weights.repeating="473.8 GiB" memory.weights.nonrepeating="1000.0 MiB" memory.graph.full="2.9 GiB" memory.graph.partial="2.9 GiB"
Nov 05 13:50:08 164-152-104-213 ollama[10076]: time=2024-11-05T13:50:08.928Z level=INFO source=server.go:388 msg="starting llama server" cmd="/tmp/ollama2820923659/runners/cuda_v12/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-eef1e52aabb10215c155b0e5191bf8dd85dbd8e6fb0a54d75ff4c2fd4ab2fb71 --ctx-size 8192 --batch-size 512 --embedding --n-gpu-layers 61 --threads 1 --parallel 4 --tensor-split 8,8,8,8,8,7,7,7 --port 39351"
Nov 05 13:50:08 164-152-104-213 ollama[10076]: time=2024-11-05T13:50:08.929Z level=INFO source=sched.go:449 msg="loaded runners" count=1
Nov 05 13:50:08 164-152-104-213 ollama[10076]: time=2024-11-05T13:50:08.929Z level=INFO source=server.go:587 msg="waiting for llama runner to start responding"
Nov 05 13:50:08 164-152-104-213 ollama[10076]: time=2024-11-05T13:50:08.929Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
Nov 05 13:50:08 164-152-104-213 ollama[10824]: INFO [main] starting c++ runner | tid="126159684960256" timestamp=1730814608
Nov 05 13:50:08 164-152-104-213 ollama[10824]: INFO [main] build info | build=10 commit="3a8c75e" tid="126159684960256" timestamp=1730814608
Nov 05 13:50:08 164-152-104-213 ollama[10824]: INFO [main] system info | n_threads=1 n_threads_batch=1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="126159684960256" timestamp=1730814608 total_threads=240
Nov 05 13:50:08 164-152-104-213 ollama[10824]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="239" port="39351" tid="126159684960256" timestamp=1730814608
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: loaded meta data with 39 key-value pairs and 959 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-eef1e52aabb10215c155b0e5191bf8dd85dbd8e6fb0a54d75ff4c2fd4ab2fb71 (version GGUF V3 (latest))
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 0: general.architecture str = deepseek2
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 1: general.name str = DeepSeek-Coder-V2-Instruct
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 2: deepseek2.block_count u32 = 60
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 3: deepseek2.context_length u32 = 163840
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 4: deepseek2.embedding_length u32 = 5120
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 5: deepseek2.feed_forward_length u32 = 12288
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 6: deepseek2.attention.head_count u32 = 128
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 7: deepseek2.attention.head_count_kv u32 = 128
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 8: deepseek2.rope.freq_base f32 = 10000.000000
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 9: deepseek2.attention.layer_norm_rms_epsilon f32 = 0.000001
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 10: deepseek2.expert_used_count u32 = 6
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 11: general.file_type u32 = 1
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 12: deepseek2.leading_dense_block_count u32 = 1
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 13: deepseek2.vocab_size u32 = 102400
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 14: deepseek2.attention.q_lora_rank u32 = 1536
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 15: deepseek2.attention.kv_lora_rank u32 = 512
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 16: deepseek2.attention.key_length u32 = 192
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 17: deepseek2.attention.value_length u32 = 128
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 18: deepseek2.expert_feed_forward_length u32 = 1536
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 19: deepseek2.expert_count u32 = 160
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 20: deepseek2.expert_shared_count u32 = 2
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 21: deepseek2.expert_weights_scale f32 = 16.000000
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 22: deepseek2.rope.dimension_count u32 = 64
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 23: deepseek2.rope.scaling.type str = yarn
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 24: deepseek2.rope.scaling.factor f32 = 40.000000
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 25: deepseek2.rope.scaling.original_context_length u32 = 4096
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 26: deepseek2.rope.scaling.yarn_log_multiplier f32 = 0.100000
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 27: tokenizer.ggml.model str = gpt2
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 28: tokenizer.ggml.pre str = deepseek-llm
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 29: tokenizer.ggml.tokens arr[str,102400] = ["!", """, "#", "$", "%", "&", "'", ...
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 30: tokenizer.ggml.token_type arr[i32,102400] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 31: tokenizer.ggml.merges arr[str,99757] = ["Ġ Ġ", "Ġ t", "Ġ a", "i n", "h e...
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 32: tokenizer.ggml.bos_token_id u32 = 100000
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 33: tokenizer.ggml.eos_token_id u32 = 100001
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 34: tokenizer.ggml.padding_token_id u32 = 100001
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 35: tokenizer.ggml.add_bos_token bool = true
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 36: tokenizer.ggml.add_eos_token bool = false
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 37: tokenizer.chat_template str = {% if not add_generation_prompt is de...
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - kv 38: general.quantization_version u32 = 2
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - type f32: 300 tensors
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llama_model_loader: - type f16: 659 tensors
Nov 05 13:50:09 164-152-104-213 ollama[10076]: time=2024-11-05T13:50:09.181Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_vocab: special tokens cache size = 2400
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_vocab: token to piece cache size = 0.6661 MB
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: format = GGUF V3 (latest)
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: arch = deepseek2
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: vocab type = BPE
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: n_vocab = 102400
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: n_merges = 99757
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: vocab_only = 0
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: n_ctx_train = 163840
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: n_embd = 5120
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: n_layer = 60
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: n_head = 128
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: n_head_kv = 128
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: n_rot = 64
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: n_swa = 0
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: n_embd_head_k = 192
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: n_embd_head_v = 128
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: n_gqa = 1
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: n_embd_k_gqa = 24576
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: n_embd_v_gqa = 16384
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: f_norm_eps = 0.0e+00
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: f_norm_rms_eps = 1.0e-06
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: f_logit_scale = 0.0e+00
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: n_ff = 12288
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: n_expert = 160
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: n_expert_used = 6
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: causal attn = 1
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: pooling type = 0
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: rope type = 0
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: rope scaling = yarn
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: freq_base_train = 10000.0
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: freq_scale_train = 0.025
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: n_ctx_orig_yarn = 4096
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: rope_finetuned = unknown
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: ssm_d_conv = 0
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: ssm_d_inner = 0
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: ssm_d_state = 0
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: ssm_dt_rank = 0
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: ssm_dt_b_c_rms = 0
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: model type = 236B
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: model ftype = F16
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: model params = 235.74 B
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: model size = 439.19 GiB (16.00 BPW)
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: general.name = DeepSeek-Coder-V2-Instruct
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: BOS token = 100000 '<|begin▁of▁sentence|>'
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: EOS token = 100001 '<|end▁of▁sentence|>'
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: PAD token = 100001 '<|end▁of▁sentence|>'
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: LF token = 126 'Ä'
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: EOG token = 100001 '<|end▁of▁sentence|>'
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: max token length = 256
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: n_layer_dense_lead = 1
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: n_lora_q = 1536
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: n_lora_kv = 512
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: n_ff_exp = 1536
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: n_expert_shared = 2
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: expert_weights_scale = 16.0
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_print_meta: rope_yarn_log_mul = 0.1000
Nov 05 13:50:09 164-152-104-213 ollama[10076]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
Nov 05 13:50:09 164-152-104-213 ollama[10076]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Nov 05 13:50:09 164-152-104-213 ollama[10076]: ggml_cuda_init: found 8 CUDA devices:
Nov 05 13:50:09 164-152-104-213 ollama[10076]: Device 0: NVIDIA A100-SXM4-80GB, compute capability 8.0, VMM: yes
Nov 05 13:50:09 164-152-104-213 ollama[10076]: Device 1: NVIDIA A100-SXM4-80GB, compute capability 8.0, VMM: yes
Nov 05 13:50:09 164-152-104-213 ollama[10076]: Device 2: NVIDIA A100-SXM4-80GB, compute capability 8.0, VMM: yes
Nov 05 13:50:09 164-152-104-213 ollama[10076]: Device 3: NVIDIA A100-SXM4-80GB, compute capability 8.0, VMM: yes
Nov 05 13:50:09 164-152-104-213 ollama[10076]: Device 4: NVIDIA A100-SXM4-80GB, compute capability 8.0, VMM: yes
Nov 05 13:50:09 164-152-104-213 ollama[10076]: Device 5: NVIDIA A100-SXM4-80GB, compute capability 8.0, VMM: yes
Nov 05 13:50:09 164-152-104-213 ollama[10076]: Device 6: NVIDIA A100-SXM4-80GB, compute capability 8.0, VMM: yes
Nov 05 13:50:09 164-152-104-213 ollama[10076]: Device 7: NVIDIA A100-SXM4-80GB, compute capability 8.0, VMM: yes
Nov 05 13:50:09 164-152-104-213 ollama[10076]: llm_load_tensors: ggml ctx size = 3.60 MiB
Nov 05 13:50:10 164-152-104-213 ollama[10076]: time=2024-11-05T13:50:10.636Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server not responding"
Nov 05 13:50:58 164-152-104-213 ollama[10076]: time=2024-11-05T13:50:58.253Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
Nov 05 13:50:59 164-152-104-213 ollama[10076]: llm_load_tensors: offloading 60 repeating layers to GPU
Nov 05 13:50:59 164-152-104-213 ollama[10076]: llm_load_tensors: offloading non-repeating layers to GPU
Nov 05 13:50:59 164-152-104-213 ollama[10076]: llm_load_tensors: offloaded 61/61 layers to GPU
Nov 05 13:50:59 164-152-104-213 ollama[10076]: llm_load_tensors: CPU buffer size = 1000.00 MiB
Nov 05 13:50:59 164-152-104-213 ollama[10076]: llm_load_tensors: CUDA0 buffer size = 53689.25 MiB
Nov 05 13:50:59 164-152-104-213 ollama[10076]: llm_load_tensors: CUDA1 buffer size = 60622.38 MiB
Nov 05 13:50:59 164-152-104-213 ollama[10076]: llm_load_tensors: CUDA2 buffer size = 60622.38 MiB
Nov 05 13:50:59 164-152-104-213 ollama[10076]: llm_load_tensors: CUDA3 buffer size = 60622.38 MiB
Nov 05 13:50:59 164-152-104-213 ollama[10076]: llm_load_tensors: CUDA4 buffer size = 60622.38 MiB
Nov 05 13:50:59 164-152-104-213 ollama[10076]: llm_load_tensors: CUDA5 buffer size = 53044.58 MiB
Nov 05 13:50:59 164-152-104-213 ollama[10076]: llm_load_tensors: CUDA6 buffer size = 53044.58 MiB
Nov 05 13:50:59 164-152-104-213 ollama[10076]: llm_load_tensors: CUDA7 buffer size = 46466.80 MiB
Nov 05 13:51:31 164-152-104-213 ollama[10076]: time=2024-11-05T13:51:31.091Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server not responding"
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_new_context_with_model: n_ctx = 8192
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_new_context_with_model: n_batch = 512
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_new_context_with_model: n_ubatch = 512
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_new_context_with_model: flash_attn = 0
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_new_context_with_model: freq_base = 10000.0
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_new_context_with_model: freq_scale = 0.025
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_kv_cache_init: CUDA0 KV buffer size = 5120.00 MiB
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_kv_cache_init: CUDA1 KV buffer size = 5120.00 MiB
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_kv_cache_init: CUDA2 KV buffer size = 5120.00 MiB
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_kv_cache_init: CUDA3 KV buffer size = 5120.00 MiB
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_kv_cache_init: CUDA4 KV buffer size = 5120.00 MiB
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_kv_cache_init: CUDA5 KV buffer size = 4480.00 MiB
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_kv_cache_init: CUDA6 KV buffer size = 4480.00 MiB
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_kv_cache_init: CUDA7 KV buffer size = 3840.00 MiB
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_new_context_with_model: KV self size = 38400.00 MiB, K (f16): 23040.00 MiB, V (f16): 15360.00 MiB
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_new_context_with_model: CUDA_Host output buffer size = 1.64 MiB
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
Nov 05 13:51:50 164-152-104-213 ollama[10076]: time=2024-11-05T13:51:50.764Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_new_context_with_model: CUDA0 compute buffer size = 2294.01 MiB
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_new_context_with_model: CUDA1 compute buffer size = 2294.01 MiB
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_new_context_with_model: CUDA2 compute buffer size = 2294.01 MiB
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_new_context_with_model: CUDA3 compute buffer size = 2294.01 MiB
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_new_context_with_model: CUDA4 compute buffer size = 2294.01 MiB
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_new_context_with_model: CUDA5 compute buffer size = 2294.01 MiB
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_new_context_with_model: CUDA6 compute buffer size = 2294.01 MiB
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_new_context_with_model: CUDA7 compute buffer size = 2294.02 MiB
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_new_context_with_model: CUDA_Host compute buffer size = 74.02 MiB
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_new_context_with_model: graph nodes = 4480
Nov 05 13:51:50 164-152-104-213 ollama[10076]: llama_new_context_with_model: graph splits = 9
Nov 05 13:51:51 164-152-104-213 ollama[10824]: INFO [main] model loaded | tid="126159684960256" timestamp=1730814711
Nov 05 13:51:51 164-152-104-213 ollama[10076]: time=2024-11-05T13:51:51.517Z level=INFO source=server.go:626 msg="llama runner started in 102.59 seconds"
Nov 05 13:51:51 164-152-104-213 ollama[10076]: [GIN] 2024/11/05 - 13:51:51 | 200 | 1m47s | 127.0.0.1 | POST "/api/generate"
Nov 05 13:53:48 164-152-104-213 ollama[10076]: [GIN] 2024/11/05 - 13:53:48 | 200 | 13.62207ms | 127.0.0.1 | POST "/api/show"
Nov 05 13:55:05 164-152-104-213 ollama[10076]: check_double_bos_eos: Added a BOS token to the prompt as specified by the model but the prompt also starts with a BOS token. So now the final prompt starts with 2 BOS tokens. Are you sure this is what you want?
Nov 05 13:55:09 164-152-104-213 ollama[10076]: check_double_bos_eos: Added a BOS token to the prompt as specified by the model but the prompt also starts with a BOS token. So now the final prompt starts with 2 BOS tokens. Are you sure this is what you want?
Nov 05 13:55:09 164-152-104-213 ollama[10076]: check_double_bos_eos: Added a BOS token to the prompt as specified by the model but the prompt also starts with a BOS token. So now the final prompt starts with 2 BOS tokens. Are you sure this is what you want?
Nov 05 13:55:18 164-152-104-213 ollama[10076]: check_double_bos_eos: Added a BOS token to the prompt as specified by the model but the prompt also starts with a BOS token. So now the final prompt starts with 2 BOS tokens. Are you sure this is what you want?
Nov 05 13:58:12 164-152-104-213 ollama[10076]: [GIN] 2024/11/05 - 13:58:12 | 200 | 3m6s | 127.0.0.1 | POST "/api/chat"
Nov 05 13:58:12 164-152-104-213 ollama[10076]: [GIN] 2024/11/05 - 13:58:12 | 200 | 3m7s | 127.0.0.1 | POST "/api/chat"
Nov 05 13:58:12 164-152-104-213 ollama[10076]: check_double_bos_eos: Added a BOS token to the prompt as specified by the model but the prompt also starts with a BOS token. So now the final prompt starts with 2 BOS tokens. Are you sure this is what you want?
Nov 05 13:58:12 164-152-104-213 ollama[10076]: check_double_bos_eos: Added a BOS token to the prompt as specified by the model but the prompt also starts with a BOS token. So now the final prompt starts with 2 BOS tokens. Are you sure this is what you want?
Nov 05 13:58:25 164-152-104-213 ollama[10076]: [GIN] 2024/11/05 - 13:58:25 | 200 | 3m19s | 127.0.0.1 | POST "/api/chat"
Nov 05 13:58:25 164-152-104-213 ollama[10076]: check_double_bos_eos: Added a BOS token to the prompt as specified by the model but the prompt also starts with a BOS token. So now the final prompt starts with 2 BOS tokens. Are you sure this is what you want?
Nov 05 13:58:37 164-152-104-213 ollama[10076]: /go/src/github.com/ollama/ollama/llm/llama.cpp/src/llama.cpp:17994: Deepseek2 does not support K-shift
Nov 05 13:58:38 164-152-104-213 ollama[10076]: Could not attach to process. If your uid matches the uid of the target
Nov 05 13:58:38 164-152-104-213 ollama[10076]: process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
Nov 05 13:58:38 164-152-104-213 ollama[10076]: again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
Nov 05 13:58:38 164-152-104-213 ollama[10076]: ptrace: Inappropriate ioctl for device.
Nov 05 13:58:38 164-152-104-213 ollama[10076]: No stack.
Nov 05 13:58:38 164-152-104-213 ollama[10076]: The program is not being run.
Nov 05 13:58:39 164-152-104-213 ollama[10076]: [GIN] 2024/11/05 - 13:58:39 | 500 | 3m34s | 127.0.0.1 | POST "/api/chat"
Nov 05 13:58:39 164-152-104-213 ollama[10076]: time=2024-11-05T13:58:39.615Z level=WARN source=server.go:507 msg="llama runner process no longer running" sys=6 string="signal: aborted"
Nov 05 13:58:39 164-152-104-213 ollama[10076]: [GIN] 2024/11/05 - 13:58:39 | 500 | 3m34s | 127.0.0.1 | POST "/api/chat"
Nov 05 13:58:39 164-152-104-213 ollama[10076]: time=2024-11-05T13:58:39.615Z level=WARN source=server.go:507 msg="llama runner process no longer running" sys=6 string="signal: aborted"
Nov 05 13:58:39 164-152-104-213 ollama[10076]: [GIN] 2024/11/05 - 13:58:39 | 500 | 3m34s | 127.0.0.1 | POST "/api/chat"
Nov 05 13:58:39 164-152-104-213 ollama[10076]: time=2024-11-05T13:58:39.615Z level=WARN source=server.go:507 msg="llama runner process no longer running" sys=6 string="signal: aborted"
Nov 05 13:58:39 164-152-104-213 ollama[10076]: [GIN] 2024/11/05 - 13:58:39 | 500 | 3m34s | 127.0.0.1 | POST "/api/chat"
Nov 05 13:58:39 164-152-104-213 ollama[10076]: time=2024-11-05T13:58:39.615Z level=WARN source=server.go:507 msg="llama runner process no longer running" sys=6 string="signal: aborted"
Nov 05 13:58:39 164-152-104-213 ollama[10076]: [GIN] 2024/11/05 - 13:58:39 | 500 | 27.032786722s | 127.0.0.1 | POST "/api/chat"
Nov 05 13:58:39 164-152-104-213 ollama[10076]: time=2024-11-05T13:58:39.615Z level=WARN source=server.go:507 msg="llama runner process no longer running" sys=6 string="signal: aborted"
Nov 05 13:58:39 164-152-104-213 ollama[10076]: [GIN] 2024/11/05 - 13:58:39 | 500 | 26.624928215s | 127.0.0.1 | POST "/api/chat"
Nov 05 13:58:39 164-152-104-213 ollama[10076]: time=2024-11-05T13:58:39.616Z level=WARN source=server.go:507 msg="llama runner process no longer running" sys=6 string="signal: aborted"
Nov 05 13:58:39 164-152-104-213 ollama[10076]: [GIN] 2024/11/05 - 13:58:39 | 500 | 14.031356121s | 127.0.0.1 | POST "/api/chat"
Nov 05 13:58:40 164-152-104-213 ollama[10076]: time=2024-11-05T13:58:40.086Z level=WARN source=server.go:507 msg="llama runner process no longer running" sys=6 string="signal: aborted"

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.3.14

GiteaMirror added the bug label 2026-04-12 15:46:45 -05:00

@rick-github commented on GitHub (Nov 7, 2024):

https://github.com/ollama/ollama/issues?q=is%3Aissue+is%3Aopen+Deepseek2+does+not+support+K-shift
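
The issues that search turns up generally point at context shifting: when a request fills its context window, llama.cpp tries to shift the KV cache, which the deepseek2 architecture rejects (the line the log aborts on). The workaround usually suggested is to raise num_ctx, and optionally cap num_predict, so the window never fills. A minimal sketch of passing those options per request, assuming the default localhost endpoint and enough free VRAM for the larger KV cache (the values are illustrative):

```python
import requests

# Illustrative workaround sketch: give the request a context window large enough
# that llama.cpp never needs to shift the KV cache, and cap the response length
# so a long generation cannot overflow it either.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "deepseek-coder-v2:236b-instruct-fp16",
        "messages": [{"role": "user", "content": "Summarise the changes in this diff."}],
        "stream": False,
        "options": {
            "num_ctx": 16384,     # per-request context window (the Ollama default is 2048)
            "num_predict": 4096,  # hard cap on generated tokens
        },
    },
    timeout=1200,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```

Raising num_ctx grows the KV cache (already 37.5 GiB at 8192 in the log above), so it has to be balanced against available VRAM.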

@CROprogrammer commented on GitHub (Nov 7, 2024):

I've tried that solution, but then my model does not start at all

@rick-github commented on GitHub (Nov 7, 2024):

What solution did you try, and what errors did you receive?

@dhiltgen commented on GitHub (Nov 7, 2024):

This should no longer crash on 0.4.0, but I believe large context support on deepseek still needs work.

@dhiltgen commented on GitHub (Nov 8, 2024):

Let's track this with #5975

@FireAngelx commented on GitHub (Dec 17, 2024):

@dhiltgen I get the same result. I deployed deepseekv2.5-236b-q4_0 on 2×A100 80GB, and it crashes with the K-Shift error.

@FireAngelx commented on GitHub (Dec 17, 2024):

@dhiltgen I found that when it needs to produce a long response, the error appears.

Reference: github-starred/ollama#4803