[GH-ISSUE #5113] DeepSeek-Coder-V2-Lite-Instruct out of memory #65263

Closed
opened 2026-05-03 20:15:01 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @tincore on GitHub (Jun 18, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5113

What is the issue?

Hi,

Thanks for the great project.

I get a crash (OOM) when trying to load the new deepseek-coder-v2. Other models work fine. I've just upgraded to the latest pre-release just in case, but the behavior is the same.

```
jun 18 13:23:17 ollama[26949]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="15" port="41023" tid="137762752704512" timestamp=1718709797
jun 18 13:23:17 ollama[22256]: llama_model_loader: loaded meta data with 38 key-value pairs and 377 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-5ff0abeeac1d2dbdd5455c0b49ba3b29a9ce3c1fb181b2eef2e948689d55d046 (version GGUF V3 (latest))
jun 18 13:23:17 ollama[22256]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv   0:                       general.architecture str              = deepseek2
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv   1:                               general.name str              = DeepSeek-Coder-V2-Lite-Instruct
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv   2:                      deepseek2.block_count u32              = 27
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv   3:                   deepseek2.context_length u32              = 163840
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv   4:                 deepseek2.embedding_length u32              = 2048
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv   5:              deepseek2.feed_forward_length u32              = 10944
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv   6:             deepseek2.attention.head_count u32              = 16
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv   7:          deepseek2.attention.head_count_kv u32              = 16
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv   8:                   deepseek2.rope.freq_base f32              = 10000.000000
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv   9: deepseek2.attention.layer_norm_rms_epsilon f32              = 0.000001
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  10:                deepseek2.expert_used_count u32              = 6
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  11:                          general.file_type u32              = 2
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  12:        deepseek2.leading_dense_block_count u32              = 1
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  13:                       deepseek2.vocab_size u32              = 102400
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  14:           deepseek2.attention.kv_lora_rank u32              = 512
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  15:             deepseek2.attention.key_length u32              = 192
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  16:           deepseek2.attention.value_length u32              = 128
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  17:       deepseek2.expert_feed_forward_length u32              = 1408
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  18:                     deepseek2.expert_count u32              = 64
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  19:              deepseek2.expert_shared_count u32              = 2
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  20:             deepseek2.expert_weights_scale f32              = 1.000000
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  21:             deepseek2.rope.dimension_count u32              = 64
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  22:                deepseek2.rope.scaling.type str              = yarn
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  23:              deepseek2.rope.scaling.factor f32              = 40.000000
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  24: deepseek2.rope.scaling.original_context_length u32              = 4096
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  25: deepseek2.rope.scaling.yarn_log_multiplier f32              = 0.070700
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  26:                       tokenizer.ggml.model str              = gpt2
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  27:                         tokenizer.ggml.pre str              = deepseek-llm
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  28:                      tokenizer.ggml.tokens arr[str,102400]  = ["!", "\"", "#", "$", "%", "&", "'", ...
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  29:                  tokenizer.ggml.token_type arr[i32,102400]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  30:                      tokenizer.ggml.merges arr[str,99757]   = ["Ġ Ġ", "Ġ t", "Ġ a", "i n", "h e...
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  31:                tokenizer.ggml.bos_token_id u32              = 100000
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  32:                tokenizer.ggml.eos_token_id u32              = 100001
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  33:            tokenizer.ggml.padding_token_id u32              = 100001
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  34:               tokenizer.ggml.add_bos_token bool             = true
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  35:               tokenizer.ggml.add_eos_token bool             = false
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  36:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
jun 18 13:23:17 ollama[22256]: llama_model_loader: - kv  37:               general.quantization_version u32              = 2
jun 18 13:23:17 ollama[22256]: llama_model_loader: - type  f32:  108 tensors
jun 18 13:23:17 ollama[22256]: llama_model_loader: - type q4_0:  268 tensors
jun 18 13:23:17 ollama[22256]: llama_model_loader: - type q6_K:    1 tensors
jun 18 13:23:17 ollama[22256]: llm_load_vocab: special tokens cache size = 2400
jun 18 13:23:17 ollama[22256]: time=2024-06-18T13:23:17.498+02:00 level=INFO source=server.go:582 msg="waiting for server to become available" status="llm server loading model"
jun 18 13:23:17 ollama[22256]: llm_load_vocab: token to piece cache size = 0.6661 MB
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: format           = GGUF V3 (latest)
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: arch             = deepseek2
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: vocab type       = BPE
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_vocab          = 102400
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_merges         = 99757
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_ctx_train      = 163840
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_embd           = 2048
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_head           = 16
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_head_kv        = 16
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_layer          = 27
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_rot            = 64
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_embd_head_k    = 192
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_embd_head_v    = 128
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_gqa            = 1
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_embd_k_gqa     = 3072
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_embd_v_gqa     = 2048
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: f_norm_eps       = 0.0e+00
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: f_logit_scale    = 0.0e+00
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_ff             = 10944
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_expert         = 64
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_expert_used    = 6
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: causal attn      = 1
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: pooling type     = 0
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: rope type        = 0
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: rope scaling     = yarn
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: freq_base_train  = 10000.0
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: freq_scale_train = 0.025
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_ctx_orig_yarn  = 4096
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: rope_finetuned   = unknown
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: ssm_d_conv       = 0
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: ssm_d_inner      = 0
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: ssm_d_state      = 0
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: ssm_dt_rank      = 0
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: model type       = 16B
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: model ftype      = Q4_0
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: model params     = 15.71 B
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: model size       = 8.29 GiB (4.53 BPW)
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: general.name     = DeepSeek-Coder-V2-Lite-Instruct
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: BOS token        = 100000 '<|begin▁of▁sentence|>'
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: EOS token        = 100001 '<|end▁of▁sentence|>'
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: PAD token        = 100001 '<|end▁of▁sentence|>'
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: LF token         = 126 'Ä'
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_layer_dense_lead   = 1
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_lora_q             = 0
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_lora_kv            = 512
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_ff_exp             = 1408
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: n_expert_shared      = 2
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: expert_weights_scale = 1.0
jun 18 13:23:17 ollama[22256]: llm_load_print_meta: rope_yarn_log_mul    = 0.0707
jun 18 13:23:17 ollama[22256]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   yes
jun 18 13:23:17 ollama[22256]: ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
jun 18 13:23:17 ollama[22256]: ggml_cuda_init: found 1 CUDA devices:
jun 18 13:23:17 ollama[22256]:   Device 0: NVIDIA GeForce RTX 3070 Laptop GPU, compute capability 8.6, VMM: yes
jun 18 13:23:17 ollama[22256]: llm_load_tensors: ggml ctx size =    0.35 MiB
jun 18 13:23:18 ollama[22256]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 8376.27 MiB on device 0: cudaMalloc failed: out of memory
jun 18 13:23:18 ollama[22256]: llama_model_load: error loading model: unable to allocate backend buffer
jun 18 13:23:18 ollama[22256]: llama_load_model_from_file: exception loading model
jun 18 13:23:18 ollama[22256]: terminate called after throwing an instance of 'std::runtime_error'
jun 18 13:23:18 ollama[22256]:   what():  unable to allocate backend buffer
jun 18 13:23:18 ollama[22256]: time=2024-06-18T13:23:18.757+02:00 level=INFO source=server.go:582 msg="waiting for server to become available" status="llm server error"
jun 18 13:23:19 ollama[22256]: time=2024-06-18T13:23:19.008+02:00 level=ERROR source=sched.go:388 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped) cudaMalloc failed: out of memory"
jun 18 13:23:19 ollama[22256]: [GIN] 2024/06/18 - 13:23:19 | 500 |  3.168403177s |       127.0.0.1 | POST     "/api/chat"
jun 18 13:23:24 ollama[22256]: time=2024-06-18T13:23:24.231+02:00 level=WARN source=sched.go:575 msg="gpu VRAM usage didn't recover within timeout" seconds=5.223479956 model=/usr/share/ollama/.ollama/models/blobs/sha256-5ff0abeeac1d2dbdd5455c0b49ba3b29a9ce3c1fb181b2eef2e948689d55d046
```
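If I read the log right, the arithmetic makes a full offload impossible here: the Q4_0 weights are 8.29 GiB (~8489 MiB), and the loader asks cudaMalloc for a single 8376.27 MiB buffer, but the card only has 8192 MiB of VRAM in total, with 168 MiB already taken by the desktop (see nvidia-smi below). So the weights alone can never fit on the GPU, before any KV cache or compute buffers are even allocated.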
```
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 555.52.04              Driver Version: 555.52.04      CUDA Version: 12.5     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3070 ...    Off |   00000000:01:00.0  On |                  N/A |
| N/A   60C    P0             35W /  115W |     168MiB /   8192MiB |     32%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
```
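As a workaround (my own sketch, not a confirmed fix from this thread), forcing a partial offload should let the model load by keeping some layers in system RAM. Ollama exposes this through its documented `num_gpu` option; the model tag and the layer count of 20 below are guesses, not values from this log:

```python
# Hypothetical workaround sketch, not a confirmed fix.
# "num_gpu" is Ollama's documented option for how many layers to offload
# to the GPU; capping it keeps the remaining weights in system RAM so the
# allocation no longer has to fit 8.29 GiB into 8 GiB of VRAM.
import json
import urllib.request

payload = {
    "model": "deepseek-coder-v2",        # assumed local tag for this model
    "prompt": "Write hello world in Go.",
    "stream": False,
    "options": {"num_gpu": 20},          # guessed value, below the full 27+1 layers
}

req = urllib.request.Request(
    "http://127.0.0.1:11434/api/generate",  # default Ollama endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

The same cap can be made persistent with `PARAMETER num_gpu 20` in a Modelfile; either way the trade-off is slower generation in exchange for a successful load.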

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.1.45-rc2, 0.1.44

GiteaMirror added the memory, bug labels 2026-05-03 20:15:01 -05:00