[GH-ISSUE #8885] deepseek-r1 model also exited a few minutes after ubuntu terminal logout #5760

Closed
opened 2026-04-12 17:05:03 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @longdexin on GitHub (Feb 6, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8885

What is the issue?

  1. OS: Ubuntu 24.04.1 LTS (GNU/Linux 6.8.0-1021-aws x86_64);
  2. I installed ollama 0.5.7 by running 'curl -fsSL https://ollama.com/install.sh | sh';
  3. I ran the model:
$ ollama run deepseek-r1:1.5b
pulling manifest 
pulling aabd4debf0c8... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 1.1 GB                         
pulling 369ca498f347... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏  387 B                         
pulling 6e4c38e1172f... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 1.1 KB                         
pulling f4d24e9138dd... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏  148 B                         
pulling a85fe2a2e58e... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏  487 B                         
verifying sha256 digest 
writing manifest 
success 
>>> Send a message (/? for help)
  4. I pressed Ctrl+D twice to log out of the Ubuntu terminal and waited 5 minutes;
  5. when I logged in to Ubuntu again and ran 'ollama ps':
$ ollama ps
NAME    ID    SIZE    PROCESSOR    UNTIL
  6. My model was no longer listed, but why?

Relevant log output

Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: time=2025-02-06T11:46:38.642Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:39795"
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: time=2025-02-06T11:46:38.755Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_load_model_from_file: using device CUDA0 (NVIDIA A10G) - 22342 MiB free
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: loaded meta data with 26 key-value pairs and 339 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc (version GGUF V3 (latest))
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - kv   0:                       general.architecture str              = qwen2
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - kv   1:                               general.type str              = model
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Qwen 1.5B
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Qwen
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - kv   4:                         general.size_label str              = 1.5B
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - kv   5:                          qwen2.block_count u32              = 28
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - kv   6:                       qwen2.context_length u32              = 131072
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - kv   7:                     qwen2.embedding_length u32              = 1536
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - kv   8:                  qwen2.feed_forward_length u32              = 8960
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - kv   9:                 qwen2.attention.head_count u32              = 12
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - kv  10:              qwen2.attention.head_count_kv u32              = 2
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - kv  11:                       qwen2.rope.freq_base f32              = 10000.000000
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - kv  12:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - kv  13:                          general.file_type u32              = 15
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = gpt2
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - kv  15:                         tokenizer.ggml.pre str              = qwen2
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 151646
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151643
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - kv  25:               general.quantization_version u32              = 2
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - type  f32:  141 tensors
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - type q4_K:  169 tensors
Feb 06 11:46:38 ip-10-106-3-105 ollama[2648]: llama_model_loader: - type q6_K:   29 tensors
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_vocab: special tokens cache size = 22
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_vocab: token to piece cache size = 0.9310 MB
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: format           = GGUF V3 (latest)
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: arch             = qwen2
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: vocab type       = BPE
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: n_vocab          = 151936
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: n_merges         = 151387
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: vocab_only       = 0
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: n_ctx_train      = 131072
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: n_embd           = 1536
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: n_layer          = 28
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: n_head           = 12
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: n_head_kv        = 2
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: n_rot            = 128
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: n_swa            = 0
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: n_embd_head_k    = 128
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: n_embd_head_v    = 128
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: n_gqa            = 6
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: n_embd_k_gqa     = 256
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: n_embd_v_gqa     = 256
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: f_norm_eps       = 0.0e+00
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: f_logit_scale    = 0.0e+00
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: n_ff             = 8960
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: n_expert         = 0
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: n_expert_used    = 0
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: causal attn      = 1
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: pooling type     = 0
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: rope type        = 2
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: rope scaling     = linear
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: freq_base_train  = 10000.0
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: freq_scale_train = 1
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: n_ctx_orig_yarn  = 131072
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: rope_finetuned   = unknown
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: ssm_d_conv       = 0
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: ssm_d_inner      = 0
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: ssm_d_state      = 0
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: ssm_dt_rank      = 0
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: ssm_dt_b_c_rms   = 0
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: model type       = 1.5B
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: model ftype      = Q4_K - Medium
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: model params     = 1.78 B
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: model size       = 1.04 GiB (5.00 BPW)
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: general.name     = DeepSeek R1 Distill Qwen 1.5B
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: BOS token        = 151646 '<|begin▁of▁sentence|>'
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: EOS token        = 151643 '<|end▁of▁sentence|>'
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: EOT token        = 151643 '<|end▁of▁sentence|>'
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: PAD token        = 151643 '<|end▁of▁sentence|>'
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: LF token         = 148848 'ÄĬ'
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: FIM PRE token    = 151659 '<|fim_prefix|>'
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: FIM SUF token    = 151661 '<|fim_suffix|>'
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: FIM MID token    = 151660 '<|fim_middle|>'
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: FIM PAD token    = 151662 '<|fim_pad|>'
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: FIM REP token    = 151663 '<|repo_name|>'
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: FIM SEP token    = 151664 '<|file_sep|>'
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: EOG token        = 151643 '<|end▁of▁sentence|>'
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: EOG token        = 151662 '<|fim_pad|>'
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: EOG token        = 151663 '<|repo_name|>'
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: EOG token        = 151664 '<|file_sep|>'
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_print_meta: max token length = 256
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_tensors: offloading 28 repeating layers to GPU
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_tensors: offloading output layer to GPU
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_tensors: offloaded 29/29 layers to GPU
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_tensors:   CPU_Mapped model buffer size =   125.19 MiB
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llm_load_tensors:        CUDA0 model buffer size =   934.70 MiB
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llama_new_context_with_model: n_seq_max     = 4
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llama_new_context_with_model: n_ctx         = 8192
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llama_new_context_with_model: n_ctx_per_seq = 2048
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llama_new_context_with_model: n_batch       = 2048
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llama_new_context_with_model: n_ubatch      = 512
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llama_new_context_with_model: flash_attn    = 0
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llama_new_context_with_model: freq_base     = 10000.0
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llama_new_context_with_model: freq_scale    = 1
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 28, can_shift = 1
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llama_kv_cache_init:      CUDA0 KV buffer size =   224.00 MiB
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llama_new_context_with_model: KV self size  =  224.00 MiB, K (f16):  112.00 MiB, V (f16):  112.00 MiB
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llama_new_context_with_model:  CUDA_Host  output buffer size =     2.34 MiB
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llama_new_context_with_model:      CUDA0 compute buffer size =   299.75 MiB
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llama_new_context_with_model:  CUDA_Host compute buffer size =    19.01 MiB
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llama_new_context_with_model: graph nodes  = 986
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: llama_new_context_with_model: graph splits = 2
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: time=2025-02-06T11:46:39.759Z level=INFO source=server.go:594 msg="llama runner started in 1.26 seconds"
Feb 06 11:46:39 ip-10-106-3-105 ollama[2648]: [GIN] 2025/02/06 - 11:46:39 | 200 |  4.329075938s |       127.0.0.1 | POST     "/api/generate"
Feb 06 11:48:14 ip-10-106-3-105 ollama[2648]: [GIN] 2025/02/06 - 11:48:14 | 200 |       38.99µs |       127.0.0.1 | HEAD     "/"
Feb 06 11:48:14 ip-10-106-3-105 ollama[2648]: [GIN] 2025/02/06 - 11:48:14 | 200 |      67.461µs |       127.0.0.1 | GET      "/api/ps"
Feb 06 11:49:49 ip-10-106-3-105 ollama[2648]: [GIN] 2025/02/06 - 11:49:49 | 200 |      53.451µs |       127.0.0.1 | HEAD     "/"
Feb 06 11:49:49 ip-10-106-3-105 ollama[2648]: [GIN] 2025/02/06 - 11:49:49 | 200 |      99.002µs |       127.0.0.1 | GET      "/api/ps"
Feb 06 11:51:00 ip-10-106-3-105 ollama[2648]: [GIN] 2025/02/06 - 11:51:00 | 200 |       41.43µs |       127.0.0.1 | HEAD     "/"
Feb 06 11:51:00 ip-10-106-3-105 ollama[2648]: [GIN] 2025/02/06 - 11:51:00 | 200 |      62.081µs |       127.0.0.1 | GET      "/api/ps"
Feb 06 11:52:56 ip-10-106-3-105 ollama[2648]: [GIN] 2025/02/06 - 11:52:56 | 200 |      30.771µs |       127.0.0.1 | HEAD     "/"
Feb 06 11:52:56 ip-10-106-3-105 ollama[2648]: [GIN] 2025/02/06 - 11:52:56 | 200 |       43.09µs |       127.0.0.1 | GET      "/api/ps"

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.5.7

GiteaMirror added the bug label 2026-04-12 17:05:03 -05:00
Author
Owner

@rick-github commented on GitHub (Feb 6, 2025):

https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-keep-a-model-loaded-in-memory-or-make-it-unload-immediately
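
The linked FAQ explains the behavior reported here: by default, ollama unloads a model after about five minutes of inactivity, which matches the timeline in the logs above. A minimal sketch of the two remedies the FAQ documents, reusing this issue's model name (a negative keep_alive duration means "keep loaded indefinitely"):

# Per-request: pass the keep_alive parameter on a generate (or chat) call
$ curl http://localhost:11434/api/generate -d '{"model": "deepseek-r1:1.5b", "keep_alive": -1}'

# Server-wide: set OLLAMA_KEEP_ALIVE for the systemd service installed by install.sh
$ sudo systemctl edit ollama.service
#   then add:
#   [Service]
#   Environment="OLLAMA_KEEP_ALIVE=-1"
$ sudo systemctl daemon-reload
$ sudo systemctl restart ollama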
Author
Owner

@longdexin commented on GitHub (Feb 6, 2025):

@rick-github , thank you so much, I will try it.

Reference: github-starred/ollama#5760