[GH-ISSUE #6031] Timeout to start model too little - progress stalls at 100% for 5 minutes when loading with swap #3776

Closed
opened 2026-04-12 14:36:29 -05:00 by GiteaMirror · 3 comments

Originally created by @forReason on GitHub (Jul 28, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6031

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

I am trying to run llama3.1:405b on hardware with very little power, through a swap file.
I'm not concerned about its speed.
However, the model can't load because:

```
ollama run llama3.1:405b --keepalive 5h
Error: timed out waiting for llama runner to start - progress 1.00 -
```

Is it possible to disable this timeout, or increase it? I'm quite certain it would load after a (long) while.
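For reference, a minimal Go sketch of the stall-based timeout behavior described here, assuming a watcher that resets its deadline only when the reported load progress advances (all names are illustrative, not Ollama's actual code). It shows why progress can sit at 1.00 and still time out: once progress stops advancing, nothing resets the timer while the weights are still paging in through swap.

```go
package main

import (
	"fmt"
	"time"
)

// waitForRunner polls progress until the runner reports ready, failing if
// progress has not advanced within stallTimeout. Once progress hits 1.00 it
// can no longer advance, so a swap-bound startup that has finished reading
// the weights but is still paging them in trips the timer anyway.
func waitForRunner(progress func() (float64, bool), stallTimeout time.Duration) error {
	last := -1.0
	deadline := time.Now().Add(stallTimeout)
	for {
		p, ready := progress()
		if ready {
			return nil
		}
		if p > last {
			last = p
			deadline = time.Now().Add(stallTimeout) // progress advanced: reset the stall timer
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for llama runner to start - progress %.2f", p)
		}
		time.Sleep(100 * time.Millisecond)
	}
}

func main() {
	start := time.Now()
	// Simulated loader: progress climbs to 1.00 in two seconds, then the
	// runner never becomes ready (as when tensors are still paging in).
	fake := func() (float64, bool) {
		p := time.Since(start).Seconds() / 2
		if p > 1 {
			p = 1
		}
		return p, false
	}
	if err := waitForRunner(fake, 3*time.Second); err != nil {
		fmt.Println(err) // timed out waiting for llama runner to start - progress 1.00
	}
}
```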

OS

Linux

GPU

Other

CPU

Other

Ollama version

0.3.0

GiteaMirror added the bug label 2026-04-12 14:36:29 -05:00

@dhiltgen commented on GitHub (Aug 9, 2024):

Can you share your server log?


@lyfuci commented on GitHub (Aug 16, 2024):

Maybe I'm hitting the same problem, but I'm not sure. My progress is 0.00, and the process runs in a Docker container.

```
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="31" port="40623" tid="140540594020352" timestamp=1723807357
llama_model_loader: loaded meta data with 29 key-value pairs and 1138 tensors from /root/.ollama/models/blobs/sha256-939fd971f03801a9447af720a78d1fc00833cadf05252b4fc871bfb70eafdda6 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = .
llama_model_loader: - kv   3:                           general.finetune str              = .
llama_model_loader: - kv   4:                           general.basename str              = Meta-Llama-3.1
llama_model_loader: - kv   5:                         general.size_label str              = 405B
llama_model_loader: - kv   6:                            general.license str              = llama3.1
llama_model_loader: - kv   7:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   8:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   9:                          llama.block_count u32              = 126
llama_model_loader: - kv  10:                       llama.context_length u32              = 131072
llama_model_loader: - kv  11:                     llama.embedding_length u32              = 16384
llama_model_loader: - kv  12:                  llama.feed_forward_length u32              = 53248
llama_model_loader: - kv  13:                 llama.attention.head_count u32              = 128
llama_model_loader: - kv  14:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  15:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  16:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  17:                          general.file_type u32              = 2
llama_model_loader: - kv  18:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  19:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  24:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  27:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  28:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  254 tensors
llama_model_loader: - type q4_0:  883 tensors
llama_model_loader: - type q6_K:    1 tensors
time=2024-08-16T11:22:37.273Z level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 16384
llm_load_print_meta: n_layer          = 126
llm_load_print_meta: n_head           = 128
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 16
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 53248
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = ?B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 405.85 B
llm_load_print_meta: model size       = 213.13 GiB (4.51 BPW)
llm_load_print_meta: general.name     = .
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 4 CUDA devices:
  Device 0: NVIDIA A100 80GB PCIe, compute capability 8.0, VMM: yes
  Device 1: NVIDIA A100 80GB PCIe, compute capability 8.0, VMM: yes
  Device 2: NVIDIA A100 80GB PCIe, compute capability 8.0, VMM: yes
  Device 3: NVIDIA A100 80GB PCIe, compute capability 8.0, VMM: yes
llm_load_tensors: ggml ctx size =    2.66 MiB
time=2024-08-16T11:22:38.730Z level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server not responding"
time=2024-08-16T11:22:45.301Z level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server loading model"
time=2024-08-16T11:27:37.029Z level=ERROR source=sched.go:451 msg="error loading llama server" error="timed out waiting for llama runner to start - progress 0.00 - "
[GIN] 2024/08/16 - 11:27:37 | 500 |          5m5s |       127.0.0.1 | POST     "/api/chat"
time=2024-08-16T11:27:42.746Z level=WARN source=sched.go:642 msg="gpu VRAM usage didn't recover within timeout" seconds=5.716938184 model=/root/.ollama/models/blobs/sha256-939fd971f03801a9447af720a78d1fc00833cadf05252b4fc871bfb70eafdda6
time=2024-08-16T11:27:43.589Z level=WARN source=sched.go:642 msg="gpu VRAM usage didn't recover within timeout" seconds=6.55945734 model=/root/.ollama/models/blobs/sha256-939fd971f03801a9447af720a78d1fc00833cadf05252b4fc871bfb70eafdda6
```

@stl314159 commented on GitHub (Aug 29, 2024):

I am having this issue as well. The model appears to be loading, but it times out once it hits the 5-minute mark. It would be nice if the stall duration were configurable:
https://github.com/ollama/ollama/blob/56346ccfa3e51eec51fc26ae8e91fc88cb74a9b8/llm/server.go#L587
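A sketch of what a configurable stall duration could look like, reading an override from an environment variable. The `OLLAMA_LOAD_TIMEOUT` name, its parsing semantics, and the disable-on-non-positive behavior are assumptions for illustration only; check your Ollama version's documentation for what it actually supports.

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// loadTimeout returns the runner-start stall timeout, defaulting to the
// 5-minute value this issue runs into. OLLAMA_LOAD_TIMEOUT is a hypothetical
// override used purely for illustration; a non-positive duration is treated
// as "effectively disabled".
func loadTimeout() time.Duration {
	const fallback = 5 * time.Minute
	raw := os.Getenv("OLLAMA_LOAD_TIMEOUT")
	if raw == "" {
		return fallback
	}
	d, err := time.ParseDuration(raw) // accepts values like "30m" or "2h"
	if err != nil {
		fmt.Fprintf(os.Stderr, "invalid OLLAMA_LOAD_TIMEOUT %q, using %s\n", raw, fallback)
		return fallback
	}
	if d <= 0 {
		return time.Duration(1 << 62) // ~146 years: effectively no timeout
	}
	return d
}

func main() {
	fmt.Println("runner start timeout:", loadTimeout())
}
```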
