[GH-ISSUE #9292] Ollama model stops working after reboot (EC2 / Linux) (Default EBS) #6060

Closed
opened 2026-04-12 17:23:23 -05:00 by GiteaMirror · 6 comments

Originally created by @mknwebsolutions on GitHub (Feb 22, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9292

What is the issue?

I installed Ollama with the standard `curl -fsSL https://ollama.com/install.sh | sh` installer and then pulled mistral-nemo. Everything works after that; however, after a reboot the model "loads" but never produces a response. Here are the error messages / logs:

When requesting via the API, it hangs completely until I cancel the request, at which point it produces the following errors:
ollama[4096]: time=2025-02-22T18:48:56.609Z level=WARN source=server.go:564 msg="client connection closed before server finished loading, aborting load"
ollama[4096]: time=2025-02-22T18:48:56.609Z level=ERROR source=sched.go:455 msg="error loading llama server" error="timed out waiting for llama runner to start: context canceled"
ollama[4096]: [GIN] 2025/02/22 - 18:48:56 | 499 | 1m56s | POST "/api/generate"
ollama[4096]: time=2025-02-22T18:49:01.799Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.189934493 model=/usr/share/ollama/.ollama/models/blobs/sha256-b559938ab7a0392fc9ea9675b82280f2a15669ec3e0e0fc491c9cb0a7681cf94
ollama[4096]: time=2025-02-22T18:49:02.048Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.439394722 model=/usr/share/ollama/.ollama/models/blobs/sha256-b559938ab7a0392fc9ea9675b82280f2a15669ec3e0e0fc491c9cb0a7681cf94
ollama[4096]: time=2025-02-22T18:49:02.299Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.6898286030000005 model=/usr/share/ollama/.ollama/models/blobs/sha256-b559938ab7a0392fc9ea9675b82280f2a15669ec3e0e0fc491c9cb0a7681cf94

Relevant log output

ollama[4096]: time=2025-02-22T18:47:00.597Z level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-b559938ab7a0392fc9ea9675b82280f2a15669ec3e0e0fc491c9cb0a7681cf94 gpu=GPU-c5e8c2b6-b114-ab26-b242-af9ef1805adf parallel=4 available=15530196992 required="8.7 GiB"
 ollama[4096]: time=2025-02-22T18:47:00.772Z level=INFO source=server.go:100 msg="system memory" total="15.4 GiB" free="14.5 GiB" free_swap="0 B"
 ollama[4096]: time=2025-02-22T18:47:00.773Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=41 layers.offload=41 layers.split="" memory.available="[14.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="8.7 GiB" memory.required.partial="8.7 GiB" memory.required.kv="1.2 GiB" memory.required.allocations="[8.7 GiB]" memory.weights.total="7.0 GiB" memory.weights.repeating="6.5 GiB" memory.weights.nonrepeating="525.0 MiB" memory.graph.full="568.0 MiB" memory.graph.partial="801.0 MiB"
 ollama[4096]: time=2025-02-22T18:47:00.773Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-b559938ab7a0392fc9ea9675b82280f2a15669ec3e0e0fc491c9cb0a7681cf94 --ctx-size 8192 --batch-size 512 --n-gpu-layers 41 --threads 2 --parallel 4 --port 43863"
 ollama[4096]: time=2025-02-22T18:47:00.773Z level=INFO source=sched.go:449 msg="loaded runners" count=1
 ollama[4096]: time=2025-02-22T18:47:00.773Z level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
 ollama[4096]: time=2025-02-22T18:47:00.774Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
 ollama[4096]: time=2025-02-22T18:47:00.789Z level=INFO source=runner.go:936 msg="starting go runner"
 ollama[4096]: time=2025-02-22T18:47:00.789Z level=INFO source=runner.go:937 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | cgo(gcc)" threads=2
 ollama[4096]: time=2025-02-22T18:47:00.789Z level=INFO source=runner.go:995 msg="Server listening on 127.0.0.1:43863"
 ollama[4096]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
 ollama[4096]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
 ollama[4096]: ggml_cuda_init: found 1 CUDA devices:
 ollama[4096]:   Device 0: Tesla T4, compute capability 7.5, VMM: yes
 ollama[4096]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
 ollama[4096]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-skylakex.so
 ollama[4096]: llama_load_model_from_file: using device CUDA0 (Tesla T4) - 14810 MiB free
 ollama[4096]: time=2025-02-22T18:47:01.025Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
 ollama[4096]: llama_model_loader: loaded meta data with 35 key-value pairs and 363 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-b559938ab7a0392fc9ea9675b82280f2a15669ec3e0e0fc491c9cb0a7681cf94 (version GGUF V3 (latest))
 ollama[4096]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
 ollama[4096]: llama_model_loader: - kv   0:                       general.architecture str              = llama
 ollama[4096]: llama_model_loader: - kv   1:                               general.type str              = model
 ollama[4096]: llama_model_loader: - kv   2:                               general.name str              = Mistral Nemo Instruct 2407
 ollama[4096]: llama_model_loader: - kv   3:                            general.version str              = 2407
 ollama[4096]: llama_model_loader: - kv   4:                           general.finetune str              = Instruct
 ollama[4096]: llama_model_loader: - kv   5:                           general.basename str              = Mistral-Nemo
 ollama[4096]: llama_model_loader: - kv   6:                         general.size_label str              = 12B
 ollama[4096]: llama_model_loader: - kv   7:                            general.license str              = apache-2.0
 ollama[4096]: llama_model_loader: - kv   8:                          general.languages arr[str,9]       = ["en", "fr", "de", "es", "it", "pt", ...
 ollama[4096]: llama_model_loader: - kv   9:                          llama.block_count u32              = 40
 ollama[4096]: llama_model_loader: - kv  10:                       llama.context_length u32              = 1024000
 ollama[4096]: llama_model_loader: - kv  11:                     llama.embedding_length u32              = 5120
 ollama[4096]: llama_model_loader: - kv  12:                  llama.feed_forward_length u32              = 14336
 ollama[4096]: llama_model_loader: - kv  13:                 llama.attention.head_count u32              = 32
 ollama[4096]: llama_model_loader: - kv  14:              llama.attention.head_count_kv u32              = 8
 ollama[4096]: llama_model_loader: - kv  15:                       llama.rope.freq_base f32              = 1000000.000000
 ollama[4096]: llama_model_loader: - kv  16:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
 ollama[4096]: llama_model_loader: - kv  17:                 llama.attention.key_length u32              = 128
 ollama[4096]: llama_model_loader: - kv  18:               llama.attention.value_length u32              = 128
 ollama[4096]: llama_model_loader: - kv  19:                          general.file_type u32              = 2
 ollama[4096]: llama_model_loader: - kv  20:                           llama.vocab_size u32              = 131072
 ollama[4096]: llama_model_loader: - kv  21:                 llama.rope.dimension_count u32              = 128
 ollama[4096]: llama_model_loader: - kv  22:            tokenizer.ggml.add_space_prefix bool             = false
 ollama[4096]: llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
 ollama[4096]: llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = tekken
 ollama[4096]: llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,131072]  = ["<unk>", "<s>", "</s>", "[INST]", "[...
 ollama[4096]: llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,131072]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
 ollama[4096]: [132B blob data]
 ollama[4096]: llama_model_loader: - kv  28:                tokenizer.ggml.bos_token_id u32              = 1
 ollama[4096]: llama_model_loader: - kv  29:                tokenizer.ggml.eos_token_id u32              = 2
 ollama[4096]: llama_model_loader: - kv  30:            tokenizer.ggml.unknown_token_id u32              = 0
 ollama[4096]: llama_model_loader: - kv  31:               tokenizer.ggml.add_bos_token bool             = true
 ollama[4096]: llama_model_loader: - kv  32:               tokenizer.ggml.add_eos_token bool             = false
 ollama[4096]: llama_model_loader: - kv  33:                    tokenizer.chat_template str              = {%- if messages[0]['role'] == 'system...
 ollama[4096]: llama_model_loader: - kv  34:               general.quantization_version u32              = 2
 ollama[4096]: llama_model_loader: - type  f32:   81 tensors
 ollama[4096]: llama_model_loader: - type q4_0:  281 tensors
 ollama[4096]: llama_model_loader: - type q6_K:    1 tensors
 ollama[4096]: llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
 ollama[4096]: llm_load_vocab: special tokens cache size = 1000
 ollama[4096]: llm_load_vocab: token to piece cache size = 0.8498 MB
 ollama[4096]: llm_load_print_meta: format           = GGUF V3 (latest)
 ollama[4096]: llm_load_print_meta: arch             = llama
 ollama[4096]: llm_load_print_meta: vocab type       = BPE
 ollama[4096]: llm_load_print_meta: n_vocab          = 131072
 ollama[4096]: llm_load_print_meta: n_merges         = 269443
 ollama[4096]: llm_load_print_meta: vocab_only       = 0
 ollama[4096]: llm_load_print_meta: n_ctx_train      = 1024000
 ollama[4096]: llm_load_print_meta: n_embd           = 5120
 ollama[4096]: llm_load_print_meta: n_layer          = 40
 ollama[4096]: llm_load_print_meta: n_head           = 32
 ollama[4096]: llm_load_print_meta: n_head_kv        = 8
 ollama[4096]: llm_load_print_meta: n_rot            = 128
 ollama[4096]: llm_load_print_meta: n_swa            = 0
 ollama[4096]: llm_load_print_meta: n_embd_head_k    = 128
 ollama[4096]: llm_load_print_meta: n_embd_head_v    = 128
 ollama[4096]: llm_load_print_meta: n_gqa            = 4
 ollama[4096]: llm_load_print_meta: n_embd_k_gqa     = 1024
 ollama[4096]: llm_load_print_meta: n_embd_v_gqa     = 1024
 ollama[4096]: llm_load_print_meta: f_norm_eps       = 0.0e+00
 ollama[4096]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
 ollama[4096]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
 ollama[4096]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
 ollama[4096]: llm_load_print_meta: f_logit_scale    = 0.0e+00
 ollama[4096]: llm_load_print_meta: n_ff             = 14336
 ollama[4096]: llm_load_print_meta: n_expert         = 0
 ollama[4096]: llm_load_print_meta: n_expert_used    = 0
 ollama[4096]: llm_load_print_meta: causal attn      = 1
 ollama[4096]: llm_load_print_meta: pooling type     = 0
 ollama[4096]: llm_load_print_meta: rope type        = 0
 ollama[4096]: llm_load_print_meta: rope scaling     = linear
 ollama[4096]: llm_load_print_meta: freq_base_train  = 1000000.0
 ollama[4096]: llm_load_print_meta: freq_scale_train = 1
 ollama[4096]: llm_load_print_meta: n_ctx_orig_yarn  = 1024000
 ollama[4096]: llm_load_print_meta: rope_finetuned   = unknown
 ollama[4096]: llm_load_print_meta: ssm_d_conv       = 0
 ollama[4096]: llm_load_print_meta: ssm_d_inner      = 0
 ollama[4096]: llm_load_print_meta: ssm_d_state      = 0
 ollama[4096]: llm_load_print_meta: ssm_dt_rank      = 0
 ollama[4096]: llm_load_print_meta: ssm_dt_b_c_rms   = 0
 ollama[4096]: llm_load_print_meta: model type       = 13B
 ollama[4096]: llm_load_print_meta: model ftype      = Q4_0
 ollama[4096]: llm_load_print_meta: model params     = 12.25 B
 ollama[4096]: llm_load_print_meta: model size       = 6.58 GiB (4.61 BPW)
 ollama[4096]: llm_load_print_meta: general.name     = Mistral Nemo Instruct 2407
 ollama[4096]: llm_load_print_meta: BOS token        = 1 '<s>'
 ollama[4096]: llm_load_print_meta: EOS token        = 2 '</s>'
 ollama[4096]: llm_load_print_meta: UNK token        = 0 '<unk>'
 ollama[4096]: llm_load_print_meta: LF token         = 1196 'Ä'
 ollama[4096]: llm_load_print_meta: EOG token        = 2 '</s>'
 ollama[4096]: llm_load_print_meta: max token length = 150

OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-12 17:23:23 -05:00

@rick-github commented on GitHub (Feb 22, 2025):

ollama[4096]: time=2025-02-22T18:48:56.609Z level=WARN source=server.go:564 msg="client connection closed before server finished loading, aborting load"
ollama[4096]: [GIN] 2025/02/22 - 18:48:56 | 499 | 1m56s | POST "/api/generate"

The client closed the connection before the model finished loading, so the model load was aborted. You can preload the model before the client connects by running

ollama run mistral-nemo:latest ''
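
For API clients, the same preload can be done over HTTP: Ollama's documented behavior is that a generate request carrying only the model name loads the model without running a completion. A minimal sketch:

```
curl http://localhost:11434/api/generate -d '{"model": "mistral-nemo"}'
```

Once that returns, the model stays resident for the keep_alive window (5 minutes by default), so subsequent requests don't hit the load timeout.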

@mknwebsolutions commented on GitHub (Feb 22, 2025):

@rick-github I closed the connection because nothing was happening.

When I run `ollama run mistral-nemo:latest ''`, it gets stuck in a loop. I can confirm that RAM, vCPU, and GPU are all in a healthy state.

e.g., ollama run mistral-nemo:latest ''
⠙⠋⠇⠙⠋⠇⠙⠋⠇

While watching the log:
ollama[4096]: time=2025-02-22T19:05:29.006Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.193704102 model=/usr/share/ollama/.ollama/models/blobs/sha256-b559938ab7a0392fc9ea9675b82280f2a15669ec3e0e0fc491c9cb0a7681cf94

nvidia-smi output below:
![Image](https://github.com/user-attachments/assets/bd091af9-ac06-4e72-93fa-617d49f06831)


@rick-github commented on GitHub (Feb 22, 2025):

e.g., ollama run mistral-nemo:latest ''
⠙⠋⠇⠙⠋⠇⠙⠋⠇

The model is loading. Set OLLAMA_DEBUG=1 and the logs will show the load progression.

ollama[4096]: time=2025-02-22T19:05:29.006Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.193704102 model=/usr/share/ollama/.ollama/models/blobs/sha256-b559938ab7a0392fc9ea9675b82280f2a15669ec3e0e0fc491c9cb0a7681cf94

This is just a warning: the model was unloaded (or failed to load), and Ollama is simply watching for the allocated VRAM to be freed.


@mknwebsolutions commented on GitHub (Feb 22, 2025):

This is the detailed debug log:

llm_load_tensors: tensor 'token_embd.weight' (q4_0) (and 0 others) cannot be used with preferred buffer type CUDA_Host, using CPU instead
[3289]: time=2025-02-22T21:15:51.642Z level=ERROR source=sched.go:455 msg="error loading llama server" error="timed out waiting for llama runner to start - progress 0.00 - "
[3289]: time=2025-02-22T21:15:51.642Z level=DEBUG source=sched.go:458 msg="triggering expiration for failed load" model=/usr/share/ollama/.ollama/models/blobs/sha256-b559938ab7a0392fc9ea9675b82280f2a15669ec3e0e0fc491c9cb0a7681cf94
[3289]: time=2025-02-22T21:15:51.642Z level=DEBUG source=sched.go:360 msg="runner expired event received" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-b559938ab7a0392fc9ea9675b82280f2a15669ec3e0e0fc491c9cb0a7681cf94
[3289]: time=2025-02-22T21:15:51.642Z level=DEBUG source=sched.go:375 msg="got lock to unload" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-b559938ab7a0392fc9ea9675b82280f2a15669ec3e0e0fc491c9cb0a7681cf94


@rick-github commented on GitHub (Feb 22, 2025):

[3289]: time=2025-02-22T21:15:51.642Z level=ERROR source=sched.go:455 msg="error loading llama server" error="timed out waiting for llama runner to start - progress 0.00 - "

Set OLLAMA_LOAD_TIMEOUT=30m in the server environment.
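
On the standard Linux install, Ollama runs as a systemd service, so server environment variables go into a unit override. A sketch of the documented approach, assuming the default ollama.service unit name:

```
sudo systemctl edit ollama.service
# in the editor that opens, add:
#   [Service]
#   Environment="OLLAMA_LOAD_TIMEOUT=30m"
# (the same mechanism applies to the OLLAMA_DEBUG=1 suggestion above)
sudo systemctl daemon-reload
sudo systemctl restart ollama
journalctl -u ollama -f   # follow the server logs
```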

It seems that loading is slow, which may be due to slow disk reads or slow VRAM writes. What's the output of

dd if=/usr/share/ollama/.ollama/models/blobs/sha256-b559938ab7a0392fc9ea9675b82280f2a15669ec3e0e0fc491c9cb0a7681cf94 of=/dev/null bs=1M

On my system:

$ dd if=sha256-b559938ab7a0392fc9ea9675b82280f2a15669ec3e0e0fc491c9cb0a7681cf94 of=/dev/null bs=1M
6744+1 records in
6744+1 records out
7071700672 bytes (7.1 GB, 6.6 GiB) copied, 2.55736 s, 2.8 GB/s

@mknwebsolutions commented on GitHub (Feb 23, 2025):

@rick-github that's the culprit! EC2 gp3 with 3000 IOPS + 125 MB/s throughput is yielding only 6 MB/s after launch. Thank you!
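
A plausible explanation for the post-launch slowness (an inference, not confirmed in this thread): EBS volumes created from snapshots, including AMI root volumes, fetch blocks from S3 lazily on first read, so throughput stays far below the provisioned rate until each block has been read once. One way to pre-warm just the model blob after boot, reusing the dd read test from above:

```
# First read after launch is slow (blocks pulled from S3); a second pass
# runs at full gp3 throughput once the blocks are initialized.
sudo dd if=/usr/share/ollama/.ollama/models/blobs/sha256-b559938ab7a0392fc9ea9675b82280f2a15669ec3e0e0fc491c9cb0a7681cf94 of=/dev/null bs=1M
```

AWS also documents initializing an entire volume with fio for the same reason.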

Reference: github-starred/ollama#6060