[GH-ISSUE #8699] Set custom timeout with API call. #67694

Closed
opened 2026-05-04 11:21:05 -05:00 by GiteaMirror · 17 comments
Owner

Originally created by @philippstoboy on GitHub (Jan 30, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8699

What is the issue?

Hello,

I am running ollama inside a Docker container on my Unraid server. The server has a total of 192GB of regular RAM. Now, I'm trying to run DeepSeek-R1 with 4-bit quantization. Following #8654, I faked my memory size to 500GB, because the model won't run if it thinks there aren't at least 446GB (or something similar) available. Additionally, I enabled mmap in the API call. Here is the initial command I used:

curl http://localhost:11434/api/generate -d '{"model":"deepseek-r1:671b","options":{"use_mmap":true}}'

Now, when I run this, it seems like nothing is happening for about 4 minutes while the logs fill with information about the model being loaded. Then the process times out.
(time=2025-01-30T16:36:39.209Z level=ERROR source=sched.go:455 msg="error loading llama server" error="timed out waiting for llama runner to start - progress 0.00 - " [GIN] 2025/01/30 - 16:36:39 | 500 | 5m0s | 127.0.0.1 | POST "/api/generate")

As the log shows, it took 5m until the timeout, so my question is how I can disable this timeout or increase it by a lot.

I already set Env Vars as of:

  1. OLLAMA_TIMEOUT=120000000
  2. OLLAMA_LOAD_TIMEOUT=120000000

Thanks in advance!

OS

Docker

GPU

No response

CPU

Intel

Ollama version

0.5.7-0-ga420a45-dirty

GiteaMirror added the bug label 2026-05-04 11:21:05 -05:00
@rick-github commented on GitHub (Jan 30, 2025):

OLLAMA_TIMEOUT is not an ollama environment variable.

OLLAMA_LOAD_TIMEOUT=120000000 is about 3.8 years (a bare number is interpreted as seconds), so it should work. What do the server logs show?
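
(Editor's note: a quick sanity check of the 3.8-years figure, assuming a bare OLLAMA_LOAD_TIMEOUT value is read as seconds.)

```shell
# 120000000 seconds expressed in (365.25-day) years
awk 'BEGIN { printf "%.1f years\n", 120000000 / (365.25 * 24 * 3600) }'
# → 3.8 years
```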

@philippstoboy commented on GitHub (Jan 30, 2025):

> OLLAMA_TIMEOUT is not an ollama environment variable.
>
> OLLAMA_LOAD_TIMEOUT=120000000 is 3.8 years so should work. What do the server logs show?

Should I provide the whole log content after the API-Call?

@philippstoboy commented on GitHub (Jan 30, 2025):

Here's what happens after the curl command is run:

time=2025-01-30T17:03:33.224Z level=INFO source=server.go:104 msg="system memory" total="188.9 GiB" free="488.3 GiB" free_swap="0 B" time=2025-01-30T17:03:33.227Z level=INFO source=memory.go:356 msg="offload to cpu" layers.requested=-1 layers.model=62 layers.offload=0 layers.split="" memory.available="[488.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="417.4 GiB" memory.required.partial="0 B" memory.required.kv="38.1 GiB" memory.required.allocations="[417.4 GiB]" memory.weights.total="413.6 GiB" memory.weights.repeating="412.9 GiB" memory.weights.nonrepeating="725.0 MiB" memory.graph.full="2.2 GiB" memory.graph.partial="3.0 GiB" time=2025-01-30T17:03:33.228Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cpu_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 --ctx-size 8192 --batch-size 512 --threads 12 --parallel 4 --port 35387" time=2025-01-30T17:03:33.229Z level=INFO source=sched.go:449 msg="loaded runners" count=1 time=2025-01-30T17:03:33.229Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding" time=2025-01-30T17:03:33.229Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error" time=2025-01-30T17:03:33.294Z level=INFO source=runner.go:936 msg="starting go runner" time=2025-01-30T17:03:33.300Z level=INFO source=runner.go:937 msg=system info="CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=12 time=2025-01-30T17:03:33.300Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:35387" time=2025-01-30T17:03:33.482Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model" llama_model_loader: loaded meta data with 42 key-value pairs and 1025 tensors from /root/.ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 
(version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = deepseek2 llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.size_label str = 256x20B llama_model_loader: - kv 3: deepseek2.block_count u32 = 61 llama_model_loader: - kv 4: deepseek2.context_length u32 = 163840 llama_model_loader: - kv 5: deepseek2.embedding_length u32 = 7168 llama_model_loader: - kv 6: deepseek2.feed_forward_length u32 = 18432 llama_model_loader: - kv 7: deepseek2.attention.head_count u32 = 128 llama_model_loader: - kv 8: deepseek2.attention.head_count_kv u32 = 128 llama_model_loader: - kv 9: deepseek2.rope.freq_base f32 = 10000.000000 llama_model_loader: - kv 10: deepseek2.attention.layer_norm_rms_epsilon f32 = 0.000001 llama_model_loader: - kv 11: deepseek2.expert_used_count u32 = 8 llama_model_loader: - kv 12: deepseek2.leading_dense_block_count u32 = 3 llama_model_loader: - kv 13: deepseek2.vocab_size u32 = 129280 llama_model_loader: - kv 14: deepseek2.attention.q_lora_rank u32 = 1536 llama_model_loader: - kv 15: deepseek2.attention.kv_lora_rank u32 = 512 llama_model_loader: - kv 16: deepseek2.attention.key_length u32 = 192 llama_model_loader: - kv 17: deepseek2.attention.value_length u32 = 128 llama_model_loader: - kv 18: deepseek2.expert_feed_forward_length u32 = 2048 llama_model_loader: - kv 19: deepseek2.expert_count u32 = 256 llama_model_loader: - kv 20: deepseek2.expert_shared_count u32 = 1 llama_model_loader: - kv 21: deepseek2.expert_weights_scale f32 = 2.500000 llama_model_loader: - kv 22: deepseek2.expert_weights_norm bool = true llama_model_loader: - kv 23: deepseek2.expert_gating_func u32 = 2 llama_model_loader: - kv 24: deepseek2.rope.dimension_count u32 = 64 llama_model_loader: - kv 25: deepseek2.rope.scaling.type str = yarn llama_model_loader: - kv 26: deepseek2.rope.scaling.factor f32 = 40.000000 
llama_model_loader: - kv 27: deepseek2.rope.scaling.original_context_length u32 = 4096 llama_model_loader: - kv 28: deepseek2.rope.scaling.yarn_log_multiplier f32 = 0.100000 llama_model_loader: - kv 29: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 30: tokenizer.ggml.pre str = deepseek-v3 llama_model_loader: - kv 31: tokenizer.ggml.tokens arr[str,129280] = ["<|begin▁of▁sentence|>", "<�... llama_model_loader: - kv 32: tokenizer.ggml.token_type arr[i32,129280] = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 33: tokenizer.ggml.merges arr[str,127741] = ["Ġ t", "Ġ a", "i n", "Ġ Ġ", "h e... llama_model_loader: - kv 34: tokenizer.ggml.bos_token_id u32 = 0 llama_model_loader: - kv 35: tokenizer.ggml.eos_token_id u32 = 1 llama_model_loader: - kv 36: tokenizer.ggml.padding_token_id u32 = 1 llama_model_loader: - kv 37: tokenizer.ggml.add_bos_token bool = true llama_model_loader: - kv 38: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 39: tokenizer.chat_template str = {% if not add_generation_prompt is de... 
llama_model_loader: - kv 40: general.quantization_version u32 = 2 llama_model_loader: - kv 41: general.file_type u32 = 15 llama_model_loader: - type f32: 361 tensors llama_model_loader: - type q4_K: 606 tensors llama_model_loader: - type q6_K: 58 tensors llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect llm_load_vocab: special tokens cache size = 818 llm_load_vocab: token to piece cache size = 0.8223 MB llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = deepseek2 llm_load_print_meta: vocab type = BPE llm_load_print_meta: n_vocab = 129280 llm_load_print_meta: n_merges = 127741 llm_load_print_meta: vocab_only = 0 llm_load_print_meta: n_ctx_train = 163840 llm_load_print_meta: n_embd = 7168 llm_load_print_meta: n_layer = 61 llm_load_print_meta: n_head = 128 llm_load_print_meta: n_head_kv = 128 llm_load_print_meta: n_rot = 64 llm_load_print_meta: n_swa = 0 llm_load_print_meta: n_embd_head_k = 192 llm_load_print_meta: n_embd_head_v = 128 llm_load_print_meta: n_gqa = 1 llm_load_print_meta: n_embd_k_gqa = 24576 llm_load_print_meta: n_embd_v_gqa = 16384 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-06 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: f_logit_scale = 0.0e+00 llm_load_print_meta: n_ff = 18432 llm_load_print_meta: n_expert = 256 llm_load_print_meta: n_expert_used = 8 llm_load_print_meta: causal attn = 1 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 0 llm_load_print_meta: rope scaling = yarn llm_load_print_meta: freq_base_train = 10000.0 llm_load_print_meta: freq_scale_train = 0.025 llm_load_print_meta: n_ctx_orig_yarn = 4096 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: ssm_d_conv = 0 llm_load_print_meta: ssm_d_inner = 0 llm_load_print_meta: ssm_d_state = 0 llm_load_print_meta: ssm_dt_rank = 0 llm_load_print_meta: ssm_dt_b_c_rms = 0 
llm_load_print_meta: model type = 671B llm_load_print_meta: model ftype = Q4_K - Medium llm_load_print_meta: model params = 671.03 B llm_load_print_meta: model size = 376.65 GiB (4.82 BPW) llm_load_print_meta: general.name = n/a llm_load_print_meta: BOS token = 0 '<|begin▁of▁sentence|>' llm_load_print_meta: EOS token = 1 '<|end▁of▁sentence|>' llm_load_print_meta: EOT token = 1 '<|end▁of▁sentence|>' llm_load_print_meta: PAD token = 1 '<|end▁of▁sentence|>' llm_load_print_meta: LF token = 131 'Ä' llm_load_print_meta: FIM PRE token = 128801 '<|fim▁begin|>' llm_load_print_meta: FIM SUF token = 128800 '<|fim▁hole|>' llm_load_print_meta: FIM MID token = 128802 '<|fim▁end|>' llm_load_print_meta: EOG token = 1 '<|end▁of▁sentence|>' llm_load_print_meta: max token length = 256 llm_load_print_meta: n_layer_dense_lead = 3 llm_load_print_meta: n_lora_q = 1536 llm_load_print_meta: n_lora_kv = 512 llm_load_print_meta: n_ff_exp = 2048 llm_load_print_meta: n_expert_shared = 1 llm_load_print_meta: expert_weights_scale = 2.5 llm_load_print_meta: expert_weights_norm = 1 llm_load_print_meta: expert_gating_func = sigmoid llm_load_print_meta: rope_yarn_log_mul = 0.1000 time=2025-01-30T17:08:33.330Z level=ERROR source=sched.go:455 msg="error loading llama server" error="timed out waiting for llama runner to start - progress 0.00 - " [GIN] 2025/01/30 - 17:08:33 | 500 | 5m29s | 127.0.0.1 | POST "/api/generate"

@rick-github commented on GitHub (Jan 30, 2025):

Full log, from the start - that's where it shows the environment variables. And attach as a file, pasting it into the input field makes it unreadable.

@philippstoboy commented on GitHub (Jan 30, 2025):

This is the whole log file since the start of the container:
output.log (https://github.com/user-attachments/files/18606546/output.log)

@rick-github commented on GitHub (Jan 30, 2025):

OLLAMA_LOAD_TIMEOUT:5m0s

However you are setting this, it is incorrect. How do you configure the docker container?

@philippstoboy commented on GitHub (Jan 30, 2025):

I just did it with export inside the docker container. Since it showed up when using printenv, I assumed the variable was set.

@rick-github commented on GitHub (Jan 30, 2025):

So you started the container:

$ docker run --rm -it -d -v ollama-data:/root/.ollama --name ollama ollama/ollama

And then set the variable by running export in the container:

$ docker exec -it ollama bash
# export OLLAMA_LOAD_TIMEOUT=120000000
# printenv | grep OLLAMA_LOAD_TIMEOUT
OLLAMA_LOAD_TIMEOUT=120000000
# exit
$

Is that right?

@philippstoboy commented on GitHub (Jan 30, 2025):

Yes

@rick-github commented on GitHub (Jan 30, 2025):

docker run --rm -it -d -v ollama-data:/root/.ollama --env OLLAMA_LOAD_TIMEOUT=90m --name ollama ollama/ollama
@philippstoboy commented on GitHub (Jan 30, 2025):

And how do I confirm if it worked?

@rick-github commented on GitHub (Jan 30, 2025):

Check the server logs.
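
(Editor's aside: assuming the container is named ollama as in the commands above, the startup log records the effective settings — the thread's later log excerpts show entries like OLLAMA_LOAD_TIMEOUT:333h20m0s — so the value can be checked with something like:)

```shell
# Look for the OLLAMA_LOAD_TIMEOUT entry the server prints at startup
docker logs ollama 2>&1 | grep -o 'OLLAMA_LOAD_TIMEOUT:[^ "]*'
```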

@philippstoboy commented on GitHub (Jan 30, 2025):

Okay, so I ran the model again and it stopped after 10 minutes now. I set the variable to 1200000 or something, so I have no idea why it ended after 10 minutes.

@rick-github commented on GitHub (Jan 30, 2025):

Logs.

@philippstoboy commented on GitHub (Jan 30, 2025):

output (2).log (https://github.com/user-attachments/files/18608972/output.2.log)

@rick-github commented on GitHub (Jan 30, 2025):

OLLAMA_LOAD_TIMEOUT:333h20m0s

Load timeout is now set correctly.

time=2025-01-30T18:11:26.868Z level=INFO source=server.go:594 msg="llama runner started in 954.24 seconds"

Model loaded in 15 minutes and 54 seconds.

[GIN] 2025/01/30 - 18:11:26 | 200 |        16m19s |       127.0.0.1 | POST     "/api/generate"

The generation took an additional 25 seconds.

This all looks fine.

If the client ended after 10 minutes, the client has a ten minute timeout.
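
(Editor's note: the conversions in this comment check out — 1200000 seconds, which matches the user's "1200000 or something", is exactly 333h20m0s, and the 954.24-second runner start is 15 minutes 54 seconds.)

```shell
# OLLAMA_LOAD_TIMEOUT=1200000 (seconds) → hours/minutes/seconds
secs=1200000
printf '%dh%dm%ds\n' $((secs / 3600)) $((secs % 3600 / 60)) $((secs % 60))
# → 333h20m0s

# runner start time: 954.24 s → minutes and seconds
printf '%dm%ds\n' $((954 / 60)) $((954 % 60))
# → 15m54s
```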

@philippstoboy commented on GitHub (Jan 30, 2025):

Oh, that's it. I just skimmed over the log and didn't pick it up. Thank you so much for your help @rick-github 👍


Reference: github-starred/ollama#67694