[GH-ISSUE #7408] ollama keeps reloading the same model (Qwen2.5-70B) #51222

Closed
opened 2026-04-28 18:56:49 -05:00 by GiteaMirror · 5 comments

Originally created by @TianWuYuJiangHenShou on GitHub (Oct 29, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7408

What is the issue?

```
time=2024-10-29T03:23:31.503Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server not responding"
llama_new_context_with_model: n_ctx = 16384
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 5120.00 MiB
llama_new_context_with_model: KV self size = 5120.00 MiB, K (f16): 2560.00 MiB, V (f16): 2560.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.45 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 2144.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 48.01 MiB
llama_new_context_with_model: graph nodes = 2806
llama_new_context_with_model: graph splits = 2
time=2024-10-29T03:23:31.754Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
INFO [main] model loaded | tid="140624792489984" timestamp=1730172211
time=2024-10-29T03:23:32.006Z level=INFO source=server.go:626 msg="llama runner started in 8.13 seconds"
[GIN] 2024/10/29 - 03:23:51 | 200 | 29.221685658s | 10.102.227.89 | POST "/api/chat"
[GIN] 2024/10/29 - 03:23:58 | 200 | 6.120409627s | 10.102.227.89 | POST "/api/chat"
[GIN] 2024/10/29 - 03:24:05 | 200 | 6.69483427s | 10.102.227.89 | POST "/api/chat"
[GIN] 2024/10/29 - 03:24:16 | 200 | 9.664337429s | 10.102.227.89 | POST "/api/chat"
[GIN] 2024/10/29 - 03:24:23 | 200 | 6.192004337s | 10.102.227.89 | POST "/api/chat"
time=2024-10-29T03:24:25.096Z level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/home/models/ollama_models/blobs/sha256-0938027085a7434a9f3126b85230a2b8bda65b72ff03c50124dda89641271ecb gpu=GPU-e48a5d05-78b4-d21e-8fb9-6e26a4c0edb9 parallel=4 available=84544258048 required="52.6 GiB"
time=2024-10-29T03:24:25.363Z level=INFO source=server.go:105 msg="system memory" total="503.5 GiB" free="491.2 GiB" free_swap="8.0 GiB"
time=2024-10-29T03:24:25.364Z level=INFO source=memory.go:326 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split="" memory.available="[78.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="52.6 GiB" memory.required.partial="52.6 GiB" memory.required.kv="9.8 GiB" memory.required.allocations="[52.6 GiB]" memory.weights.total="46.6 GiB" memory.weights.repeating="45.6 GiB" memory.weights.nonrepeating="974.6 MiB" memory.graph.full="4.0 GiB" memory.graph.partial="5.0 GiB"
time=2024-10-29T03:24:25.366Z level=INFO source=server.go:388 msg="starting llama server" cmd="/tmp/ollama3695149922/runners/cuda_v12/ollama_llama_server --model /home/models/ollama_models/blobs/sha256-0938027085a7434a9f3126b85230a2b8bda65b72ff03c50124dda89641271ecb --ctx-size 32096 --batch-size 512 --embedding --n-gpu-layers 81 --threads 32 --parallel 4 --port 35985"
time=2024-10-29T03:24:25.366Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-10-29T03:24:25.366Z level=INFO source=server.go:587 msg="waiting for llama runner to start responding"
time=2024-10-29T03:24:25.367Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
INFO [main] starting c++ runner | tid="140636503351296" timestamp=1730172265
INFO [main] build info | build=10 commit="3a8c75e" tid="140636503351296" timestamp=1730172265
INFO [main] system info | n_threads=32 n_threads_batch=32 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="140636503351296" timestamp=1730172265 total_threads=128
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="127" port="35985" tid="140636503351296" timestamp=1730172265
llama_model_loader: loaded meta data with 35 key-value pairs and 963 tensors from /home/models/ollama_models/blobs/sha256-0938027085a7434a9f3126b85230a2b8bda65b72ff03c50124dda89641271ecb (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen2.5 72B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Qwen2.5
llama_model_loader: - kv 5: general.size_label str = 72B
llama_model_loader: - kv 6: general.license str = other
llama_model_loader: - kv 7: general.license.name str = qwen
llama_model_loader: - kv 8: general.license.link str = https://huggingface.co/Qwen/Qwen2.5-7...
llama_model_loader: - kv 9: general.base_model.count u32 = 1
llama_model_loader: - kv 10: general.base_model.0.name str = Qwen2.5 72B
llama_model_loader: - kv 11: general.base_model.0.organization str = Qwen
llama_model_loader: - kv 12: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-72B
llama_model_loader: - kv 13: general.tags arr[str,2] = ["chat", "text-generation"]
llama_model_loader: - kv 14: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 15: qwen2.block_count u32 = 80
llama_model_loader: - kv 16: qwen2.context_length u32 = 32768
llama_model_loader: - kv 17: qwen2.embedding_length u32 = 8192
llama_model_loader: - kv 18: qwen2.feed_forward_length u32 = 29568
llama_model_loader: - kv 19: qwen2.attention.head_count u32 = 64
llama_model_loader: - kv 20: qwen2.attention.head_count_kv u32 = 8
llama_model_loader: - kv 21: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 22: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 23: general.file_type u32 = 2
llama_model_loader: - kv 24: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 25: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 26: tokenizer.ggml.tokens arr[str,152064] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 27: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 28: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 29: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 30: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 31: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 32: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 33: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 34: general.quantization_version u32 = 2
llama_model_loader: - type f32: 401 tensors
llama_model_loader: - type q4_0: 561 tensors
llama_model_loader: - type q6_K: 1 tensors
time=2024-10-29T03:24:25.618Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 152064
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 29568
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 72.71 B
llm_load_print_meta: model size = 38.39 GiB (4.54 BPW)
llm_load_print_meta: general.name = Qwen2.5 72B Instruct
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151645 '<|im_end|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
llm_load_print_meta: EOG token = 151643 '<|endoftext|>'
llm_load_print_meta: EOG token = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA A100 80GB PCIe, compute capability 8.0, VMM: yes
llm_load_tensors: ggml ctx size = 0.85 MiB
time=2024-10-29T03:24:27.075Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server not responding"
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU buffer size = 668.25 MiB
llm_load_tensors: CUDA0 buffer size = 38647.70 MiB
time=2024-10-29T03:24:27.778Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
^C^Ctime=2024-10-29T03:24:29.283Z level=WARN source=server.go:594 msg="client connection closed before server finished loading, aborting load"
time=2024-10-29T03:24:29.283Z level=ERROR source=sched.go:455 msg="error loading llama server" error="timed out waiting for llama runner to start: context canceled"
[GIN] 2024/10/29 - 03:24:29 | 499 | 5.239788128s | 10.102.227.89 | POST "/api/chat"
```

I have set the parameter OLLAMA_KEEP_ALIVE=-1, but it doesn't work.
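Two details in the log are worth noting. The first load used n_ctx = 16384, while the second was started with --ctx-size 32096, which suggests a client requested a different num_ctx; changing runner options such as num_ctx forces Ollama to reload the model regardless of keep-alive. Also, a keep_alive value sent in the request body overrides the server-wide OLLAMA_KEEP_ALIVE setting, so a client that sends its own finite keep_alive can still cause unloads. A minimal sketch of pinning it per request, assuming a local instance and the third-party requests package (the model tag is illustrative, not taken from the log):

```python
# Sketch: pin keep_alive in the request body itself, so a client-side
# default cannot override the server-wide OLLAMA_KEEP_ALIVE=-1.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "qwen2.5:72b",      # illustrative tag, not from the log
        "messages": [{"role": "user", "content": "hello"}],
        "keep_alive": -1,            # keep the model resident indefinitely
        "stream": False,
    },
    timeout=600,
)
print(resp.json()["message"]["content"])
```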

OS

Linux, Docker

GPU

Nvidia

CPU

Intel

Ollama version

ollama version=0.3.14

GiteaMirror added the bug label 2026-04-28 18:56:49 -05:00

@rick-github commented on GitHub (Oct 29, 2024):

```
time=2024-10-29T03:24:25.366Z level=INFO source=server.go:587 msg="waiting for llama runner to start responding"
^C^C
time=2024-10-29T03:24:29.283Z level=WARN source=server.go:594 msg="client connection closed before server finished loading, aborting load"
```

It looks like you only waited 5 seconds between loading the model and then cancelling. It will take longer than that to load a 56GB model.
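If the cancellation comes from a client-side timeout rather than a deliberate Ctrl-C, one way to avoid aborting a long load is to warm the model up with an empty request and start real traffic only after it returns. A sketch under the same assumptions as above:

```python
# Sketch: an /api/generate request with no prompt just loads the model
# and returns once it is resident, so real requests never race the load.
import requests

requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "qwen2.5:72b", "keep_alive": -1},
    timeout=1800,  # generous; a ~50 GiB load can take minutes
).raise_for_status()
print("model loaded")
```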


@TianWuYuJiangHenShou commented on GitHub (Oct 29, 2024):

> ```
> time=2024-10-29T03:24:25.366Z level=INFO source=server.go:587 msg="waiting for llama runner to start responding"
> ^C^C
> time=2024-10-29T03:24:29.283Z level=WARN source=server.go:594 msg="client connection closed before server finished loading, aborting load"
> ```
>
> It looks like you only waited 5 seconds between loading the model and then cancelling. It will take longer than that to load a 56GB model.

After testing, I find that the model is reloaded on the next request after stopping the previous batch of requests (time.sleep(5)).


@pdevine commented on GitHub (Oct 29, 2024):

@TianWuYuJiangHenShou you'll need to wait for the model to fully load before canceling, otherwise it will start over again with every request until it loads. This is working as intended, so I'm going to go ahead and close the issue.


@TianWuYuJiangHenShou commented on GitHub (Oct 30, 2024):

@pdevine I have successfully loaded the model and the inference service is functioning properly. But after a period of time without accessing the Ollama service (5 seconds or longer), the model still has to be reloaded on the next request.
I have set keep_alive=-1, which in theory should keep the model loaded. Why does this happen?
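One way to narrow this down is to check residency directly across the idle gap: GET /api/ps lists the currently loaded models along with their expiry. A sketch, under the same host assumption as above:

```python
# Sketch: verify whether the model survives an idle period.
import time
import requests

time.sleep(5)  # the idle gap described above
models = requests.get("http://localhost:11434/api/ps").json().get("models", [])
for m in models:
    print(m["name"], "expires_at:", m.get("expires_at"))
if not models:
    print("no models loaded -- the runner was unloaded during the gap")
```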


@rick-github commented on GitHub (Oct 30, 2024):

The log above doesn't show this behaviour. Please add full logs.


Reference: github-starred/ollama#51222