[GH-ISSUE #6601] When I try to visit https://xxxxxxxx.com/api/chat, it is very slow #29918

Closed
opened 2026-04-22 09:16:07 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @lessuit on GitHub (Sep 3, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6601

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Here are the Docker logs:

time=2024-09-03T06:16:48.144Z level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-f6ac28d6f58ae1522734d1df834e6166e0813bb1919e86aafb4c0551eb4ce2bb gpu=GPU-1a66ac9e-e1b3-db2b-4e52-f54d2f373819 parallel=4 available=84100120576 required="42.2 GiB"
time=2024-09-03T06:16:48.150Z level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split="" memory.available="[78.3 GiB]" memory.required.full="42.2 GiB" memory.required.partial="42.2 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[42.2 GiB]" memory.weights.total="39.3 GiB" memory.weights.repeating="38.3 GiB" memory.weights.nonrepeating="974.6 MiB" memory.graph.full="1.0 GiB" memory.graph.partial="1.3 GiB"
time=2024-09-03T06:16:48.163Z level=INFO source=server.go:391 msg="starting llama server" cmd="/tmp/ollama3339020570/runners/cuda_v12/ollama_llama_server --model /root/.ollama/models/blobs/sha256-f6ac28d6f58ae1522734d1df834e6166e0813bb1919e86aafb4c0551eb4ce2bb --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 81 --parallel 4 --port 46231"
time=2024-09-03T06:16:48.164Z level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2024-09-03T06:16:48.164Z level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2024-09-03T06:16:48.164Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="1e6f655" tid="139860331409408" timestamp=1725344208
INFO [main] system info | n_threads=48 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="139860331409408" timestamp=1725344208 total_threads=96
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="95" port="46231" tid="139860331409408" timestamp=1725344208
llama_model_loader: loaded meta data with 21 key-value pairs and 963 tensors from /root/.ollama/models/blobs/sha256-f6ac28d6f58ae1522734d1df834e6166e0813bb1919e86aafb4c0551eb4ce2bb (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.name str = Qwen2-72B-Instruct
llama_model_loader: - kv 2: qwen2.block_count u32 = 80
llama_model_loader: - kv 3: qwen2.context_length u32 = 32768
llama_model_loader: - kv 4: qwen2.embedding_length u32 = 8192
llama_model_loader: - kv 5: qwen2.feed_forward_length u32 = 29568
llama_model_loader: - kv 6: qwen2.attention.head_count u32 = 64
llama_model_loader: - kv 7: qwen2.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 9: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 12: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 17: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 19: tokenizer.chat_template str = {% for message in messages %}{% if lo...
llama_model_loader: - kv 20: general.quantization_version u32 = 2
llama_model_loader: - type f32: 401 tensors
llama_model_loader: - type q4_0: 561 tensors
llama_model_loader: - type q6_K: 1 tensors
time=2024-09-03T06:16:48.416Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 421
llm_load_vocab: token to piece cache size = 0.9352 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 152064
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 29568
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 72.71 B
llm_load_print_meta: model size = 38.39 GiB (4.54 BPW)
llm_load_print_meta: general.name = Qwen2-72B-Instruct
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151645 '<|im_end|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA A100-SXM4-80GB, compute capability 8.0, VMM: yes
llm_load_tensors: ggml ctx size = 0.85 MiB
time=2024-09-03T06:16:49.873Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU buffer size = 668.25 MiB
llm_load_tensors: CUDA0 buffer size = 38647.70 MiB
time=2024-09-03T06:16:51.930Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
time=2024-09-03T06:16:57.155Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 2560.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.45 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 1104.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 32.01 MiB
llama_new_context_with_model: graph nodes = 2806
llama_new_context_with_model: graph splits = 2
time=2024-09-03T06:16:58.760Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
INFO [main] model loaded | tid="139860331409408" timestamp=1725344218
time=2024-09-03T06:16:59.013Z level=INFO source=server.go:630 msg="llama runner started in 10.85 seconds"
[GIN] 2024/09/03 - 06:17:08 | 200 | 20.303390852s | 113.57.107.43 | POST "/api/chat"
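
For context when reading the log above: the runner started in 10.85 seconds, and the POST /api/chat request completed in 20.3 seconds, so roughly half of the observed latency on this request is the one-time model load. A minimal sketch for separating load time from generation time, assuming curl and jq on the client, the default port 11434, and a model tag of qwen2:72b (inferred from the logs, so treat it as a placeholder) — Ollama's non-streamed response reports total_duration, load_duration, and eval_duration in nanoseconds:

```
# Hypothetical sketch: break a slow /api/chat request down into load time
# vs. generation time using the timing fields in Ollama's final response.
curl -s http://localhost:11434/api/chat -d '{
  "model": "qwen2:72b",
  "messages": [{"role": "user", "content": "hello"}],
  "stream": false
}' | jq '{
  total_s:     (.total_duration / 1e9),
  load_s:      (.load_duration / 1e9),
  eval_tokens: .eval_count,
  eval_s:      (.eval_duration / 1e9)
}'
```

If load_s dominates total_s, the slowness is model loading rather than token generation; a low eval_tokens / eval_s ratio would point at generation speed instead.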

OS

Linux, Docker

GPU

Nvidia

CPU

Intel

Ollama version

0.3.8 and 0.3.9

GiteaMirror added the needs more info label 2026-04-22 09:16:07 -05:00
Author
Owner

@rick-github commented on GitHub (Sep 3, 2024):

10 seconds to load a 40G model doesn't seem unreasonable. Another user found that a large model was slow to load (https://github.com/ollama/ollama/issues/6425#issuecomment-2316002395) and successfully reduced load time with the following command. Not sure it will help in your case, but it can't hurt to try.

echo 0 > /proc/sys/kernel/numa_balancing
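
For reference, a sketch of checking and persisting that setting, assuming a host with sysctl; in a Docker setup the change has to be made on the host kernel, not inside the container:

```
# Check whether automatic NUMA balancing is currently enabled (1 = on).
cat /proc/sys/kernel/numa_balancing

# Disable it for the running kernel...
sudo sysctl -w kernel.numa_balancing=0

# ...and persist it across reboots (file name is just a convention;
# adjust for your distribution).
echo 'kernel.numa_balancing=0' | sudo tee /etc/sysctl.d/99-numa-balancing.conf
```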
Author
Owner

@dhiltgen commented on GitHub (Sep 3, 2024):

10s does seem like a reasonable load time for a large model.

Can you elaborate on what "very slow" means? Are you just referring to latency, or is the token rate particularly low? You can try using the CLI with --verbose to get more timing information, including load duration (e.g. ollama run --verbose llama3.1:70b hello).
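
One more avenue worth checking, hedged since the issue doesn't say how often requests arrive: Ollama unloads an idle model after a few minutes by default, so a request that lands after the model has been evicted pays the ~10s load again. A sketch of keeping the model resident, using the keep_alive request field (the model tag is again a placeholder inferred from the logs):

```
# Ask the server to keep this model loaded after the request completes.
# A duration string like "30m" also works; -1 means never unload.
curl http://localhost:11434/api/chat -d '{
  "model": "qwen2:72b",
  "messages": [{"role": "user", "content": "hello"}],
  "keep_alive": -1
}'

# Or set a server-wide default when starting the container:
#   docker run -e OLLAMA_KEEP_ALIVE=24h ... ollama/ollama
```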

Author
Owner

@lessuit commented on GitHub (Sep 4, 2024):

> 10 seconds to load a 40G model doesn't seem unreasonable. Another user found that a large model was slow to load (#6425 (comment)) and successfully reduced load time with the following command. Not sure it will help in your case, but it can't hurt to try.
>
> echo 0 > /proc/sys/kernel/numa_balancing

Okay, I get it. Thanks for your reply; I will close the issue.

Reference: github-starred/ollama#29918