[GH-ISSUE #10601] I often encounter timeouts or 500 errors. I ran a load test and measured a 98% success rate. Is this normal? #32733

Closed
opened 2026-04-22 14:34:20 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @cexopgy on GitHub (May 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10601

What is the issue?

![Image](https://github.com/user-attachments/assets/c46cfdf3-9e93-48b2-b473-c3d343348db4)

Test duration: 17 hours
Total number of requests: 6002
Successful requests (status code 200): 5893
Failed requests (timeouts): 109
Success rate: 98.18%
Average response time for successful requests: approximately 8.76 seconds
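The reported figures are internally consistent; a minimal sketch that recomputes them (the helper name is illustrative, not part of any test harness mentioned here):

```python
# Recompute the headline numbers from this issue's load test.
# Inputs are the raw counts reported above; nothing here touches Ollama.
def summarize(total_requests, successes, avg_success_seconds):
    failures = total_requests - successes
    success_rate = successes / total_requests * 100
    return failures, success_rate, avg_success_seconds

failures, rate, avg = summarize(6002, 5893, 8.76)
print(f"failures: {failures}, success rate: {rate:.2f}%, avg latency: {avg:.2f}s")
# failures: 109, success rate: 98.18%
```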

The 500 errors are likely caused by the network connection being dropped once a request exceeds 60 seconds.
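If the 60-second cutoff is a client-side limit, raising the read timeout in the test client would rule it out. A minimal sketch against the standard Ollama `/api/generate` endpoint (the 300 s value and helper names are illustrative assumptions, not a confirmed fix):

```python
import json
import urllib.request

OLLAMA_HOST = "http://127.0.0.1:11434"  # default Ollama listen address

def build_request(prompt, model="qwen2.5:14b"):
    # Non-streaming request body for /api/generate (standard Ollama API).
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        f"{OLLAMA_HOST}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )

def generate(prompt, timeout=300):
    # Read timeout set well above the longest expected generation,
    # so slow responses are not cut off at a 60 s client default.
    with urllib.request.urlopen(build_request(prompt), timeout=timeout) as resp:
        return json.loads(resp.read())["response"]
```

If failures disappear with the longer timeout, the 500s/timeouts are a client or proxy limit rather than an Ollama error.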

![Image](https://github.com/user-attachments/assets/50af867b-9e85-44a9-8aed-9a22ce45c526)

Environment: Ubuntu 22.04, NVIDIA RTX 3090, model: qwen2.5:14b. Or could this be down to the model's performance rather than to Ollama?

With model qwen3:14b the success rate was only 79.59%. I'm not sure that figure is reliable, because the 0.6.8 release notes suggest compatibility with that model is not very good yet.

Relevant log output

5月 06 16:23:42 AIBOX ollama[4624]: time=2025-05-06T16:23:42.526+08:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[23.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="10.8 GiB" memory.required.partial="10.8 GiB" memory.required.kv="1.5 GiB" memory.required.allocations="[10.8 GiB]" memory.weights.total="8.0 GiB" memory.weights.repeating="7.4 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="676.0 MiB" memory.graph.partial="916.1 MiB"
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: loaded meta data with 34 key-value pairs and 579 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 (version GGUF V3 (latest))
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv   0:                       general.architecture str              = qwen2
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv   1:                               general.type str              = model
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv   2:                               general.name str              = Qwen2.5 14B Instruct
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv   3:                           general.finetune str              = Instruct
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv   4:                           general.basename str              = Qwen2.5
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv   5:                         general.size_label str              = 14B
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv   6:                            general.license str              = apache-2.0
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv   7:                       general.license.link str              = https://huggingface.co/Qwen/Qwen2.5-1...
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv   8:                   general.base_model.count u32              = 1
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv   9:                  general.base_model.0.name str              = Qwen2.5 14B
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  10:          general.base_model.0.organization str              = Qwen
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/Qwen/Qwen2.5-14B
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  12:                               general.tags arr[str,2]       = ["chat", "text-generation"]
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  13:                          general.languages arr[str,1]       = ["en"]
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  14:                          qwen2.block_count u32              = 48
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  15:                       qwen2.context_length u32              = 32768
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  16:                     qwen2.embedding_length u32              = 5120
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  17:                  qwen2.feed_forward_length u32              = 13824
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  18:                 qwen2.attention.head_count u32              = 40
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  19:              qwen2.attention.head_count_kv u32              = 8
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  20:                       qwen2.rope.freq_base f32              = 1000000.000000
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  21:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  22:                          general.file_type u32              = 15
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = qwen2
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  28:                tokenizer.ggml.eos_token_id u32              = 151645
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  29:            tokenizer.ggml.padding_token_id u32              = 151643
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  30:                tokenizer.ggml.bos_token_id u32              = 151643
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  31:               tokenizer.ggml.add_bos_token bool             = false
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  32:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  33:               general.quantization_version u32              = 2
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - type  f32:  241 tensors
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - type q4_K:  289 tensors
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - type q6_K:   49 tensors
5月 06 16:23:42 AIBOX ollama[4624]: print_info: file format = GGUF V3 (latest)
5月 06 16:23:42 AIBOX ollama[4624]: print_info: file type   = Q4_K - Medium
5月 06 16:23:42 AIBOX ollama[4624]: print_info: file size   = 8.37 GiB (4.87 BPW)
5月 06 16:23:42 AIBOX ollama[4624]: load: special tokens cache size = 22
5月 06 16:23:42 AIBOX ollama[4624]: load: token to piece cache size = 0.9310 MB
5月 06 16:23:42 AIBOX ollama[4624]: print_info: arch             = qwen2
5月 06 16:23:42 AIBOX ollama[4624]: print_info: vocab_only       = 1
5月 06 16:23:42 AIBOX ollama[4624]: print_info: model type       = ?B
5月 06 16:23:42 AIBOX ollama[4624]: print_info: model params     = 14.77 B
5月 06 16:23:42 AIBOX ollama[4624]: print_info: general.name     = Qwen2.5 14B Instruct
5月 06 16:23:42 AIBOX ollama[4624]: print_info: vocab type       = BPE
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_vocab          = 152064
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_merges         = 151387
5月 06 16:23:42 AIBOX ollama[4624]: print_info: BOS token        = 151643 '<|endoftext|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOS token        = 151645 '<|im_end|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOT token        = 151645 '<|im_end|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: PAD token        = 151643 '<|endoftext|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: LF token         = 198 'Ċ'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: FIM MID token    = 151660 '<|fim_middle|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: FIM PAD token    = 151662 '<|fim_pad|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: FIM REP token    = 151663 '<|repo_name|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: FIM SEP token    = 151664 '<|file_sep|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOG token        = 151643 '<|endoftext|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOG token        = 151645 '<|im_end|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOG token        = 151662 '<|fim_pad|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOG token        = 151663 '<|repo_name|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOG token        = 151664 '<|file_sep|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: max token length = 256
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_load: vocab only - skipping tensors
5月 06 16:23:42 AIBOX ollama[4624]: time=2025-05-06T16:23:42.648+08:00 level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 --ctx-size 8192 --batch-size 512 --n-gpu-layers 49 --threads 6 --parallel 4 --port 45661"
5月 06 16:23:42 AIBOX ollama[4624]: time=2025-05-06T16:23:42.648+08:00 level=INFO source=sched.go:451 msg="loaded runners" count=1
5月 06 16:23:42 AIBOX ollama[4624]: time=2025-05-06T16:23:42.648+08:00 level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
5月 06 16:23:42 AIBOX ollama[4624]: time=2025-05-06T16:23:42.648+08:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
5月 06 16:23:42 AIBOX ollama[4624]: time=2025-05-06T16:23:42.655+08:00 level=INFO source=runner.go:853 msg="starting go runner"
5月 06 16:23:42 AIBOX ollama[4624]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
5月 06 16:23:42 AIBOX ollama[4624]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
5月 06 16:23:42 AIBOX ollama[4624]: ggml_cuda_init: found 1 CUDA devices:
5月 06 16:23:42 AIBOX ollama[4624]:   Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
5月 06 16:23:42 AIBOX ollama[4624]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
5月 06 16:23:42 AIBOX ollama[4624]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-alderlake.so
5月 06 16:23:42 AIBOX ollama[4624]: time=2025-05-06T16:23:42.697+08:00 level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
5月 06 16:23:42 AIBOX ollama[4624]: time=2025-05-06T16:23:42.697+08:00 level=INFO source=runner.go:913 msg="Server listening on 127.0.0.1:45661"
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23888 MiB free
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: loaded meta data with 34 key-value pairs and 579 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 (version GGUF V3 (latest))
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv   0:                       general.architecture str              = qwen2
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv   1:                               general.type str              = model
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv   2:                               general.name str              = Qwen2.5 14B Instruct
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv   3:                           general.finetune str              = Instruct
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv   4:                           general.basename str              = Qwen2.5
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv   5:                         general.size_label str              = 14B
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv   6:                            general.license str              = apache-2.0
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv   7:                       general.license.link str              = https://huggingface.co/Qwen/Qwen2.5-1...
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv   8:                   general.base_model.count u32              = 1
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv   9:                  general.base_model.0.name str              = Qwen2.5 14B
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  10:          general.base_model.0.organization str              = Qwen
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/Qwen/Qwen2.5-14B
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  12:                               general.tags arr[str,2]       = ["chat", "text-generation"]
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  13:                          general.languages arr[str,1]       = ["en"]
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  14:                          qwen2.block_count u32              = 48
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  15:                       qwen2.context_length u32              = 32768
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  16:                     qwen2.embedding_length u32              = 5120
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  17:                  qwen2.feed_forward_length u32              = 13824
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  18:                 qwen2.attention.head_count u32              = 40
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  19:              qwen2.attention.head_count_kv u32              = 8
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  20:                       qwen2.rope.freq_base f32              = 1000000.000000
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  21:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  22:                          general.file_type u32              = 15
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = qwen2
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  28:                tokenizer.ggml.eos_token_id u32              = 151645
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  29:            tokenizer.ggml.padding_token_id u32              = 151643
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  30:                tokenizer.ggml.bos_token_id u32              = 151643
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  31:               tokenizer.ggml.add_bos_token bool             = false
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  32:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv  33:               general.quantization_version u32              = 2
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - type  f32:  241 tensors
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - type q4_K:  289 tensors
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - type q6_K:   49 tensors
5月 06 16:23:42 AIBOX ollama[4624]: print_info: file format = GGUF V3 (latest)
5月 06 16:23:42 AIBOX ollama[4624]: print_info: file type   = Q4_K - Medium
5月 06 16:23:42 AIBOX ollama[4624]: print_info: file size   = 8.37 GiB (4.87 BPW)
5月 06 16:23:42 AIBOX ollama[4624]: load: special tokens cache size = 22
5月 06 16:23:42 AIBOX ollama[4624]: load: token to piece cache size = 0.9310 MB
5月 06 16:23:42 AIBOX ollama[4624]: print_info: arch             = qwen2
5月 06 16:23:42 AIBOX ollama[4624]: print_info: vocab_only       = 0
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_ctx_train      = 32768
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_embd           = 5120
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_layer          = 48
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_head           = 40
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_head_kv        = 8
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_rot            = 128
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_swa            = 0
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_swa_pattern    = 1
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_embd_head_k    = 128
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_embd_head_v    = 128
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_gqa            = 5
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_embd_k_gqa     = 1024
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_embd_v_gqa     = 1024
5月 06 16:23:42 AIBOX ollama[4624]: print_info: f_norm_eps       = 0.0e+00
5月 06 16:23:42 AIBOX ollama[4624]: print_info: f_norm_rms_eps   = 1.0e-06
5月 06 16:23:42 AIBOX ollama[4624]: print_info: f_clamp_kqv      = 0.0e+00
5月 06 16:23:42 AIBOX ollama[4624]: print_info: f_max_alibi_bias = 0.0e+00
5月 06 16:23:42 AIBOX ollama[4624]: print_info: f_logit_scale    = 0.0e+00
5月 06 16:23:42 AIBOX ollama[4624]: print_info: f_attn_scale     = 0.0e+00
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_ff             = 13824
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_expert         = 0
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_expert_used    = 0
5月 06 16:23:42 AIBOX ollama[4624]: print_info: causal attn      = 1
5月 06 16:23:42 AIBOX ollama[4624]: print_info: pooling type     = 0
5月 06 16:23:42 AIBOX ollama[4624]: print_info: rope type        = 2
5月 06 16:23:42 AIBOX ollama[4624]: print_info: rope scaling     = linear
5月 06 16:23:42 AIBOX ollama[4624]: print_info: freq_base_train  = 1000000.0
5月 06 16:23:42 AIBOX ollama[4624]: print_info: freq_scale_train = 1
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_ctx_orig_yarn  = 32768
5月 06 16:23:42 AIBOX ollama[4624]: print_info: rope_finetuned   = unknown
5月 06 16:23:42 AIBOX ollama[4624]: print_info: ssm_d_conv       = 0
5月 06 16:23:42 AIBOX ollama[4624]: print_info: ssm_d_inner      = 0
5月 06 16:23:42 AIBOX ollama[4624]: print_info: ssm_d_state      = 0
5月 06 16:23:42 AIBOX ollama[4624]: print_info: ssm_dt_rank      = 0
5月 06 16:23:42 AIBOX ollama[4624]: print_info: ssm_dt_b_c_rms   = 0
5月 06 16:23:42 AIBOX ollama[4624]: print_info: model type       = 14B
5月 06 16:23:42 AIBOX ollama[4624]: print_info: model params     = 14.77 B
5月 06 16:23:42 AIBOX ollama[4624]: print_info: general.name     = Qwen2.5 14B Instruct
5月 06 16:23:42 AIBOX ollama[4624]: print_info: vocab type       = BPE
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_vocab          = 152064
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_merges         = 151387
5月 06 16:23:42 AIBOX ollama[4624]: print_info: BOS token        = 151643 '<|endoftext|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOS token        = 151645 '<|im_end|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOT token        = 151645 '<|im_end|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: PAD token        = 151643 '<|endoftext|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: LF token         = 198 'Ċ'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: FIM MID token    = 151660 '<|fim_middle|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: FIM PAD token    = 151662 '<|fim_pad|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: FIM REP token    = 151663 '<|repo_name|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: FIM SEP token    = 151664 '<|file_sep|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOG token        = 151643 '<|endoftext|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOG token        = 151645 '<|im_end|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOG token        = 151662 '<|fim_pad|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOG token        = 151663 '<|repo_name|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOG token        = 151664 '<|file_sep|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: max token length = 256
5月 06 16:23:42 AIBOX ollama[4624]: load_tensors: loading model tensors, this can take a while... (mmap = true)
5月 06 16:23:42 AIBOX ollama[4624]: time=2025-05-06T16:23:42.900+08:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"
5月 06 16:23:43 AIBOX ollama[4624]: load_tensors: offloading 48 repeating layers to GPU
5月 06 16:23:43 AIBOX ollama[4624]: load_tensors: offloading output layer to GPU
5月 06 16:23:43 AIBOX ollama[4624]: load_tensors: offloaded 49/49 layers to GPU
5月 06 16:23:43 AIBOX ollama[4624]: load_tensors:        CUDA0 model buffer size =  8148.38 MiB
5月 06 16:23:43 AIBOX ollama[4624]: load_tensors:   CPU_Mapped model buffer size =   417.66 MiB
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: constructing llama_context
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: n_seq_max     = 4
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: n_ctx         = 8192
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: n_ctx_per_seq = 2048
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: n_batch       = 2048
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: n_ubatch      = 512
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: causal_attn   = 1
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: flash_attn    = 0
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: freq_base     = 1000000.0
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: freq_scale    = 1
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: n_ctx_per_seq (2048) < n_ctx_train (32768) -- the full capacity of the model will not be utilized
5月 06 16:23:44 AIBOX ollama[4624]: llama_context:  CUDA_Host  output buffer size =     2.40 MiB
5月 06 16:23:44 AIBOX ollama[4624]: init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 48, can_shift = 1
5月 06 16:23:44 AIBOX ollama[4624]: init:      CUDA0 KV buffer size =  1536.00 MiB
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: KV self size  = 1536.00 MiB, K (f16):  768.00 MiB, V (f16):  768.00 MiB
5月 06 16:23:44 AIBOX ollama[4624]: llama_context:      CUDA0 compute buffer size =   696.00 MiB
5月 06 16:23:44 AIBOX ollama[4624]: llama_context:  CUDA_Host compute buffer size =    26.01 MiB
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: graph nodes  = 1782
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: graph splits = 2
5月 06 16:23:44 AIBOX ollama[4624]: time=2025-05-06T16:23:44.154+08:00 level=INFO source=server.go:619 msg="llama runner started in 1.51 seconds"
5月 06 16:24:04 AIBOX ollama[4624]: [GIN] 2025/05/06 - 16:24:04 | 200 | 22.548920737s |    192.168.1.41 | POST     "/api/generate"
5月 06 16:24:04 AIBOX ollama[4624]: time=2025-05-06T16:24:04.828+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
5月 06 16:24:11 AIBOX ollama[4624]: [GIN] 2025/05/06 - 16:24:11 | 200 |  6.803113064s |    192.168.1.41 | POST     "/api/generate"
5月 06 16:24:11 AIBOX ollama[4624]: time=2025-05-06T16:24:11.670+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
5月 06 16:24:20 AIBOX ollama[4624]: [GIN] 2025/05/06 - 16:24:20 | 200 |  8.362648926s |    192.168.1.41 | POST     "/api/generate"
5月 06 16:24:20 AIBOX ollama[4624]: time=2025-05-06T16:24:20.089+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
5月 06 16:24:25 AIBOX ollama[4624]: [GIN] 2025/05/06 - 16:24:25 | 200 |  5.671454319s |    192.168.1.41 | POST     "/api/generate"
5月 06 16:24:25 AIBOX ollama[4624]: time=2025-05-06T16:24:25.793+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
5月 06 16:24:28 AIBOX ollama[4624]: time=2025-05-06T16:24:28.330+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
5月 06 16:24:28 AIBOX ollama[4624]: [GIN] 2025/05/06 - 16:24:28 | 200 |    9.457882ms |       127.0.0.1 | POST     "/api/generate"
5月 06 16:24:32 AIBOX ollama[4624]: [GIN] 2025/05/06 - 16:24:32 | 200 |  6.640145617s |    192.168.1.41 | POST     "/api/generate"
5月 06 16:24:32 AIBOX ollama[4624]: time=2025-05-06T16:24:32.659+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
5月 06 16:24:39 AIBOX ollama[4624]: [GIN] 2025/05/06 - 16:24:39 | 200 |  7.084765217s |    192.168.1.41 | POST     "/api/generate"
5月 06 16:24:39 AIBOX ollama[4624]: time=2025-05-06T16:24:39.759+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32

OS

Linux

GPU

Nvidia

CPU

No response

Ollama version

0.6.6

Originally created by @cexopgy on GitHub (May 7, 2025). Original GitHub issue: https://github.com/ollama/ollama/issues/10601 ### What is the issue? ![Image](https://github.com/user-attachments/assets/c46cfdf3-9e93-48b2-b473-c3d343348db4) Test duration: 17 hours Total number of requests: 6002 Successful requests (status code 200): 5893 Failed requests (timeouts): 109 Success rate: 98.18% Average response time for successful requests: approximately 8.76 seconds The 500 errors are likely due to the network connection being interrupted after exceeding 60 seconds. ![Image](https://github.com/user-attachments/assets/50af867b-9e85-44a9-8aed-9a22ce45c526) Ubuntu 22.04 N:3090 model:qwen2.5:14b Or perhaps this is related to the model's performance and not related to Ollama? >>>model:qwen3:14b Success rate: 79.59% I'm not sure if this data is reliable, because I saw in the release notes for version 0.6.8 that the compatibility is not very good now. ### Relevant log output ```shell 5月 06 16:23:42 AIBOX ollama[4624]: time=2025-05-06T16:23:42.526+08:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[23.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="10.8 GiB" memory.required.partial="10.8 GiB" memory.required.kv="1.5 GiB" memory.required.allocations="[10.8 GiB]" memory.weights.total="8.0 GiB" memory.weights.repeating="7.4 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="676.0 MiB" memory.graph.partial="916.1 MiB" 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: loaded meta data with 34 key-value pairs and 579 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 (version GGUF V3 (latest)) 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 0: general.architecture str = qwen2 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 1: general.type str = model 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 2: general.name str = Qwen2.5 14B Instruct 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 3: general.finetune str = Instruct 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 4: general.basename str = Qwen2.5 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 5: general.size_label str = 14B 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 6: general.license str = apache-2.0 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 7: general.license.link str = https://huggingface.co/Qwen/Qwen2.5-1... 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 8: general.base_model.count u32 = 1 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 14B 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-14B 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 12: general.tags arr[str,2] = ["chat", "text-generation"] 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"] 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 14: qwen2.block_count u32 = 48 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 15: qwen2.context_length u32 = 32768 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 16: qwen2.embedding_length u32 = 5120 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 13824 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 40 5月 06 16:23:42 AIBOX ollama[4624]: 
llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 8 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 1000000.000000 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 22: general.file_type u32 = 15 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 24: tokenizer.ggml.pre str = qwen2 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ... 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",... 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 151645 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 151643 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 30: tokenizer.ggml.bos_token_id u32 = 151643 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = false 5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>... 
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 33: general.quantization_version u32 = 2
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - type f32: 241 tensors
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - type q4_K: 289 tensors
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - type q6_K: 49 tensors
5月 06 16:23:42 AIBOX ollama[4624]: print_info: file format = GGUF V3 (latest)
5月 06 16:23:42 AIBOX ollama[4624]: print_info: file type = Q4_K - Medium
5月 06 16:23:42 AIBOX ollama[4624]: print_info: file size = 8.37 GiB (4.87 BPW)
5月 06 16:23:42 AIBOX ollama[4624]: load: special tokens cache size = 22
5月 06 16:23:42 AIBOX ollama[4624]: load: token to piece cache size = 0.9310 MB
5月 06 16:23:42 AIBOX ollama[4624]: print_info: arch = qwen2
5月 06 16:23:42 AIBOX ollama[4624]: print_info: vocab_only = 1
5月 06 16:23:42 AIBOX ollama[4624]: print_info: model type = ?B
5月 06 16:23:42 AIBOX ollama[4624]: print_info: model params = 14.77 B
5月 06 16:23:42 AIBOX ollama[4624]: print_info: general.name = Qwen2.5 14B Instruct
5月 06 16:23:42 AIBOX ollama[4624]: print_info: vocab type = BPE
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_vocab = 152064
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_merges = 151387
5月 06 16:23:42 AIBOX ollama[4624]: print_info: BOS token = 151643 '<|endoftext|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOS token = 151645 '<|im_end|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOT token = 151645 '<|im_end|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: PAD token = 151643 '<|endoftext|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: LF token = 198 'Ċ'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: FIM PRE token = 151659 '<|fim_prefix|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: FIM SUF token = 151661 '<|fim_suffix|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: FIM MID token = 151660 '<|fim_middle|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: FIM PAD token = 151662 '<|fim_pad|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: FIM REP token = 151663 '<|repo_name|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: FIM SEP token = 151664 '<|file_sep|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOG token = 151643 '<|endoftext|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOG token = 151645 '<|im_end|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOG token = 151662 '<|fim_pad|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOG token = 151663 '<|repo_name|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOG token = 151664 '<|file_sep|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: max token length = 256
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_load: vocab only - skipping tensors
5月 06 16:23:42 AIBOX ollama[4624]: time=2025-05-06T16:23:42.648+08:00 level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 --ctx-size 8192 --batch-size 512 --n-gpu-layers 49 --threads 6 --parallel 4 --port 45661"
5月 06 16:23:42 AIBOX ollama[4624]: time=2025-05-06T16:23:42.648+08:00 level=INFO source=sched.go:451 msg="loaded runners" count=1
5月 06 16:23:42 AIBOX ollama[4624]: time=2025-05-06T16:23:42.648+08:00 level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
5月 06 16:23:42 AIBOX ollama[4624]: time=2025-05-06T16:23:42.648+08:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
5月 06 16:23:42 AIBOX ollama[4624]: time=2025-05-06T16:23:42.655+08:00 level=INFO source=runner.go:853 msg="starting go runner"
5月 06 16:23:42 AIBOX ollama[4624]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
5月 06 16:23:42 AIBOX ollama[4624]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
5月 06 16:23:42 AIBOX ollama[4624]: ggml_cuda_init: found 1 CUDA devices:
5月 06 16:23:42 AIBOX ollama[4624]:   Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
5月 06 16:23:42 AIBOX ollama[4624]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
5月 06 16:23:42 AIBOX ollama[4624]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-alderlake.so
5月 06 16:23:42 AIBOX ollama[4624]: time=2025-05-06T16:23:42.697+08:00 level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
5月 06 16:23:42 AIBOX ollama[4624]: time=2025-05-06T16:23:42.697+08:00 level=INFO source=runner.go:913 msg="Server listening on 127.0.0.1:45661"
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23888 MiB free
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: loaded meta data with 34 key-value pairs and 579 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 (version GGUF V3 (latest))
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 0: general.architecture str = qwen2
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 1: general.type str = model
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 2: general.name str = Qwen2.5 14B Instruct
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 3: general.finetune str = Instruct
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 4: general.basename str = Qwen2.5
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 5: general.size_label str = 14B
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 6: general.license str = apache-2.0
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 7: general.license.link str = https://huggingface.co/Qwen/Qwen2.5-1...
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 8: general.base_model.count u32 = 1
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 14B
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-14B
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 12: general.tags arr[str,2] = ["chat", "text-generation"]
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"]
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 14: qwen2.block_count u32 = 48
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 15: qwen2.context_length u32 = 32768
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 16: qwen2.embedding_length u32 = 5120
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 13824
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 40
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 8
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 1000000.000000
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 22: general.file_type u32 = 15
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 24: tokenizer.ggml.pre str = qwen2
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 151645
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 151643
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 30: tokenizer.ggml.bos_token_id u32 = 151643
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = false
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n    {{- '<|im_start|>...
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - kv 33: general.quantization_version u32 = 2
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - type f32: 241 tensors
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - type q4_K: 289 tensors
5月 06 16:23:42 AIBOX ollama[4624]: llama_model_loader: - type q6_K: 49 tensors
5月 06 16:23:42 AIBOX ollama[4624]: print_info: file format = GGUF V3 (latest)
5月 06 16:23:42 AIBOX ollama[4624]: print_info: file type = Q4_K - Medium
5月 06 16:23:42 AIBOX ollama[4624]: print_info: file size = 8.37 GiB (4.87 BPW)
5月 06 16:23:42 AIBOX ollama[4624]: load: special tokens cache size = 22
5月 06 16:23:42 AIBOX ollama[4624]: load: token to piece cache size = 0.9310 MB
5月 06 16:23:42 AIBOX ollama[4624]: print_info: arch = qwen2
5月 06 16:23:42 AIBOX ollama[4624]: print_info: vocab_only = 0
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_ctx_train = 32768
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_embd = 5120
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_layer = 48
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_head = 40
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_head_kv = 8
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_rot = 128
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_swa = 0
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_swa_pattern = 1
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_embd_head_k = 128
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_embd_head_v = 128
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_gqa = 5
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_embd_k_gqa = 1024
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_embd_v_gqa = 1024
5月 06 16:23:42 AIBOX ollama[4624]: print_info: f_norm_eps = 0.0e+00
5月 06 16:23:42 AIBOX ollama[4624]: print_info: f_norm_rms_eps = 1.0e-06
5月 06 16:23:42 AIBOX ollama[4624]: print_info: f_clamp_kqv = 0.0e+00
5月 06 16:23:42 AIBOX ollama[4624]: print_info: f_max_alibi_bias = 0.0e+00
5月 06 16:23:42 AIBOX ollama[4624]: print_info: f_logit_scale = 0.0e+00
5月 06 16:23:42 AIBOX ollama[4624]: print_info: f_attn_scale = 0.0e+00
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_ff = 13824
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_expert = 0
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_expert_used = 0
5月 06 16:23:42 AIBOX ollama[4624]: print_info: causal attn = 1
5月 06 16:23:42 AIBOX ollama[4624]: print_info: pooling type = 0
5月 06 16:23:42 AIBOX ollama[4624]: print_info: rope type = 2
5月 06 16:23:42 AIBOX ollama[4624]: print_info: rope scaling = linear
5月 06 16:23:42 AIBOX ollama[4624]: print_info: freq_base_train = 1000000.0
5月 06 16:23:42 AIBOX ollama[4624]: print_info: freq_scale_train = 1
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_ctx_orig_yarn = 32768
5月 06 16:23:42 AIBOX ollama[4624]: print_info: rope_finetuned = unknown
5月 06 16:23:42 AIBOX ollama[4624]: print_info: ssm_d_conv = 0
5月 06 16:23:42 AIBOX ollama[4624]: print_info: ssm_d_inner = 0
5月 06 16:23:42 AIBOX ollama[4624]: print_info: ssm_d_state = 0
5月 06 16:23:42 AIBOX ollama[4624]: print_info: ssm_dt_rank = 0
5月 06 16:23:42 AIBOX ollama[4624]: print_info: ssm_dt_b_c_rms = 0
5月 06 16:23:42 AIBOX ollama[4624]: print_info: model type = 14B
5月 06 16:23:42 AIBOX ollama[4624]: print_info: model params = 14.77 B
5月 06 16:23:42 AIBOX ollama[4624]: print_info: general.name = Qwen2.5 14B Instruct
5月 06 16:23:42 AIBOX ollama[4624]: print_info: vocab type = BPE
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_vocab = 152064
5月 06 16:23:42 AIBOX ollama[4624]: print_info: n_merges = 151387
5月 06 16:23:42 AIBOX ollama[4624]: print_info: BOS token = 151643 '<|endoftext|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOS token = 151645 '<|im_end|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOT token = 151645 '<|im_end|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: PAD token = 151643 '<|endoftext|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: LF token = 198 'Ċ'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: FIM PRE token = 151659 '<|fim_prefix|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: FIM SUF token = 151661 '<|fim_suffix|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: FIM MID token = 151660 '<|fim_middle|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: FIM PAD token = 151662 '<|fim_pad|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: FIM REP token = 151663 '<|repo_name|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: FIM SEP token = 151664 '<|file_sep|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOG token = 151643 '<|endoftext|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOG token = 151645 '<|im_end|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOG token = 151662 '<|fim_pad|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOG token = 151663 '<|repo_name|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: EOG token = 151664 '<|file_sep|>'
5月 06 16:23:42 AIBOX ollama[4624]: print_info: max token length = 256
5月 06 16:23:42 AIBOX ollama[4624]: load_tensors: loading model tensors, this can take a while... (mmap = true)
5月 06 16:23:42 AIBOX ollama[4624]: time=2025-05-06T16:23:42.900+08:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"
5月 06 16:23:43 AIBOX ollama[4624]: load_tensors: offloading 48 repeating layers to GPU
5月 06 16:23:43 AIBOX ollama[4624]: load_tensors: offloading output layer to GPU
5月 06 16:23:43 AIBOX ollama[4624]: load_tensors: offloaded 49/49 layers to GPU
5月 06 16:23:43 AIBOX ollama[4624]: load_tensors: CUDA0 model buffer size = 8148.38 MiB
5月 06 16:23:43 AIBOX ollama[4624]: load_tensors: CPU_Mapped model buffer size = 417.66 MiB
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: constructing llama_context
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: n_seq_max = 4
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: n_ctx = 8192
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: n_ctx_per_seq = 2048
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: n_batch = 2048
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: n_ubatch = 512
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: causal_attn = 1
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: flash_attn = 0
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: freq_base = 1000000.0
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: freq_scale = 1
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: n_ctx_per_seq (2048) < n_ctx_train (32768) -- the full capacity of the model will not be utilized
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: CUDA_Host output buffer size = 2.40 MiB
5月 06 16:23:44 AIBOX ollama[4624]: init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 48, can_shift = 1
5月 06 16:23:44 AIBOX ollama[4624]: init: CUDA0 KV buffer size = 1536.00 MiB
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: KV self size = 1536.00 MiB, K (f16): 768.00 MiB, V (f16): 768.00 MiB
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: CUDA0 compute buffer size = 696.00 MiB
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: CUDA_Host compute buffer size = 26.01 MiB
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: graph nodes = 1782
5月 06 16:23:44 AIBOX ollama[4624]: llama_context: graph splits = 2
5月 06 16:23:44 AIBOX ollama[4624]: time=2025-05-06T16:23:44.154+08:00 level=INFO source=server.go:619 msg="llama runner started in 1.51 seconds"
5月 06 16:24:04 AIBOX ollama[4624]: [GIN] 2025/05/06 - 16:24:04 | 200 | 22.548920737s | 192.168.1.41 | POST "/api/generate"
5月 06 16:24:04 AIBOX ollama[4624]: time=2025-05-06T16:24:04.828+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
5月 06 16:24:11 AIBOX ollama[4624]: [GIN] 2025/05/06 - 16:24:11 | 200 | 6.803113064s | 192.168.1.41 | POST "/api/generate"
5月 06 16:24:11 AIBOX ollama[4624]: time=2025-05-06T16:24:11.670+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
5月 06 16:24:20 AIBOX ollama[4624]: [GIN] 2025/05/06 - 16:24:20 | 200 | 8.362648926s | 192.168.1.41 | POST "/api/generate"
5月 06 16:24:20 AIBOX ollama[4624]: time=2025-05-06T16:24:20.089+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
5月 06 16:24:25 AIBOX ollama[4624]: [GIN] 2025/05/06 - 16:24:25 | 200 | 5.671454319s | 192.168.1.41 | POST "/api/generate"
5月 06 16:24:25 AIBOX ollama[4624]: time=2025-05-06T16:24:25.793+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
5月 06 16:24:28 AIBOX ollama[4624]: time=2025-05-06T16:24:28.330+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
5月 06 16:24:28 AIBOX ollama[4624]: [GIN] 2025/05/06 - 16:24:28 | 200 | 9.457882ms | 127.0.0.1 | POST "/api/generate"
5月 06 16:24:32 AIBOX ollama[4624]: [GIN] 2025/05/06 - 16:24:32 | 200 | 6.640145617s | 192.168.1.41 | POST "/api/generate"
5月 06 16:24:32 AIBOX ollama[4624]: time=2025-05-06T16:24:32.659+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
5月 06 16:24:39 AIBOX ollama[4624]: [GIN] 2025/05/06 - 16:24:39 | 200 | 7.084765217s | 192.168.1.41 | POST "/api/generate"
5月 06 16:24:39 AIBOX ollama[4624]: time=2025-05-06T16:24:39.759+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
```

### OS

Linux

### GPU

Nvidia

### CPU

_No response_

### Ollama version

0.6.6
GiteaMirror added the bug label 2026-04-22 14:34:20 -05:00
@rick-github commented on GitHub (May 7, 2025):

LLMs are probabilistic token generators, and sometimes those probabilities lead the model down a path where it just starts generating random tokens. The client can recover from this by setting a timeout on the connection, or by setting `num_predict` in the API call to limit the number of tokens generated.

However, a 2% failure rate is high. Can you provide some information on the type of queries that cause this behaviour? Adding `OLLAMA_DEBUG=1` to the server environment will add more information to the log that may be useful.
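As a minimal sketch of both client-side mitigations (the endpoint and option names follow the Ollama REST API; the URL, model name, and helper functions below are illustrative assumptions for this setup):

```python
import json
import urllib.request

# Assumed for this sketch: Ollama on the default local port, and the
# qwen2.5:14b model from the report. Adjust for your environment.
OLLAMA_URL = "http://127.0.0.1:11434/api/generate"
MODEL = "qwen2.5:14b"

def build_request(prompt, num_predict=512):
    """Build a /api/generate request whose output is capped at num_predict tokens."""
    payload = {
        "model": MODEL,
        "prompt": prompt,
        "stream": False,
        # num_predict bounds generation length, so a runaway generation
        # cannot hold the connection open indefinitely.
        "options": {"num_predict": num_predict},
    }
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def generate(prompt, num_predict=512, timeout_s=120.0):
    """Send the request with a client-side timeout instead of waiting forever."""
    req = build_request(prompt, num_predict)
    # timeout= makes a hung request raise an error the client can retry on,
    # rather than blocking until some proxy drops the connection at ~60s.
    with urllib.request.urlopen(req, timeout=timeout_s) as resp:
        return json.loads(resp.read())["response"]
```

With both limits in place, a stuck or runaway generation surfaces as a catchable exception on the client instead of an opaque timeout.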

Reference: github-starred/ollama#32733