[GH-ISSUE #11306] ollama 0.9.5 - lower temp params cause extremely long inference times - gemma3:27b-it-qat #69517

Closed
opened 2026-05-04 18:18:05 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @2jfs904judsw20600jikn613d0dookl23jsig on GitHub (Jul 5, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11306

What is the issue?

I'm having significant issues on 0.9.5, specifically when running gemma3 models.

Running identical inputs with a lower temperature causes inference time to go from 22 s to timing out after more than 3 minutes at 100% GPU utilization. The input's tokens fit well within the context window.

This issue only started with 0.9.5; there were no problems on 0.9.3.

Windows 11, NVIDIA RTX 4090, Intel 285K

No other programs are running that consume any material VRAM.
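For reference, a minimal sketch of how the requests are made, against the default local Ollama HTTP endpoint. The prompt is a placeholder, and the helper name and timeout are mine, not from my actual client; the only thing that changes between the fast and slow runs is `options.temperature`:

```python
import time
import requests  # plain HTTP against the standard Ollama REST API

OLLAMA_URL = "http://localhost:11434/api/generate"  # default endpoint
PROMPT = "..."  # identical input for both runs (placeholder)

def timed_generate(temperature: float) -> float:
    """Send one non-streaming generate request and return wall-clock seconds."""
    start = time.time()
    requests.post(
        OLLAMA_URL,
        json={
            "model": "gemma3:27b-it-qat",
            "prompt": PROMPT,
            "stream": False,
            "options": {"temperature": temperature},
        },
        timeout=600,
    )
    return time.time() - start

# Same input, only the temperature differs:
print("temp=1.0:", timed_generate(1.0), "s")  # ~22 s
print("temp=0.1:", timed_generate(0.1), "s")  # hangs >3 min on 0.9.5
```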

Relevant log output

llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Dolphin 3.0 Llama 3.1 8B
llama_model_loader: - kv   3:                       general.organization str              = Cognitivecomputations
llama_model_loader: - kv   4:                           general.basename str              = Dolphin-3.0-Llama-3.1
llama_model_loader: - kv   5:                         general.size_label str              = 8B
llama_model_loader: - kv   6:                            general.license str              = llama3.1
llama_model_loader: - kv   7:                   general.base_model.count u32              = 1
llama_model_loader: - kv   8:                  general.base_model.0.name str              = Llama 3.1 8B
llama_model_loader: - kv   9:          general.base_model.0.organization str              = Meta Llama
llama_model_loader: - kv  10:              general.base_model.0.repo_url str              = https://huggingface.co/meta-llama/Lla...
llama_model_loader: - kv  11:                      general.dataset.count u32              = 13
llama_model_loader: - kv  12:                     general.dataset.0.name str              = Opc Sft Stage1
llama_model_loader: - kv  13:             general.dataset.0.organization str              = OpenCoder LLM
llama_model_loader: - kv  14:                 general.dataset.0.repo_url str              = https://huggingface.co/OpenCoder-LLM/...
llama_model_loader: - kv  15:                     general.dataset.1.name str              = Opc Sft Stage2
llama_model_loader: - kv  16:             general.dataset.1.organization str              = OpenCoder LLM
llama_model_loader: - kv  17:                 general.dataset.1.repo_url str              = https://huggingface.co/OpenCoder-LLM/...
llama_model_loader: - kv  18:                     general.dataset.2.name str              = Orca Agentinstruct 1M v1
llama_model_loader: - kv  19:                  general.dataset.2.version str              = v1
llama_model_loader: - kv  20:             general.dataset.2.organization str              = Microsoft
llama_model_loader: - kv  21:                 general.dataset.2.repo_url str              = https://huggingface.co/microsoft/orca...
llama_model_loader: - kv  22:                     general.dataset.3.name str              = Orca Math Word Problems 200k
llama_model_loader: - kv  23:             general.dataset.3.organization str              = Microsoft
llama_model_loader: - kv  24:                 general.dataset.3.repo_url str              = https://huggingface.co/microsoft/orca...
llama_model_loader: - kv  25:                     general.dataset.4.name str              = Hermes Function Calling v1
llama_model_loader: - kv  26:                  general.dataset.4.version str              = v1
llama_model_loader: - kv  27:             general.dataset.4.organization str              = NousResearch
llama_model_loader: - kv  28:                 general.dataset.4.repo_url str              = https://huggingface.co/NousResearch/h...
llama_model_loader: - kv  29:                     general.dataset.5.name str              = NuminaMath CoT
llama_model_loader: - kv  30:             general.dataset.5.organization str              = AI MO
llama_model_loader: - kv  31:                 general.dataset.5.repo_url str              = https://huggingface.co/AI-MO/NuminaMa...
llama_model_loader: - kv  32:                     general.dataset.6.name str              = NuminaMath TIR
llama_model_loader: - kv  33:             general.dataset.6.organization str              = AI MO
llama_model_loader: - kv  34:                 general.dataset.6.repo_url str              = https://huggingface.co/AI-MO/NuminaMa...
llama_model_loader: - kv  35:                     general.dataset.7.name str              = Tulu 3 Sft Mixture
llama_model_loader: - kv  36:             general.dataset.7.organization str              = Allenai
llama_model_loader: - kv  37:                 general.dataset.7.repo_url str              = https://huggingface.co/allenai/tulu-3...
llama_model_loader: - kv  38:                     general.dataset.8.name str              = Dolphin Coder
llama_model_loader: - kv  39:             general.dataset.8.organization str              = Cognitivecomputations
llama_model_loader: - kv  40:                 general.dataset.8.repo_url str              = https://huggingface.co/cognitivecompu...
llama_model_loader: - kv  41:                     general.dataset.9.name str              = Smoltalk
llama_model_loader: - kv  42:             general.dataset.9.organization str              = HuggingFaceTB
llama_model_loader: - kv  43:                 general.dataset.9.repo_url str              = https://huggingface.co/HuggingFaceTB/...
llama_model_loader: - kv  44:                    general.dataset.10.name str              = Samantha Data
llama_model_loader: - kv  45:            general.dataset.10.organization str              = Cognitivecomputations
llama_model_loader: - kv  46:                general.dataset.10.repo_url str              = https://huggingface.co/cognitivecompu...
llama_model_loader: - kv  47:                    general.dataset.11.name str              = CodeFeedback Filtered Instruction
llama_model_loader: - kv  48:            general.dataset.11.organization str              = M A P
llama_model_loader: - kv  49:                general.dataset.11.repo_url str              = https://huggingface.co/m-a-p/CodeFeed...
llama_model_loader: - kv  50:                    general.dataset.12.name str              = Code Feedback
llama_model_loader: - kv  51:            general.dataset.12.organization str              = M A P
llama_model_loader: - kv  52:                general.dataset.12.repo_url str              = https://huggingface.co/m-a-p/Code-Fee...
llama_model_loader: - kv  53:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  54:                          llama.block_count u32              = 32
llama_model_loader: - kv  55:                       llama.context_length u32              = 131072
llama_model_loader: - kv  56:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  57:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  58:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  59:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  60:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  61:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  62:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  63:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  64:                          general.file_type u32              = 7
llama_model_loader: - kv  65:                           llama.vocab_size u32              = 128258
llama_model_loader: - kv  66:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  67:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  68:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  69:                      tokenizer.ggml.tokens arr[str,128258]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  70:                  tokenizer.ggml.token_type arr[i32,128258]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
time=2025-07-05T10:46:08.913-07:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv  71:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  72:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  73:                tokenizer.ggml.eos_token_id u32              = 128256
llama_model_loader: - kv  74:            tokenizer.ggml.padding_token_id u32              = 128001
llama_model_loader: - kv  75:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  76:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q8_0:  226 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q8_0
print_info: file size   = 7.95 GiB (8.50 BPW) 
load: special tokens cache size = 258
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 4096
print_info: n_layer          = 32
print_info: n_head           = 32
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_swa_pattern    = 1
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 4
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 14336
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 8B
print_info: model params     = 8.03 B
print_info: general.name     = Dolphin 3.0 Llama 3.1 8B
print_info: vocab type       = BPE
print_info: n_vocab          = 128258
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128256 '<|im_end|>'
print_info: EOT token        = 128256 '<|im_end|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: PAD token        = 128001 '<|end_of_text|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: EOG token        = 128256 '<|im_end|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 32 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 33/33 layers to GPU
load_tensors:    CUDA_Host model buffer size =   532.32 MiB
load_tensors:        CUDA0 model buffer size =  7605.34 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 2
llama_context: n_ctx         = 8192
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 1024
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 500000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context:  CUDA_Host  output buffer size =     1.01 MiB
llama_kv_cache_unified: kv_size = 8192, type_k = 'f16', type_v = 'f16', n_layer = 32, can_shift = 1, padding = 32
llama_kv_cache_unified:      CUDA0 KV buffer size =  1024.00 MiB
llama_kv_cache_unified: KV self size  = 1024.00 MiB, K (f16):  512.00 MiB, V (f16):  512.00 MiB
llama_context:      CUDA0 compute buffer size =   560.00 MiB
llama_context:  CUDA_Host compute buffer size =    24.01 MiB
llama_context: graph nodes  = 1094
llama_context: graph splits = 2
time=2025-07-05T10:46:10.665-07:00 level=INFO source=server.go:637 msg="llama runner started in 2.00 seconds"
[GIN] 2025/07/05 - 10:46:11 | 200 |    2.9854799s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/07/05 - 10:46:11 | 200 |    554.7744ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/07/05 - 10:46:13 | 200 |    1.2785369s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/07/05 - 10:46:18 | 200 |    696.6088ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/07/05 - 10:46:31 | 200 |    149.7448ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/07/05 - 10:46:33 | 200 |    797.5607ms |       127.0.0.1 | POST     "/api/generate"
time=2025-07-05T10:46:33.499-07:00 level=INFO source=sched.go:548 msg="updated VRAM based on existing loaded models" gpu=GPU-abd6bb9b-1deb-493f-9b96-28ba55537561 library=cuda total="24.0 GiB" available="12.7 GiB"
time=2025-07-05T10:46:33.844-07:00 level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=C:\Users\admin\.ollama\models\blobs\sha256-ccc0cddac56136ef0969cf2e3e9ac051124c937be42503b47ec570dead85ff87 gpu=GPU-abd6bb9b-1deb-493f-9b96-28ba55537561 parallel=2 available=23825010688 required="20.7 GiB"
time=2025-07-05T10:46:33.861-07:00 level=INFO source=server.go:135 msg="system memory" total="63.3 GiB" free="50.4 GiB" free_swap="52.6 GiB"
time=2025-07-05T10:46:33.862-07:00 level=INFO source=server.go:175 msg=offload library=cuda layers.requested=-1 layers.model=63 layers.offload=63 layers.split="" memory.available="[22.2 GiB]" memory.gpu_overhead="0 B" memory.required.full="20.7 GiB" memory.required.partial="20.7 GiB" memory.required.kv="1.6 GiB" memory.required.allocations="[20.7 GiB]" memory.weights.total="16.0 GiB" memory.weights.repeating="13.4 GiB" memory.weights.nonrepeating="2.6 GiB" memory.graph.full="565.0 MiB" memory.graph.partial="1.6 GiB" projector.weights="806.2 MiB" projector.graph="1.0 GiB"
time=2025-07-05T10:46:33.890-07:00 level=INFO source=server.go:438 msg="starting llama server" cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\admin\\.ollama\\models\\blobs\\sha256-ccc0cddac56136ef0969cf2e3e9ac051124c937be42503b47ec570dead85ff87 --ctx-size 8192 --batch-size 512 --n-gpu-layers 63 --threads 8 --no-mmap --parallel 2 --port 58363"
time=2025-07-05T10:46:33.894-07:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-07-05T10:46:33.894-07:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
time=2025-07-05T10:46:33.894-07:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server error"
time=2025-07-05T10:46:33.927-07:00 level=INFO source=runner.go:925 msg="starting ollama engine"
time=2025-07-05T10:46:33.929-07:00 level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:58363"
time=2025-07-05T10:46:33.946-07:00 level=INFO source=ggml.go:92 msg="" architecture=gemma3 file_type=Q4_0 name="" description="" num_tensors=1247 num_key_values=40
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
load_backend: loaded CUDA backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\ggml-cuda.dll
load_backend: loaded CPU backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
time=2025-07-05T10:46:34.030-07:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-07-05T10:46:34.113-07:00 level=INFO source=ggml.go:362 msg="model weights" buffer=CUDA0 size="16.8 GiB"
time=2025-07-05T10:46:34.113-07:00 level=INFO source=ggml.go:362 msg="model weights" buffer=CPU size="2.6 GiB"
time=2025-07-05T10:46:34.145-07:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
time=2025-07-05T10:46:34.216-07:00 level=INFO source=ggml.go:651 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="1.1 GiB"
time=2025-07-05T10:46:34.216-07:00 level=INFO source=ggml.go:651 msg="compute graph" backend=CPU buffer_type=CPU size="0 B"
time=2025-07-05T10:46:34.516-07:00 level=INFO source=ggml.go:651 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="1.1 GiB"
time=2025-07-05T10:46:34.516-07:00 level=INFO source=ggml.go:651 msg="compute graph" backend=CPU buffer_type=CPU size="10.5 MiB"
time=2025-07-05T10:46:37.650-07:00 level=INFO source=server.go:637 msg="llama runner started in 3.76 seconds"
[GIN] 2025/07/05 - 10:46:39 | 200 |     6.570471s |       127.0.0.1 | POST     "/api/generate"
time=2025-07-05T10:46:40.480-07:00 level=INFO source=sched.go:548 msg="updated VRAM based on existing loaded models" gpu=GPU-abd6bb9b-1deb-493f-9b96-28ba55537561 library=cuda total="24.0 GiB" available="2.1 GiB"
time=2025-07-05T10:46:40.826-07:00 level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=C:\Users\admin\.ollama\models\blobs\sha256-bfcba39d999d20dc12116b3c78c52bfec10adfc5e41303373ce507de49c3293c gpu=GPU-abd6bb9b-1deb-493f-9b96-28ba55537561 parallel=2 available=23825657856 required="9.7 GiB"
time=2025-07-05T10:46:40.835-07:00 level=INFO source=server.go:135 msg="system memory" total="63.3 GiB" free="50.4 GiB" free_swap="52.5 GiB"
time=2025-07-05T10:46:40.835-07:00 level=INFO source=server.go:175 msg=offload library=cuda layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[22.2 GiB]" memory.gpu_overhead="0 B" memory.required.full="9.7 GiB" memory.required.partial="9.7 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[9.7 GiB]" memory.weights.total="7.4 GiB" memory.weights.repeating="6.9 GiB" memory.weights.nonrepeating="532.3 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
llama_model_loader: loaded meta data with 77 key-value pairs and 292 tensors from C:\Users\admin\.ollama\models\blobs\sha256-bfcba39d999d20dc12116b3c78c52bfec10adfc5e41303373ce507de49c3293c (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Dolphin 3.0 Llama 3.1 8B
llama_model_loader: - kv   3:                       general.organization str              = Cognitivecomputations
llama_model_loader: - kv   4:                           general.basename str              = Dolphin-3.0-Llama-3.1
llama_model_loader: - kv   5:                         general.size_label str              = 8B
llama_model_loader: - kv   6:                            general.license str              = llama3.1
llama_model_loader: - kv   7:                   general.base_model.count u32              = 1
llama_model_loader: - kv   8:                  general.base_model.0.name str              = Llama 3.1 8B
llama_model_loader: - kv   9:          general.base_model.0.organization str              = Meta Llama
llama_model_loader: - kv  10:              general.base_model.0.repo_url str              = https://huggingface.co/meta-llama/Lla...
llama_model_loader: - kv  11:                      general.dataset.count u32              = 13
llama_model_loader: - kv  12:                     general.dataset.0.name str              = Opc Sft Stage1
llama_model_loader: - kv  13:             general.dataset.0.organization str              = OpenCoder LLM
llama_model_loader: - kv  14:                 general.dataset.0.repo_url str              = https://huggingface.co/OpenCoder-LLM/...
llama_model_loader: - kv  15:                     general.dataset.1.name str              = Opc Sft Stage2
llama_model_loader: - kv  16:             general.dataset.1.organization str              = OpenCoder LLM
llama_model_loader: - kv  17:                 general.dataset.1.repo_url str              = https://huggingface.co/OpenCoder-LLM/...
llama_model_loader: - kv  18:                     general.dataset.2.name str              = Orca Agentinstruct 1M v1
llama_model_loader: - kv  19:                  general.dataset.2.version str              = v1
llama_model_loader: - kv  20:             general.dataset.2.organization str              = Microsoft
llama_model_loader: - kv  21:                 general.dataset.2.repo_url str              = https://huggingface.co/microsoft/orca...
llama_model_loader: - kv  22:                     general.dataset.3.name str              = Orca Math Word Problems 200k
llama_model_loader: - kv  23:             general.dataset.3.organization str              = Microsoft
llama_model_loader: - kv  24:                 general.dataset.3.repo_url str              = https://huggingface.co/microsoft/orca...
llama_model_loader: - kv  25:                     general.dataset.4.name str              = Hermes Function Calling v1
llama_model_loader: - kv  26:                  general.dataset.4.version str              = v1
llama_model_loader: - kv  27:             general.dataset.4.organization str              = NousResearch
llama_model_loader: - kv  28:                 general.dataset.4.repo_url str              = https://huggingface.co/NousResearch/h...
llama_model_loader: - kv  29:                     general.dataset.5.name str              = NuminaMath CoT
llama_model_loader: - kv  30:             general.dataset.5.organization str              = AI MO
llama_model_loader: - kv  31:                 general.dataset.5.repo_url str              = https://huggingface.co/AI-MO/NuminaMa...
llama_model_loader: - kv  32:                     general.dataset.6.name str              = NuminaMath TIR
llama_model_loader: - kv  33:             general.dataset.6.organization str              = AI MO
llama_model_loader: - kv  34:                 general.dataset.6.repo_url str              = https://huggingface.co/AI-MO/NuminaMa...
llama_model_loader: - kv  35:                     general.dataset.7.name str              = Tulu 3 Sft Mixture
llama_model_loader: - kv  36:             general.dataset.7.organization str              = Allenai
llama_model_loader: - kv  37:                 general.dataset.7.repo_url str              = https://huggingface.co/allenai/tulu-3...
llama_model_loader: - kv  38:                     general.dataset.8.name str              = Dolphin Coder
llama_model_loader: - kv  39:             general.dataset.8.organization str              = Cognitivecomputations
llama_model_loader: - kv  40:                 general.dataset.8.repo_url str              = https://huggingface.co/cognitivecompu...
llama_model_loader: - kv  41:                     general.dataset.9.name str              = Smoltalk
llama_model_loader: - kv  42:             general.dataset.9.organization str              = HuggingFaceTB
llama_model_loader: - kv  43:                 general.dataset.9.repo_url str              = https://huggingface.co/HuggingFaceTB/...
llama_model_loader: - kv  44:                    general.dataset.10.name str              = Samantha Data
llama_model_loader: - kv  45:            general.dataset.10.organization str              = Cognitivecomputations
llama_model_loader: - kv  46:                general.dataset.10.repo_url str              = https://huggingface.co/cognitivecompu...
llama_model_loader: - kv  47:                    general.dataset.11.name str              = CodeFeedback Filtered Instruction
llama_model_loader: - kv  48:            general.dataset.11.organization str              = M A P
llama_model_loader: - kv  49:                general.dataset.11.repo_url str              = https://huggingface.co/m-a-p/CodeFeed...
llama_model_loader: - kv  50:                    general.dataset.12.name str              = Code Feedback
llama_model_loader: - kv  51:            general.dataset.12.organization str              = M A P
llama_model_loader: - kv  52:                general.dataset.12.repo_url str              = https://huggingface.co/m-a-p/Code-Fee...
llama_model_loader: - kv  53:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  54:                          llama.block_count u32              = 32
llama_model_loader: - kv  55:                       llama.context_length u32              = 131072
llama_model_loader: - kv  56:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  57:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  58:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  59:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  60:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  61:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  62:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  63:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  64:                          general.file_type u32              = 7
llama_model_loader: - kv  65:                           llama.vocab_size u32              = 128258
llama_model_loader: - kv  66:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  67:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  68:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  69:                      tokenizer.ggml.tokens arr[str,128258]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  70:                  tokenizer.ggml.token_type arr[i32,128258]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  71:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  72:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  73:                tokenizer.ggml.eos_token_id u32              = 128256
llama_model_loader: - kv  74:            tokenizer.ggml.padding_token_id u32              = 128001
llama_model_loader: - kv  75:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  76:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q8_0:  226 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q8_0
print_info: file size   = 7.95 GiB (8.50 BPW) 
load: special tokens cache size = 258
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 8.03 B
print_info: general.name     = Dolphin 3.0 Llama 3.1 8B
print_info: vocab type       = BPE
print_info: n_vocab          = 128258
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128256 '<|im_end|>'
print_info: EOT token        = 128256 '<|im_end|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: PAD token        = 128001 '<|end_of_text|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: EOG token        = 128256 '<|im_end|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-07-05T10:46:41.029-07:00 level=INFO source=server.go:438 msg="starting llama server" cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model C:\\Users\\admin\\.ollama\\models\\blobs\\sha256-bfcba39d999d20dc12116b3c78c52bfec10adfc5e41303373ce507de49c3293c --ctx-size 8192 --batch-size 512 --n-gpu-layers 33 --threads 8 --no-mmap --parallel 2 --port 58372"
time=2025-07-05T10:46:41.034-07:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-07-05T10:46:41.034-07:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
time=2025-07-05T10:46:41.034-07:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server error"
time=2025-07-05T10:46:41.090-07:00 level=INFO source=runner.go:815 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
load_backend: loaded CUDA backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\ggml-cuda.dll
load_backend: loaded CPU backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
time=2025-07-05T10:46:41.187-07:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-07-05T10:46:41.187-07:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:58372"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4090) - 22994 MiB free
llama_model_loader: loaded meta data with 77 key-value pairs and 292 tensors from C:\Users\admin\.ollama\models\blobs\sha256-bfcba39d999d20dc12116b3c78c52bfec10adfc5e41303373ce507de49c3293c (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Dolphin 3.0 Llama 3.1 8B
llama_model_loader: - kv   3:                       general.organization str              = Cognitivecomputations
llama_model_loader: - kv   4:                           general.basename str              = Dolphin-3.0-Llama-3.1
llama_model_loader: - kv   5:                         general.size_label str              = 8B
llama_model_loader: - kv   6:                            general.license str              = llama3.1
llama_model_loader: - kv   7:                   general.base_model.count u32              = 1
llama_model_loader: - kv   8:                  general.base_model.0.name str              = Llama 3.1 8B
llama_model_loader: - kv   9:          general.base_model.0.organization str              = Meta Llama
llama_model_loader: - kv  10:              general.base_model.0.repo_url str              = https://huggingface.co/meta-llama/Lla...
llama_model_loader: - kv  11:                      general.dataset.count u32              = 13
llama_model_loader: - kv  12:                     general.dataset.0.name str              = Opc Sft Stage1
llama_model_loader: - kv  13:             general.dataset.0.organization str              = OpenCoder LLM
llama_model_loader: - kv  14:                 general.dataset.0.repo_url str              = https://huggingface.co/OpenCoder-LLM/...
llama_model_loader: - kv  15:                     general.dataset.1.name str              = Opc Sft Stage2
llama_model_loader: - kv  16:             general.dataset.1.organization str              = OpenCoder LLM
llama_model_loader: - kv  17:                 general.dataset.1.repo_url str              = https://huggingface.co/OpenCoder-LLM/...
llama_model_loader: - kv  18:                     general.dataset.2.name str              = Orca Agentinstruct 1M v1
llama_model_loader: - kv  19:                  general.dataset.2.version str              = v1
llama_model_loader: - kv  20:             general.dataset.2.organization str              = Microsoft
llama_model_loader: - kv  21:                 general.dataset.2.repo_url str              = https://huggingface.co/microsoft/orca...
llama_model_loader: - kv  22:                     general.dataset.3.name str              = Orca Math Word Problems 200k
llama_model_loader: - kv  23:             general.dataset.3.organization str              = Microsoft
llama_model_loader: - kv  24:                 general.dataset.3.repo_url str              = https://huggingface.co/microsoft/orca...
llama_model_loader: - kv  25:                     general.dataset.4.name str              = Hermes Function Calling v1
llama_model_loader: - kv  26:                  general.dataset.4.version str              = v1
llama_model_loader: - kv  27:             general.dataset.4.organization str              = NousResearch
llama_model_loader: - kv  28:                 general.dataset.4.repo_url str              = https://huggingface.co/NousResearch/h...
llama_model_loader: - kv  29:                     general.dataset.5.name str              = NuminaMath CoT
llama_model_loader: - kv  30:             general.dataset.5.organization str              = AI MO
llama_model_loader: - kv  31:                 general.dataset.5.repo_url str              = https://huggingface.co/AI-MO/NuminaMa...
llama_model_loader: - kv  32:                     general.dataset.6.name str              = NuminaMath TIR
llama_model_loader: - kv  33:             general.dataset.6.organization str              = AI MO
llama_model_loader: - kv  34:                 general.dataset.6.repo_url str              = https://huggingface.co/AI-MO/NuminaMa...
llama_model_loader: - kv  35:                     general.dataset.7.name str              = Tulu 3 Sft Mixture
llama_model_loader: - kv  36:             general.dataset.7.organization str              = Allenai
llama_model_loader: - kv  37:                 general.dataset.7.repo_url str              = https://huggingface.co/allenai/tulu-3...
llama_model_loader: - kv  38:                     general.dataset.8.name str              = Dolphin Coder
llama_model_loader: - kv  39:             general.dataset.8.organization str              = Cognitivecomputations
llama_model_loader: - kv  40:                 general.dataset.8.repo_url str              = https://huggingface.co/cognitivecompu...
llama_model_loader: - kv  41:                     general.dataset.9.name str              = Smoltalk
llama_model_loader: - kv  42:             general.dataset.9.organization str              = HuggingFaceTB
llama_model_loader: - kv  43:                 general.dataset.9.repo_url str              = https://huggingface.co/HuggingFaceTB/...
llama_model_loader: - kv  44:                    general.dataset.10.name str              = Samantha Data
llama_model_loader: - kv  45:            general.dataset.10.organization str              = Cognitivecomputations
llama_model_loader: - kv  46:                general.dataset.10.repo_url str              = https://huggingface.co/cognitivecompu...
llama_model_loader: - kv  47:                    general.dataset.11.name str              = CodeFeedback Filtered Instruction
llama_model_loader: - kv  48:            general.dataset.11.organization str              = M A P
llama_model_loader: - kv  49:                general.dataset.11.repo_url str              = https://huggingface.co/m-a-p/CodeFeed...
llama_model_loader: - kv  50:                    general.dataset.12.name str              = Code Feedback
llama_model_loader: - kv  51:            general.dataset.12.organization str              = M A P
llama_model_loader: - kv  52:                general.dataset.12.repo_url str              = https://huggingface.co/m-a-p/Code-Fee...
llama_model_loader: - kv  53:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  54:                          llama.block_count u32              = 32
llama_model_loader: - kv  55:                       llama.context_length u32              = 131072
llama_model_loader: - kv  56:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  57:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  58:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  59:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  60:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  61:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  62:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  63:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  64:                          general.file_type u32              = 7
llama_model_loader: - kv  65:                           llama.vocab_size u32              = 128258
llama_model_loader: - kv  66:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  67:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  68:                         tokenizer.ggml.pre str              = llama-bpe
time=2025-07-05T10:46:41.285-07:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv  69:                      tokenizer.ggml.tokens arr[str,128258]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  70:                  tokenizer.ggml.token_type arr[i32,128258]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  71:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  72:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  73:                tokenizer.ggml.eos_token_id u32              = 128256
llama_model_loader: - kv  74:            tokenizer.ggml.padding_token_id u32              = 128001
llama_model_loader: - kv  75:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  76:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q8_0:  226 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q8_0
print_info: file size   = 7.95 GiB (8.50 BPW) 
load: special tokens cache size = 258
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 4096
print_info: n_layer          = 32
print_info: n_head           = 32
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_swa_pattern    = 1
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 4
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 14336
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 8B
print_info: model params     = 8.03 B
print_info: general.name     = Dolphin 3.0 Llama 3.1 8B
print_info: vocab type       = BPE
print_info: n_vocab          = 128258
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128256 '<|im_end|>'
print_info: EOT token        = 128256 '<|im_end|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: PAD token        = 128001 '<|end_of_text|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: EOG token        = 128256 '<|im_end|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 32 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 33/33 layers to GPU
load_tensors:    CUDA_Host model buffer size =   532.32 MiB
load_tensors:        CUDA0 model buffer size =  7605.34 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 2
llama_context: n_ctx         = 8192
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 1024
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 500000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context:  CUDA_Host  output buffer size =     1.01 MiB
llama_kv_cache_unified: kv_size = 8192, type_k = 'f16', type_v = 'f16', n_layer = 32, can_shift = 1, padding = 32
llama_kv_cache_unified:      CUDA0 KV buffer size =  1024.00 MiB
llama_kv_cache_unified: KV self size  = 1024.00 MiB, K (f16):  512.00 MiB, V (f16):  512.00 MiB
llama_context:      CUDA0 compute buffer size =   560.00 MiB
llama_context:  CUDA_Host compute buffer size =    24.01 MiB
llama_context: graph nodes  = 1094
llama_context: graph splits = 2
time=2025-07-05T10:46:43.039-07:00 level=INFO source=server.go:637 msg="llama runner started in 2.00 seconds"
[GIN] 2025/07/05 - 10:46:43 | 200 |    2.9860402s |       127.0.0.1 | POST     "/api/generate"
time=2025-07-05T10:46:44.403-07:00 level=INFO source=sched.go:548 msg="updated VRAM based on existing loaded models" gpu=GPU-abd6bb9b-1deb-493f-9b96-28ba55537561 library=cuda total="24.0 GiB" available="12.8 GiB"
time=2025-07-05T10:46:44.744-07:00 level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=C:\Users\admin\.ollama\models\blobs\sha256-ccc0cddac56136ef0969cf2e3e9ac051124c937be42503b47ec570dead85ff87 gpu=GPU-abd6bb9b-1deb-493f-9b96-28ba55537561 parallel=2 available=23850684416 required="20.7 GiB"
time=2025-07-05T10:46:44.762-07:00 level=INFO source=server.go:135 msg="system memory" total="63.3 GiB" free="50.4 GiB" free_swap="52.6 GiB"
time=2025-07-05T10:46:44.763-07:00 level=INFO source=server.go:175 msg=offload library=cuda layers.requested=-1 layers.model=63 layers.offload=63 layers.split="" memory.available="[22.2 GiB]" memory.gpu_overhead="0 B" memory.required.full="20.7 GiB" memory.required.partial="20.7 GiB" memory.required.kv="1.6 GiB" memory.required.allocations="[20.7 GiB]" memory.weights.total="16.0 GiB" memory.weights.repeating="13.4 GiB" memory.weights.nonrepeating="2.6 GiB" memory.graph.full="565.0 MiB" memory.graph.partial="1.6 GiB" projector.weights="806.2 MiB" projector.graph="1.0 GiB"
time=2025-07-05T10:46:44.788-07:00 level=INFO source=server.go:438 msg="starting llama server" cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\admin\\.ollama\\models\\blobs\\sha256-ccc0cddac56136ef0969cf2e3e9ac051124c937be42503b47ec570dead85ff87 --ctx-size 8192 --batch-size 512 --n-gpu-layers 63 --threads 8 --no-mmap --parallel 2 --port 58380"
time=2025-07-05T10:46:44.792-07:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-07-05T10:46:44.792-07:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
time=2025-07-05T10:46:44.792-07:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server error"
time=2025-07-05T10:46:44.830-07:00 level=INFO source=runner.go:925 msg="starting ollama engine"
time=2025-07-05T10:46:44.831-07:00 level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:58380"
time=2025-07-05T10:46:44.856-07:00 level=INFO source=ggml.go:92 msg="" architecture=gemma3 file_type=Q4_0 name="" description="" num_tensors=1247 num_key_values=40
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
load_backend: loaded CUDA backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\ggml-cuda.dll
load_backend: loaded CPU backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
time=2025-07-05T10:46:44.953-07:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-07-05T10:46:45.044-07:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
time=2025-07-05T10:46:45.052-07:00 level=INFO source=ggml.go:362 msg="model weights" buffer=CUDA0 size="16.8 GiB"
time=2025-07-05T10:46:45.052-07:00 level=INFO source=ggml.go:362 msg="model weights" buffer=CPU size="2.6 GiB"
time=2025-07-05T10:46:45.160-07:00 level=INFO source=ggml.go:651 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="1.1 GiB"
time=2025-07-05T10:46:45.160-07:00 level=INFO source=ggml.go:651 msg="compute graph" backend=CPU buffer_type=CPU size="0 B"
time=2025-07-05T10:46:45.464-07:00 level=INFO source=ggml.go:651 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="1.1 GiB"
time=2025-07-05T10:46:45.465-07:00 level=INFO source=ggml.go:651 msg="compute graph" backend=CPU buffer_type=CPU size="10.5 MiB"
time=2025-07-05T10:46:48.549-07:00 level=INFO source=server.go:637 msg="llama runner started in 3.76 seconds"
time=2025-07-05T10:46:48.599-07:00 level=WARN source=runner.go:157 msg="truncating input prompt" limit=4096 prompt=21685 keep=4 new=4096
[GIN] 2025/07/05 - 10:51:23 | 500 |         4m39s |       127.0.0.1 | POST     "/api/generate"
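Note the truncation warning right before the failing request: the prompt (21685 tokens) is cut to the 4096-token per-sequence limit (ctx-size 8192 with parallel=2) and the request then fails with a 500 after 4m39s. A hedged sketch for re-testing the slow case with a larger per-request context window (`num_ctx` is a documented generate option; the value 32768 is just an assumption for testing, not a recommendation):

```python
import requests

# Diagnostic re-run: same slow-case request, but with a larger context
# window so the 21685-token prompt is not truncated to 4096 tokens
# (assumption: the default ctx-size 8192 split across parallel=2 is
# what produced the "truncating input prompt" warning above).
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gemma3:27b-it-qat",
        "prompt": "...",  # same input as before (placeholder)
        "stream": False,
        "options": {"temperature": 0.1, "num_ctx": 32768},
    },
    timeout=600,
)
print(resp.status_code)
```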

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.9.5

memory.weights.total="7.4 GiB" memory.weights.repeating="6.9 GiB" memory.weights.nonrepeating="532.3 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB" llama_model_loader: loaded meta data with 77 key-value pairs and 292 tensors from C:\Users\admin\.ollama\models\blobs\sha256-bfcba39d999d20dc12116b3c78c52bfec10adfc5e41303373ce507de49c3293c (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = Dolphin 3.0 Llama 3.1 8B llama_model_loader: - kv 3: general.organization str = Cognitivecomputations llama_model_loader: - kv 4: general.basename str = Dolphin-3.0-Llama-3.1 llama_model_loader: - kv 5: general.size_label str = 8B llama_model_loader: - kv 6: general.license str = llama3.1 llama_model_loader: - kv 7: general.base_model.count u32 = 1 llama_model_loader: - kv 8: general.base_model.0.name str = Llama 3.1 8B llama_model_loader: - kv 9: general.base_model.0.organization str = Meta Llama llama_model_loader: - kv 10: general.base_model.0.repo_url str = https://huggingface.co/meta-llama/Lla... llama_model_loader: - kv 11: general.dataset.count u32 = 13 llama_model_loader: - kv 12: general.dataset.0.name str = Opc Sft Stage1 llama_model_loader: - kv 13: general.dataset.0.organization str = OpenCoder LLM llama_model_loader: - kv 14: general.dataset.0.repo_url str = https://huggingface.co/OpenCoder-LLM/... llama_model_loader: - kv 15: general.dataset.1.name str = Opc Sft Stage2 llama_model_loader: - kv 16: general.dataset.1.organization str = OpenCoder LLM llama_model_loader: - kv 17: general.dataset.1.repo_url str = https://huggingface.co/OpenCoder-LLM/... llama_model_loader: - kv 18: general.dataset.2.name str = Orca Agentinstruct 1M v1 llama_model_loader: - kv 19: general.dataset.2.version str = v1 llama_model_loader: - kv 20: general.dataset.2.organization str = Microsoft llama_model_loader: - kv 21: general.dataset.2.repo_url str = https://huggingface.co/microsoft/orca... llama_model_loader: - kv 22: general.dataset.3.name str = Orca Math Word Problems 200k llama_model_loader: - kv 23: general.dataset.3.organization str = Microsoft llama_model_loader: - kv 24: general.dataset.3.repo_url str = https://huggingface.co/microsoft/orca... llama_model_loader: - kv 25: general.dataset.4.name str = Hermes Function Calling v1 llama_model_loader: - kv 26: general.dataset.4.version str = v1 llama_model_loader: - kv 27: general.dataset.4.organization str = NousResearch llama_model_loader: - kv 28: general.dataset.4.repo_url str = https://huggingface.co/NousResearch/h... llama_model_loader: - kv 29: general.dataset.5.name str = NuminaMath CoT llama_model_loader: - kv 30: general.dataset.5.organization str = AI MO llama_model_loader: - kv 31: general.dataset.5.repo_url str = https://huggingface.co/AI-MO/NuminaMa... llama_model_loader: - kv 32: general.dataset.6.name str = NuminaMath TIR llama_model_loader: - kv 33: general.dataset.6.organization str = AI MO llama_model_loader: - kv 34: general.dataset.6.repo_url str = https://huggingface.co/AI-MO/NuminaMa... llama_model_loader: - kv 35: general.dataset.7.name str = Tulu 3 Sft Mixture llama_model_loader: - kv 36: general.dataset.7.organization str = Allenai llama_model_loader: - kv 37: general.dataset.7.repo_url str = https://huggingface.co/allenai/tulu-3... 
llama_model_loader: - kv 38: general.dataset.8.name str = Dolphin Coder llama_model_loader: - kv 39: general.dataset.8.organization str = Cognitivecomputations llama_model_loader: - kv 40: general.dataset.8.repo_url str = https://huggingface.co/cognitivecompu... llama_model_loader: - kv 41: general.dataset.9.name str = Smoltalk llama_model_loader: - kv 42: general.dataset.9.organization str = HuggingFaceTB llama_model_loader: - kv 43: general.dataset.9.repo_url str = https://huggingface.co/HuggingFaceTB/... llama_model_loader: - kv 44: general.dataset.10.name str = Samantha Data llama_model_loader: - kv 45: general.dataset.10.organization str = Cognitivecomputations llama_model_loader: - kv 46: general.dataset.10.repo_url str = https://huggingface.co/cognitivecompu... llama_model_loader: - kv 47: general.dataset.11.name str = CodeFeedback Filtered Instruction llama_model_loader: - kv 48: general.dataset.11.organization str = M A P llama_model_loader: - kv 49: general.dataset.11.repo_url str = https://huggingface.co/m-a-p/CodeFeed... llama_model_loader: - kv 50: general.dataset.12.name str = Code Feedback llama_model_loader: - kv 51: general.dataset.12.organization str = M A P llama_model_loader: - kv 52: general.dataset.12.repo_url str = https://huggingface.co/m-a-p/Code-Fee... llama_model_loader: - kv 53: general.languages arr[str,1] = ["en"] llama_model_loader: - kv 54: llama.block_count u32 = 32 llama_model_loader: - kv 55: llama.context_length u32 = 131072 llama_model_loader: - kv 56: llama.embedding_length u32 = 4096 llama_model_loader: - kv 57: llama.feed_forward_length u32 = 14336 llama_model_loader: - kv 58: llama.attention.head_count u32 = 32 llama_model_loader: - kv 59: llama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 60: llama.rope.freq_base f32 = 500000.000000 llama_model_loader: - kv 61: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 62: llama.attention.key_length u32 = 128 llama_model_loader: - kv 63: llama.attention.value_length u32 = 128 llama_model_loader: - kv 64: general.file_type u32 = 7 llama_model_loader: - kv 65: llama.vocab_size u32 = 128258 llama_model_loader: - kv 66: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 67: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 68: tokenizer.ggml.pre str = llama-bpe llama_model_loader: - kv 69: tokenizer.ggml.tokens arr[str,128258] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 70: tokenizer.ggml.token_type arr[i32,128258] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 71: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "... llama_model_loader: - kv 72: tokenizer.ggml.bos_token_id u32 = 128000 llama_model_loader: - kv 73: tokenizer.ggml.eos_token_id u32 = 128256 llama_model_loader: - kv 74: tokenizer.ggml.padding_token_id u32 = 128001 llama_model_loader: - kv 75: tokenizer.chat_template str = {% if not add_generation_prompt is de... 
llama_model_loader: - kv 76: general.quantization_version u32 = 2 llama_model_loader: - type f32: 66 tensors llama_model_loader: - type q8_0: 226 tensors print_info: file format = GGUF V3 (latest) print_info: file type = Q8_0 print_info: file size = 7.95 GiB (8.50 BPW) load: special tokens cache size = 258 load: token to piece cache size = 0.7999 MB print_info: arch = llama print_info: vocab_only = 1 print_info: model type = ?B print_info: model params = 8.03 B print_info: general.name = Dolphin 3.0 Llama 3.1 8B print_info: vocab type = BPE print_info: n_vocab = 128258 print_info: n_merges = 280147 print_info: BOS token = 128000 '<|begin_of_text|>' print_info: EOS token = 128256 '<|im_end|>' print_info: EOT token = 128256 '<|im_end|>' print_info: EOM token = 128008 '<|eom_id|>' print_info: PAD token = 128001 '<|end_of_text|>' print_info: LF token = 198 'Ċ' print_info: EOG token = 128008 '<|eom_id|>' print_info: EOG token = 128009 '<|eot_id|>' print_info: EOG token = 128256 '<|im_end|>' print_info: max token length = 256 llama_model_load: vocab only - skipping tensors time=2025-07-05T10:46:41.029-07:00 level=INFO source=server.go:438 msg="starting llama server" cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model C:\\Users\\admin\\.ollama\\models\\blobs\\sha256-bfcba39d999d20dc12116b3c78c52bfec10adfc5e41303373ce507de49c3293c --ctx-size 8192 --batch-size 512 --n-gpu-layers 33 --threads 8 --no-mmap --parallel 2 --port 58372" time=2025-07-05T10:46:41.034-07:00 level=INFO source=sched.go:483 msg="loaded runners" count=1 time=2025-07-05T10:46:41.034-07:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding" time=2025-07-05T10:46:41.034-07:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server error" time=2025-07-05T10:46:41.090-07:00 level=INFO source=runner.go:815 msg="starting go runner" ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 1 CUDA devices: Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes load_backend: loaded CUDA backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\ggml-cuda.dll load_backend: loaded CPU backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll time=2025-07-05T10:46:41.187-07:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang) time=2025-07-05T10:46:41.187-07:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:58372" llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4090) - 22994 MiB free llama_model_loader: loaded meta data with 77 key-value pairs and 292 tensors from C:\Users\admin\.ollama\models\blobs\sha256-bfcba39d999d20dc12116b3c78c52bfec10adfc5e41303373ce507de49c3293c (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 
llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = Dolphin 3.0 Llama 3.1 8B llama_model_loader: - kv 3: general.organization str = Cognitivecomputations llama_model_loader: - kv 4: general.basename str = Dolphin-3.0-Llama-3.1 llama_model_loader: - kv 5: general.size_label str = 8B llama_model_loader: - kv 6: general.license str = llama3.1 llama_model_loader: - kv 7: general.base_model.count u32 = 1 llama_model_loader: - kv 8: general.base_model.0.name str = Llama 3.1 8B llama_model_loader: - kv 9: general.base_model.0.organization str = Meta Llama llama_model_loader: - kv 10: general.base_model.0.repo_url str = https://huggingface.co/meta-llama/Lla... llama_model_loader: - kv 11: general.dataset.count u32 = 13 llama_model_loader: - kv 12: general.dataset.0.name str = Opc Sft Stage1 llama_model_loader: - kv 13: general.dataset.0.organization str = OpenCoder LLM llama_model_loader: - kv 14: general.dataset.0.repo_url str = https://huggingface.co/OpenCoder-LLM/... llama_model_loader: - kv 15: general.dataset.1.name str = Opc Sft Stage2 llama_model_loader: - kv 16: general.dataset.1.organization str = OpenCoder LLM llama_model_loader: - kv 17: general.dataset.1.repo_url str = https://huggingface.co/OpenCoder-LLM/... llama_model_loader: - kv 18: general.dataset.2.name str = Orca Agentinstruct 1M v1 llama_model_loader: - kv 19: general.dataset.2.version str = v1 llama_model_loader: - kv 20: general.dataset.2.organization str = Microsoft llama_model_loader: - kv 21: general.dataset.2.repo_url str = https://huggingface.co/microsoft/orca... llama_model_loader: - kv 22: general.dataset.3.name str = Orca Math Word Problems 200k llama_model_loader: - kv 23: general.dataset.3.organization str = Microsoft llama_model_loader: - kv 24: general.dataset.3.repo_url str = https://huggingface.co/microsoft/orca... llama_model_loader: - kv 25: general.dataset.4.name str = Hermes Function Calling v1 llama_model_loader: - kv 26: general.dataset.4.version str = v1 llama_model_loader: - kv 27: general.dataset.4.organization str = NousResearch llama_model_loader: - kv 28: general.dataset.4.repo_url str = https://huggingface.co/NousResearch/h... llama_model_loader: - kv 29: general.dataset.5.name str = NuminaMath CoT llama_model_loader: - kv 30: general.dataset.5.organization str = AI MO llama_model_loader: - kv 31: general.dataset.5.repo_url str = https://huggingface.co/AI-MO/NuminaMa... llama_model_loader: - kv 32: general.dataset.6.name str = NuminaMath TIR llama_model_loader: - kv 33: general.dataset.6.organization str = AI MO llama_model_loader: - kv 34: general.dataset.6.repo_url str = https://huggingface.co/AI-MO/NuminaMa... llama_model_loader: - kv 35: general.dataset.7.name str = Tulu 3 Sft Mixture llama_model_loader: - kv 36: general.dataset.7.organization str = Allenai llama_model_loader: - kv 37: general.dataset.7.repo_url str = https://huggingface.co/allenai/tulu-3... llama_model_loader: - kv 38: general.dataset.8.name str = Dolphin Coder llama_model_loader: - kv 39: general.dataset.8.organization str = Cognitivecomputations llama_model_loader: - kv 40: general.dataset.8.repo_url str = https://huggingface.co/cognitivecompu... llama_model_loader: - kv 41: general.dataset.9.name str = Smoltalk llama_model_loader: - kv 42: general.dataset.9.organization str = HuggingFaceTB llama_model_loader: - kv 43: general.dataset.9.repo_url str = https://huggingface.co/HuggingFaceTB/... 
llama_model_loader: - kv 44: general.dataset.10.name str = Samantha Data llama_model_loader: - kv 45: general.dataset.10.organization str = Cognitivecomputations llama_model_loader: - kv 46: general.dataset.10.repo_url str = https://huggingface.co/cognitivecompu... llama_model_loader: - kv 47: general.dataset.11.name str = CodeFeedback Filtered Instruction llama_model_loader: - kv 48: general.dataset.11.organization str = M A P llama_model_loader: - kv 49: general.dataset.11.repo_url str = https://huggingface.co/m-a-p/CodeFeed... llama_model_loader: - kv 50: general.dataset.12.name str = Code Feedback llama_model_loader: - kv 51: general.dataset.12.organization str = M A P llama_model_loader: - kv 52: general.dataset.12.repo_url str = https://huggingface.co/m-a-p/Code-Fee... llama_model_loader: - kv 53: general.languages arr[str,1] = ["en"] llama_model_loader: - kv 54: llama.block_count u32 = 32 llama_model_loader: - kv 55: llama.context_length u32 = 131072 llama_model_loader: - kv 56: llama.embedding_length u32 = 4096 llama_model_loader: - kv 57: llama.feed_forward_length u32 = 14336 llama_model_loader: - kv 58: llama.attention.head_count u32 = 32 llama_model_loader: - kv 59: llama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 60: llama.rope.freq_base f32 = 500000.000000 llama_model_loader: - kv 61: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 62: llama.attention.key_length u32 = 128 llama_model_loader: - kv 63: llama.attention.value_length u32 = 128 llama_model_loader: - kv 64: general.file_type u32 = 7 llama_model_loader: - kv 65: llama.vocab_size u32 = 128258 llama_model_loader: - kv 66: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 67: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 68: tokenizer.ggml.pre str = llama-bpe time=2025-07-05T10:46:41.285-07:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model" llama_model_loader: - kv 69: tokenizer.ggml.tokens arr[str,128258] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 70: tokenizer.ggml.token_type arr[i32,128258] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 71: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "... llama_model_loader: - kv 72: tokenizer.ggml.bos_token_id u32 = 128000 llama_model_loader: - kv 73: tokenizer.ggml.eos_token_id u32 = 128256 llama_model_loader: - kv 74: tokenizer.ggml.padding_token_id u32 = 128001 llama_model_loader: - kv 75: tokenizer.chat_template str = {% if not add_generation_prompt is de... 
llama_model_loader: - kv 76: general.quantization_version u32 = 2 llama_model_loader: - type f32: 66 tensors llama_model_loader: - type q8_0: 226 tensors print_info: file format = GGUF V3 (latest) print_info: file type = Q8_0 print_info: file size = 7.95 GiB (8.50 BPW) load: special tokens cache size = 258 load: token to piece cache size = 0.7999 MB print_info: arch = llama print_info: vocab_only = 0 print_info: n_ctx_train = 131072 print_info: n_embd = 4096 print_info: n_layer = 32 print_info: n_head = 32 print_info: n_head_kv = 8 print_info: n_rot = 128 print_info: n_swa = 0 print_info: n_swa_pattern = 1 print_info: n_embd_head_k = 128 print_info: n_embd_head_v = 128 print_info: n_gqa = 4 print_info: n_embd_k_gqa = 1024 print_info: n_embd_v_gqa = 1024 print_info: f_norm_eps = 0.0e+00 print_info: f_norm_rms_eps = 1.0e-05 print_info: f_clamp_kqv = 0.0e+00 print_info: f_max_alibi_bias = 0.0e+00 print_info: f_logit_scale = 0.0e+00 print_info: f_attn_scale = 0.0e+00 print_info: n_ff = 14336 print_info: n_expert = 0 print_info: n_expert_used = 0 print_info: causal attn = 1 print_info: pooling type = 0 print_info: rope type = 0 print_info: rope scaling = linear print_info: freq_base_train = 500000.0 print_info: freq_scale_train = 1 print_info: n_ctx_orig_yarn = 131072 print_info: rope_finetuned = unknown print_info: ssm_d_conv = 0 print_info: ssm_d_inner = 0 print_info: ssm_d_state = 0 print_info: ssm_dt_rank = 0 print_info: ssm_dt_b_c_rms = 0 print_info: model type = 8B print_info: model params = 8.03 B print_info: general.name = Dolphin 3.0 Llama 3.1 8B print_info: vocab type = BPE print_info: n_vocab = 128258 print_info: n_merges = 280147 print_info: BOS token = 128000 '<|begin_of_text|>' print_info: EOS token = 128256 '<|im_end|>' print_info: EOT token = 128256 '<|im_end|>' print_info: EOM token = 128008 '<|eom_id|>' print_info: PAD token = 128001 '<|end_of_text|>' print_info: LF token = 198 'Ċ' print_info: EOG token = 128008 '<|eom_id|>' print_info: EOG token = 128009 '<|eot_id|>' print_info: EOG token = 128256 '<|im_end|>' print_info: max token length = 256 load_tensors: loading model tensors, this can take a while... 
(mmap = false) load_tensors: offloading 32 repeating layers to GPU load_tensors: offloading output layer to GPU load_tensors: offloaded 33/33 layers to GPU load_tensors: CUDA_Host model buffer size = 532.32 MiB load_tensors: CUDA0 model buffer size = 7605.34 MiB llama_context: constructing llama_context llama_context: n_seq_max = 2 llama_context: n_ctx = 8192 llama_context: n_ctx_per_seq = 4096 llama_context: n_batch = 1024 llama_context: n_ubatch = 512 llama_context: causal_attn = 1 llama_context: flash_attn = 0 llama_context: freq_base = 500000.0 llama_context: freq_scale = 1 llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized llama_context: CUDA_Host output buffer size = 1.01 MiB llama_kv_cache_unified: kv_size = 8192, type_k = 'f16', type_v = 'f16', n_layer = 32, can_shift = 1, padding = 32 llama_kv_cache_unified: CUDA0 KV buffer size = 1024.00 MiB llama_kv_cache_unified: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB llama_context: CUDA0 compute buffer size = 560.00 MiB llama_context: CUDA_Host compute buffer size = 24.01 MiB llama_context: graph nodes = 1094 llama_context: graph splits = 2 time=2025-07-05T10:46:43.039-07:00 level=INFO source=server.go:637 msg="llama runner started in 2.00 seconds" [GIN] 2025/07/05 - 10:46:43 | 200 | 2.9860402s | 127.0.0.1 | POST "/api/generate" time=2025-07-05T10:46:44.403-07:00 level=INFO source=sched.go:548 msg="updated VRAM based on existing loaded models" gpu=GPU-abd6bb9b-1deb-493f-9b96-28ba55537561 library=cuda total="24.0 GiB" available="12.8 GiB" time=2025-07-05T10:46:44.744-07:00 level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=C:\Users\admin\.ollama\models\blobs\sha256-ccc0cddac56136ef0969cf2e3e9ac051124c937be42503b47ec570dead85ff87 gpu=GPU-abd6bb9b-1deb-493f-9b96-28ba55537561 parallel=2 available=23850684416 required="20.7 GiB" time=2025-07-05T10:46:44.762-07:00 level=INFO source=server.go:135 msg="system memory" total="63.3 GiB" free="50.4 GiB" free_swap="52.6 GiB" time=2025-07-05T10:46:44.763-07:00 level=INFO source=server.go:175 msg=offload library=cuda layers.requested=-1 layers.model=63 layers.offload=63 layers.split="" memory.available="[22.2 GiB]" memory.gpu_overhead="0 B" memory.required.full="20.7 GiB" memory.required.partial="20.7 GiB" memory.required.kv="1.6 GiB" memory.required.allocations="[20.7 GiB]" memory.weights.total="16.0 GiB" memory.weights.repeating="13.4 GiB" memory.weights.nonrepeating="2.6 GiB" memory.graph.full="565.0 MiB" memory.graph.partial="1.6 GiB" projector.weights="806.2 MiB" projector.graph="1.0 GiB" time=2025-07-05T10:46:44.788-07:00 level=INFO source=server.go:438 msg="starting llama server" cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\admin\\.ollama\\models\\blobs\\sha256-ccc0cddac56136ef0969cf2e3e9ac051124c937be42503b47ec570dead85ff87 --ctx-size 8192 --batch-size 512 --n-gpu-layers 63 --threads 8 --no-mmap --parallel 2 --port 58380" time=2025-07-05T10:46:44.792-07:00 level=INFO source=sched.go:483 msg="loaded runners" count=1 time=2025-07-05T10:46:44.792-07:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding" time=2025-07-05T10:46:44.792-07:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server error" time=2025-07-05T10:46:44.830-07:00 level=INFO source=runner.go:925 msg="starting ollama engine" time=2025-07-05T10:46:44.831-07:00 
level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:58380" time=2025-07-05T10:46:44.856-07:00 level=INFO source=ggml.go:92 msg="" architecture=gemma3 file_type=Q4_0 name="" description="" num_tensors=1247 num_key_values=40 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 1 CUDA devices: Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes load_backend: loaded CUDA backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\ggml-cuda.dll load_backend: loaded CPU backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll time=2025-07-05T10:46:44.953-07:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang) time=2025-07-05T10:46:45.044-07:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model" time=2025-07-05T10:46:45.052-07:00 level=INFO source=ggml.go:362 msg="model weights" buffer=CUDA0 size="16.8 GiB" time=2025-07-05T10:46:45.052-07:00 level=INFO source=ggml.go:362 msg="model weights" buffer=CPU size="2.6 GiB" time=2025-07-05T10:46:45.160-07:00 level=INFO source=ggml.go:651 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="1.1 GiB" time=2025-07-05T10:46:45.160-07:00 level=INFO source=ggml.go:651 msg="compute graph" backend=CPU buffer_type=CPU size="0 B" time=2025-07-05T10:46:45.464-07:00 level=INFO source=ggml.go:651 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="1.1 GiB" time=2025-07-05T10:46:45.465-07:00 level=INFO source=ggml.go:651 msg="compute graph" backend=CPU buffer_type=CPU size="10.5 MiB" time=2025-07-05T10:46:48.549-07:00 level=INFO source=server.go:637 msg="llama runner started in 3.76 seconds" time=2025-07-05T10:46:48.599-07:00 level=WARN source=runner.go:157 msg="truncating input prompt" limit=4096 prompt=21685 keep=4 new=4096 [GIN] 2025/07/05 - 10:51:23 | 500 | 4m39s | 127.0.0.1 | POST "/api/generate" ``` ### OS Windows ### GPU Nvidia ### CPU Intel ### Ollama version 0.9.5
GiteaMirror added the bug label 2026-05-04 18:18:05 -05:00
@rick-github commented on GitHub (Jul 5, 2025):

Is the output coherent? Long inference times can be the result of a model losing coherence and rambling. Loss of coherence can be caused by the model losing the system message and user message. Notably, this line:

```
time=2025-07-05T10:46:48.599-07:00 level=WARN source=runner.go:157 msg="truncating input prompt" limit=4096 prompt=21685 keep=4 new=4096
```

indicates that the prompt doesn't fit in the context buffer, and the head of the buffer (system and user instructions) is being discarded. The difference in temperature causes a different probability distribution over outputs, so the less random output (lower temperature) might reduce the likelihood of generating an EOS (end of sequence) token, thereby extending the inference run.
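
One way to see whether a given request is hitting this truncation, without watching the server log, is to check the token counts the API reports back. A minimal sketch (the `jq --rawfile` payload construction is illustrative, assuming the long input lives in `input.txt`):

```console
$ jq -n --rawfile p input.txt '{model:"gemma3:27b-it-qat", prompt:$p, stream:false}' \
    | curl -s localhost:11434/api/generate --data-binary @- \
    | jq '{prompt_tokens: .prompt_eval_count, output_tokens: .eval_count}'
```

If `prompt_tokens` is pinned at the context limit (4096 here) while the input is far longer, the head of the prompt was discarded.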

You can prevent this runaway scenario by setting [`num_predict`](https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values:~:text=stop%20%22AI%20assistant%3A%22-,num_predict,-Maximum%20number%20of) in the API call or Modelfile.
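
For instance, a minimal sketch of that mitigation (the 512-token cap is an arbitrary illustration, not a tuned value):

```console
$ curl -s localhost:11434/api/generate -d '{
    "model": "gemma3:27b-it-qat",
    "prompt": "why is the sky blue",
    "stream": false,
    "options": {"temperature": 0, "num_predict": 512}
  }' | jq '{done_reason, output_tokens: .eval_count}'
```

The equivalent Modelfile line is `PARAMETER num_predict 512`; when generation is cut off by the cap, `done_reason` should come back as `"length"` rather than `"stop"`.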

temperature:0

```console
$ for v in 0.9.3 0.9.5 ; do OLLAMA_DOCKER_TAG=$v docker compose up -d ollama 2>&- >&- ; sleep 2 ; curl -s localhost:11434/api/version | jq -c ; curl -s localhost:11434/api/generate -d '{"model":"gemma3:27b-it-qat","prompt":"why is the sky blue","stream":false,"options":{"temperature":0}}' | jq '{"model":.model,"duration":(.eval_duration/1e9),"tps":(.eval_count/(.eval_duration/1e9))}'; done
{"version":"0.9.3"}
{
  "model": "gemma3:27b-it-qat",
  "duration": 51.142466324,
  "tps": 5.807306947585057
}
{"version":"0.9.5"}
{
  "model": "gemma3:27b-it-qat",
  "duration": 51.060142702,
  "tps": 5.81666999509515
}
```

temperature:1

```console
$ for v in 0.9.3 0.9.5 ; do OLLAMA_DOCKER_TAG=$v docker compose up -d ollama 2>&- >&- ; sleep 2 ; curl -s localhost:11434/api/version | jq -c ; curl -s localhost:11434/api/generate -d '{"model":"gemma3:27b-it-qat","prompt":"why is the sky blue","stream":false,"options":{"temperature":1}}' | jq '{"model":.model,"duration":(.eval_duration/1e9),"tps":(.eval_count/(.eval_duration/1e9))}'; done
{"version":"0.9.3"}
{
  "model": "gemma3:27b-it-qat",
  "duration": 51.106510741,
  "tps": 5.967928461124914
}
{"version":"0.9.5"}
{
  "model": "gemma3:27b-it-qat",
  "duration": 54.279999133,
  "tps": 5.913780492395868
}
```
@2jfs904judsw20600jikn613d0dookl23jsig commented on GitHub (Jul 5, 2025):

> Is the output coherent? Long inference times can be the result of a model losing coherence and rambling. Loss of coherence can be caused by the model losing the system message and user message. Notably, this line:
>
> ```
> time=2025-07-05T10:46:48.599-07:00 level=WARN source=runner.go:157 msg="truncating input prompt" limit=4096 prompt=21685 keep=4 new=4096
> ```
>
> indicates that the prompt doesn't fit in the context buffer, and the head of the buffer (system and user instructions) is being discarded. The difference in temperature causes a different probability distribution over outputs, so the less random output (lower temperature) might reduce the likelihood of generating an EOS (end of sequence) token, thereby extending the inference run.
>
> You can prevent this runaway scenario by setting [`num_predict`](https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values:~:text=stop%20%22AI%20assistant%3A%22-,num_predict,-Maximum%20number%20of) in the API call or Modelfile.
>
> temperature:0
>
> ```console
> $ for v in 0.9.3 0.9.5 ; do OLLAMA_DOCKER_TAG=$v docker compose up -d ollama 2>&- >&- ; sleep 2 ; curl -s localhost:11434/api/version | jq -c ; curl -s localhost:11434/api/generate -d '{"model":"gemma3:27b-it-qat","prompt":"why is the sky blue","stream":false,"options":{"temperature":0}}' | jq '{"model":.model,"duration":(.eval_duration/1e9),"tps":(.eval_count/(.eval_duration/1e9))}'; done
> {"version":"0.9.3"}
> {
>   "model": "gemma3:27b-it-qat",
>   "duration": 51.142466324,
>   "tps": 5.807306947585057
> }
> {"version":"0.9.5"}
> {
>   "model": "gemma3:27b-it-qat",
>   "duration": 51.060142702,
>   "tps": 5.81666999509515
> }
> ```
>
> temperature:1
>
> ```console
> $ for v in 0.9.3 0.9.5 ; do OLLAMA_DOCKER_TAG=$v docker compose up -d ollama 2>&- >&- ; sleep 2 ; curl -s localhost:11434/api/version | jq -c ; curl -s localhost:11434/api/generate -d '{"model":"gemma3:27b-it-qat","prompt":"why is the sky blue","stream":false,"options":{"temperature":1}}' | jq '{"model":.model,"duration":(.eval_duration/1e9),"tps":(.eval_count/(.eval_duration/1e9))}'; done
> {"version":"0.9.3"}
> {
>   "model": "gemma3:27b-it-qat",
>   "duration": 51.106510741,
>   "tps": 5.967928461124914
> }
> {"version":"0.9.5"}
> {
>   "model": "gemma3:27b-it-qat",
>   "duration": 54.279999133,
>   "tps": 5.913780492395868
> }
> ```

Oh dear. I'm realizing my rookie mistake, and I have no idea how I haven't run into this issue before: I thought I was calling inference with num_ctx set to the model's capabilities, but I wasn't; it was still set to Ollama's default of 4096.

However, before 0.9.5, output quality for long inputs turned to garbage, but requests never failed or timed out the way they do now. So this is potentially still an "issue" in 0.9.5, though one that is easily fixed by setting the num_ctx parameter correctly. Thanks for the tip.
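
For reference, the fix looks like this as a minimal sketch (the 32768 value is illustrative; pick a size that covers your inputs and fits in VRAM):

```console
$ curl -s localhost:11434/api/generate -d '{
    "model": "gemma3:27b-it-qat",
    "prompt": "why is the sky blue",
    "stream": false,
    "options": {"num_ctx": 32768}
  }' | jq '{prompt_tokens: .prompt_eval_count, done_reason}'
```

The same thing can be baked into a Modelfile with `PARAMETER num_ctx 32768`, and newer builds also read an `OLLAMA_CONTEXT_LENGTH` environment variable for the server-wide default, if your version supports it.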
