[GH-ISSUE #12873] Ollama 0.12.7 is 4x+ slower than Ollama 0.12.6 #70587

Closed
opened 2026-05-04 22:06:49 -05:00 by GiteaMirror · 13 comments
Owner

Originally created by @K9Kraken on GitHub (Oct 31, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12873

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Ollama 0.12.7 is running 4x+ slower than Ollama 0.12.6.

Running on Void Linux, kernel 6.17.5_1, with an AMD CPU.
Loading llama3.2 and just saying "Hello", the model takes much longer to load and is much slower to respond.
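
A minimal way to time that same round trip against each build; a sketch, assuming the default endpoint shown in the logs below (model and prompt taken from this report):

```shell
# Send the same "Hello" prompt to whichever build is running and
# compare wall-clock time between 0.12.6 and 0.12.7.
time curl -s http://127.0.0.1:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Hello", "stream": false}'
```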

Setting the following made 0.12.7 even slower:

`export OLLAMA_LLM_LIBRARY=cpu`

Setting the number of threads made no difference.
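
For reference, a sketch of both overrides that were tried. `OLLAMA_LLM_LIBRARY` appears in the server config dump below; `num_thread` is the standard per-request option for thread count (the report does not say which value was used, so the 8 here, matching `NumThreads:8` in the load request log, is illustrative):

```shell
# Force the CPU library selection (this made 0.12.7 slower, per the report):
export OLLAMA_LLM_LIBRARY=cpu
./ollama start

# Override the thread count per request via the API "options" field
# (illustrative value; changing it made no difference, per the report):
curl -s http://127.0.0.1:11434/api/chat \
  -d '{"model": "llama3.2",
       "messages": [{"role": "user", "content": "Hello"}],
       "options": {"num_thread": 8},
       "stream": false}'
```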

Relevant log output

**0.12.6 log:**

```shell
┌[~/Projects/agents/core/ollama/bin]
└> ./ollama start
time=2025-10-30T18:01:18.689-06:00 level=INFO source=routes.go:1511 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/will/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-10-30T18:01:18.689-06:00 level=INFO source=images.go:522 msg="total blobs: 6"
time=2025-10-30T18:01:18.689-06:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-10-30T18:01:18.689-06:00 level=INFO source=routes.go:1564 msg="Listening on 127.0.0.1:11434 (version 0.12.6)"
time=2025-10-30T18:01:18.690-06:00 level=INFO source=runner.go:80 msg="discovering available GPUs..."
time=2025-10-30T18:01:18.744-06:00 level=INFO source=types.go:129 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="62.2 GiB" available="58.6 GiB"
time=2025-10-30T18:01:18.744-06:00 level=INFO source=routes.go:1605 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"
[GIN] 2025/10/30 - 18:01:31 | 200 |      46.515µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/10/30 - 18:01:31 | 200 |   52.859889ms |       127.0.0.1 | POST     "/api/show"
llama_model_loader: loaded meta data with 30 key-value pairs and 255 tensors from /home/will/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Llama 3.2 3B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Llama-3.2
llama_model_loader: - kv   5:                         general.size_label str              = 3B
llama_model_loader: - kv   6:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   7:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   8:                          llama.block_count u32              = 28
llama_model_loader: - kv   9:                       llama.context_length u32              = 131072
llama_model_loader: - kv  10:                     llama.embedding_length u32              = 3072
llama_model_loader: - kv  11:                  llama.feed_forward_length u32              = 8192
llama_model_loader: - kv  12:                 llama.attention.head_count u32              = 24
llama_model_loader: - kv  13:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  14:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  15:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  16:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  17:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  18:                          general.file_type u32              = 15
llama_model_loader: - kv  19:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  20:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  21:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  22:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  23:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  24:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  25:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  26:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  27:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  28:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  29:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   58 tensors
llama_model_loader: - type q4_K:  168 tensors
llama_model_loader: - type q6_K:   29 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 1.87 GiB (5.01 BPW) 
load: printing all EOG tokens:
load:   - 128001 ('<|end_of_text|>')
load:   - 128008 ('<|eom_id|>')
load:   - 128009 ('<|eot_id|>')
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 3.21 B
print_info: general.name     = Llama 3.2 3B Instruct
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128009 '<|eot_id|>'
print_info: EOT token        = 128009 '<|eot_id|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128001 '<|end_of_text|>'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-10-30T18:01:31.523-06:00 level=INFO source=server.go:400 msg="starting runner" cmd="/home/will/Projects/agents/core/ollama/bin/ollama runner --model /home/will/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff --port 41709"
time=2025-10-30T18:01:31.524-06:00 level=INFO source=server.go:505 msg="system memory" total="62.2 GiB" free="58.6 GiB" free_swap="0 B"
time=2025-10-30T18:01:31.525-06:00 level=INFO source=memory.go:36 msg="new model will fit in available VRAM across minimum required GPUs, loading" model=/home/will/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff library=cpu parallel=1 required="0 B" gpus=1
time=2025-10-30T18:01:31.525-06:00 level=INFO source=server.go:545 msg=offload library=cpu layers.requested=-1 layers.model=29 layers.offload=0 layers.split=[] memory.available="[58.6 GiB]" memory.gpu_overhead="0 B" memory.required.full="2.6 GiB" memory.required.partial="0 B" memory.required.kv="448.0 MiB" memory.required.allocations="[2.6 GiB]" memory.weights.total="1.9 GiB" memory.weights.repeating="1.6 GiB" memory.weights.nonrepeating="308.2 MiB" memory.graph.full="256.5 MiB" memory.graph.partial="570.7 MiB"
time=2025-10-30T18:01:31.535-06:00 level=INFO source=runner.go:893 msg="starting go runner"
load_backend: loaded CPU backend from /home/will/Projects/agents/core/ollama/lib/ollama/libggml-cpu-haswell.so
time=2025-10-30T18:01:31.539-06:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-10-30T18:01:31.540-06:00 level=INFO source=runner.go:929 msg="Server listening on 127.0.0.1:41709"
time=2025-10-30T18:01:31.547-06:00 level=INFO source=runner.go:828 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-30T18:01:31.547-06:00 level=INFO source=server.go:1272 msg="waiting for llama runner to start responding"
time=2025-10-30T18:01:31.547-06:00 level=INFO source=server.go:1306 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: loaded meta data with 30 key-value pairs and 255 tensors from /home/will/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Llama 3.2 3B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Llama-3.2
llama_model_loader: - kv   5:                         general.size_label str              = 3B
llama_model_loader: - kv   6:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   7:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   8:                          llama.block_count u32              = 28
llama_model_loader: - kv   9:                       llama.context_length u32              = 131072
llama_model_loader: - kv  10:                     llama.embedding_length u32              = 3072
llama_model_loader: - kv  11:                  llama.feed_forward_length u32              = 8192
llama_model_loader: - kv  12:                 llama.attention.head_count u32              = 24
llama_model_loader: - kv  13:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  14:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  15:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  16:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  17:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  18:                          general.file_type u32              = 15
llama_model_loader: - kv  19:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  20:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  21:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  22:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  23:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  24:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  25:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  26:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  27:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  28:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  29:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   58 tensors
llama_model_loader: - type q4_K:  168 tensors
llama_model_loader: - type q6_K:   29 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 1.87 GiB (5.01 BPW) 
load: printing all EOG tokens:
load:   - 128001 ('<|end_of_text|>')
load:   - 128008 ('<|eom_id|>')
load:   - 128009 ('<|eot_id|>')
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 3072
print_info: n_layer          = 28
print_info: n_head           = 24
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 3
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 8192
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: model type       = 3B
print_info: model params     = 3.21 B
print_info: general.name     = Llama 3.2 3B Instruct
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128009 '<|eot_id|>'
print_info: EOT token        = 128009 '<|eot_id|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128001 '<|end_of_text|>'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors:          CPU model buffer size =  1918.35 MiB
llama_init_from_model: model default pooling_type is [0], but [-1] was specified
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = disabled
llama_context: kv_unified    = false
llama_context: freq_base     = 500000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context:        CPU  output buffer size =     0.50 MiB
llama_kv_cache:        CPU KV buffer size =   448.00 MiB
llama_kv_cache: size =  448.00 MiB (  4096 cells,  28 layers,  1/1 seqs), K (f16):  224.00 MiB, V (f16):  224.00 MiB
llama_context:        CPU compute buffer size =   256.50 MiB
llama_context: graph nodes  = 1014
llama_context: graph splits = 1
time=2025-10-30T18:01:33.054-06:00 level=INFO source=server.go:1310 msg="llama runner started in 1.53 seconds"
time=2025-10-30T18:01:33.055-06:00 level=INFO source=sched.go:482 msg="loaded runners" count=1
time=2025-10-30T18:01:33.055-06:00 level=INFO source=server.go:1272 msg="waiting for llama runner to start responding"
time=2025-10-30T18:01:33.055-06:00 level=INFO source=server.go:1310 msg="llama runner started in 1.53 seconds"
[GIN] 2025/10/30 - 18:01:33 | 200 |   1.94835911s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/10/30 - 18:01:37 | 200 |  879.023877ms |       127.0.0.1 | POST     "/api/chat"
```



**0.12.7 log:**

```shell
┌[~/Downloads/ollama-linux-amd64/bin]
└> ./ollama start
time=2025-10-30T17:56:23.899-06:00 level=INFO source=routes.go:1524 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/will/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-10-30T17:56:23.899-06:00 level=INFO source=images.go:522 msg="total blobs: 6"
time=2025-10-30T17:56:23.899-06:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-10-30T17:56:23.900-06:00 level=INFO source=routes.go:1577 msg="Listening on 127.0.0.1:11434 (version 0.12.7)"
time=2025-10-30T17:56:23.900-06:00 level=INFO source=runner.go:76 msg="discovering available GPUs..."
time=2025-10-30T17:56:23.901-06:00 level=INFO source=server.go:385 msg="starting runner" cmd="/home/will/Downloads/ollama-linux-amd64/bin/ollama runner --ollama-engine --port 36023"
time=2025-10-30T17:56:23.928-06:00 level=INFO source=server.go:385 msg="starting runner" cmd="/home/will/Downloads/ollama-linux-amd64/bin/ollama runner --ollama-engine --port 34569"
time=2025-10-30T17:56:23.954-06:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="62.2 GiB" available="58.7 GiB"
time=2025-10-30T17:56:23.954-06:00 level=INFO source=routes.go:1618 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"
[GIN] 2025/10/30 - 17:56:36 | 200 |       44.21µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/10/30 - 17:56:36 | 200 |   50.461901ms |       127.0.0.1 | POST     "/api/show"
llama_model_loader: loaded meta data with 30 key-value pairs and 255 tensors from /home/will/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Llama 3.2 3B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Llama-3.2
llama_model_loader: - kv   5:                         general.size_label str              = 3B
llama_model_loader: - kv   6:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   7:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   8:                          llama.block_count u32              = 28
llama_model_loader: - kv   9:                       llama.context_length u32              = 131072
llama_model_loader: - kv  10:                     llama.embedding_length u32              = 3072
llama_model_loader: - kv  11:                  llama.feed_forward_length u32              = 8192
llama_model_loader: - kv  12:                 llama.attention.head_count u32              = 24
llama_model_loader: - kv  13:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  14:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  15:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  16:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  17:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  18:                          general.file_type u32              = 15
llama_model_loader: - kv  19:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  20:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  21:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  22:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  23:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  24:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  25:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  26:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  27:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  28:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  29:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   58 tensors
llama_model_loader: - type q4_K:  168 tensors
llama_model_loader: - type q6_K:   29 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 1.87 GiB (5.01 BPW) 
load: printing all EOG tokens:
load:   - 128001 ('<|end_of_text|>')
load:   - 128008 ('<|eom_id|>')
load:   - 128009 ('<|eot_id|>')
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 3.21 B
print_info: general.name     = Llama 3.2 3B Instruct
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128009 '<|eot_id|>'
print_info: EOT token        = 128009 '<|eot_id|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128001 '<|end_of_text|>'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-10-30T17:56:36.985-06:00 level=INFO source=server.go:385 msg="starting runner" cmd="/home/will/Downloads/ollama-linux-amd64/bin/ollama runner --model /home/will/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff --port 40877"
time=2025-10-30T17:56:36.985-06:00 level=INFO source=server.go:455 msg="system memory" total="62.2 GiB" free="58.7 GiB" free_swap="0 B"
time=2025-10-30T17:56:36.986-06:00 level=INFO source=memory.go:110 msg="new model will fit in available system memory for CPU inference, loading" model=/home/will/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff parallel=1 required="2.3 GiB"
time=2025-10-30T17:56:36.986-06:00 level=INFO source=server.go:507 msg=offload library=cpu layers.requested=-1 layers.model=29 layers.offload=0 layers.split=[] memory.available=[] memory.gpu_overhead="0 B" memory.required.full="2.3 GiB" memory.required.partial="0 B" memory.required.kv="448.0 MiB" memory.required.allocations=[] memory.weights.total="1.9 GiB" memory.weights.repeating="1.6 GiB" memory.weights.nonrepeating="308.2 MiB" memory.graph.full="256.5 MiB" memory.graph.partial="570.7 MiB"
time=2025-10-30T17:56:37.000-06:00 level=INFO source=runner.go:910 msg="starting go runner"
time=2025-10-30T17:56:37.000-06:00 level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-10-30T17:56:37.001-06:00 level=INFO source=runner.go:946 msg="Server listening on 127.0.0.1:40877"
time=2025-10-30T17:56:37.008-06:00 level=INFO source=runner.go:845 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-30T17:56:37.008-06:00 level=INFO source=server.go:1236 msg="waiting for llama runner to start responding"
time=2025-10-30T17:56:37.008-06:00 level=INFO source=server.go:1270 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: loaded meta data with 30 key-value pairs and 255 tensors from /home/will/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Llama 3.2 3B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Llama-3.2
llama_model_loader: - kv   5:                         general.size_label str              = 3B
llama_model_loader: - kv   6:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   7:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   8:                          llama.block_count u32              = 28
llama_model_loader: - kv   9:                       llama.context_length u32              = 131072
llama_model_loader: - kv  10:                     llama.embedding_length u32              = 3072
llama_model_loader: - kv  11:                  llama.feed_forward_length u32              = 8192
llama_model_loader: - kv  12:                 llama.attention.head_count u32              = 24
llama_model_loader: - kv  13:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  14:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  15:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  16:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  17:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  18:                          general.file_type u32              = 15
llama_model_loader: - kv  19:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  20:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  21:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  22:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  23:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  24:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  25:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  26:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  27:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  28:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  29:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   58 tensors
llama_model_loader: - type q4_K:  168 tensors
llama_model_loader: - type q6_K:   29 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 1.87 GiB (5.01 BPW) 
load: printing all EOG tokens:
load:   - 128001 ('<|end_of_text|>')
load:   - 128008 ('<|eom_id|>')
load:   - 128009 ('<|eot_id|>')
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 3072
print_info: n_layer          = 28
print_info: n_head           = 24
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 3
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 8192
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: model type       = 3B
print_info: model params     = 3.21 B
print_info: general.name     = Llama 3.2 3B Instruct
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128009 '<|eot_id|>'
print_info: EOT token        = 128009 '<|eot_id|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128001 '<|end_of_text|>'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors:          CPU model buffer size =  1918.35 MiB
llama_init_from_model: model default pooling_type is [0], but [-1] was specified
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = disabled
llama_context: kv_unified    = false
llama_context: freq_base     = 500000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context:        CPU  output buffer size =     0.50 MiB
llama_kv_cache:        CPU KV buffer size =   448.00 MiB
llama_kv_cache: size =  448.00 MiB (  4096 cells,  28 layers,  1/1 seqs), K (f16):  224.00 MiB, V (f16):  224.00 MiB
llama_context:        CPU compute buffer size =   256.50 MiB
llama_context: graph nodes  = 1014
llama_context: graph splits = 1
time=2025-10-30T17:56:38.515-06:00 level=INFO source=server.go:1274 msg="llama runner started in 1.53 seconds"
time=2025-10-30T17:56:38.515-06:00 level=INFO source=sched.go:493 msg="loaded runners" count=1
time=2025-10-30T17:56:38.515-06:00 level=INFO source=server.go:1236 msg="waiting for llama runner to start responding"
time=2025-10-30T17:56:38.515-06:00 level=INFO source=server.go:1274 msg="llama runner started in 1.53 seconds"
[GIN] 2025/10/30 - 17:56:38 | 200 |  1.936595318s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/10/30 - 17:56:48 | 200 |  7.125096281s |       127.0.0.1 | POST     "/api/chat"
```
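
Worth noting when comparing the two logs: the 0.12.6 runner loads `libggml-cpu-haswell.so` and reports AVX/AVX2/F16C/FMA support, while the 0.12.7 runner reports only `CPU.0.LLAMAFILE=1`. A quick way to pull those lines out side by side (the log file names here are hypothetical):

```shell
# Extract the CPU backend / feature-detection lines from each saved log:
grep -E 'load_backend|msg=system' ollama-0.12.6.log ollama-0.12.7.log
```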

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.12.6 vs 0.12.7

(mmap = false) load_tensors: CPU model buffer size = 1918.35 MiB llama_init_from_model: model default pooling_type is [0], but [-1] was specified llama_context: constructing llama_context llama_context: n_seq_max = 1 llama_context: n_ctx = 4096 llama_context: n_ctx_per_seq = 4096 llama_context: n_batch = 512 llama_context: n_ubatch = 512 llama_context: causal_attn = 1 llama_context: flash_attn = disabled llama_context: kv_unified = false llama_context: freq_base = 500000.0 llama_context: freq_scale = 1 llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized llama_context: CPU output buffer size = 0.50 MiB llama_kv_cache: CPU KV buffer size = 448.00 MiB llama_kv_cache: size = 448.00 MiB ( 4096 cells, 28 layers, 1/1 seqs), K (f16): 224.00 MiB, V (f16): 224.00 MiB llama_context: CPU compute buffer size = 256.50 MiB llama_context: graph nodes = 1014 llama_context: graph splits = 1 time=2025-10-30T17:56:38.515-06:00 level=INFO source=server.go:1274 msg="llama runner started in 1.53 seconds" time=2025-10-30T17:56:38.515-06:00 level=INFO source=sched.go:493 msg="loaded runners" count=1 time=2025-10-30T17:56:38.515-06:00 level=INFO source=server.go:1236 msg="waiting for llama runner to start responding" time=2025-10-30T17:56:38.515-06:00 level=INFO source=server.go:1274 msg="llama runner started in 1.53 seconds" [GIN] 2025/10/30 - 17:56:38 | 200 | 1.936595318s | 127.0.0.1 | POST "/api/generate" [GIN] 2025/10/30 - 17:56:48 | 200 | 7.125096281s | 127.0.0.1 | POST "/api/chat" ``` ### OS Linux ### GPU AMD ### CPU AMD ### Ollama version 12.6 vs 12.7
GiteaMirror added the bug label 2026-05-04 22:06:49 -05:00

@rick-github commented on GitHub (Oct 31, 2025):

0.12.6

```
time=2025-10-30T18:01:31.535-06:00 level=INFO source=runner.go:893 msg="starting go runner"
load_backend: loaded CPU backend from /home/will/Projects/agents/core/ollama/lib/ollama/libggml-cpu-haswell.so
time=2025-10-30T18:01:31.539-06:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
```

0.12.7

```
time=2025-10-30T17:56:37.000-06:00 level=INFO source=runner.go:910 msg="starting go runner"
time=2025-10-30T17:56:37.000-06:00 level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
```

You've installed 0.12.7 in such a way that the ollama server can't find the backend libraries that contain the accelerated inference kernels, so rather than using AVX as 0.12.6 did, 0.12.7 is brute-forcing with the plain CPU backend. The ollama server looks in `../lib/ollama` relative to the ollama binary to find the libraries, so you need to populate `/home/will/Downloads/ollama-linux-amd64/lib/ollama`.
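
A quick way to sanity-check that layout and confirm an accelerated backend gets picked up (a sketch; the paths follow the tarball layout mentioned above, and the grep patterns are illustrative):

```bash
# The backend libraries must sit in ../lib/ollama relative to the binary.
ls ~/Downloads/ollama-linux-amd64/lib/ollama/ | grep libggml

# On startup, a healthy install logs a "load_backend: loaded CPU backend from ..."
# line naming an AVX-capable variant such as libggml-cpu-haswell.so.
~/Downloads/ollama-linux-amd64/bin/ollama serve 2>&1 | grep -E 'load_backend|ggml.go'
```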


@K9Kraken commented on GitHub (Oct 31, 2025):

It looks like an issue with the ollama bin file itself.

Downloading and uncompressing the 12.7 file "ollama-linux-amd64.tgz" gives me the same layout as 12.6:
bin/ollama
lib/ollama/*libfiles*

```bash
┌[~/Downloads/ollama-linux-amd64/bin]
└> ldd bin/ollama
	linux-vdso.so.1 (0x00007f6450429000)
	libresolv.so.2 => /usr/lib/libresolv.so.2 (0x00007f64503f5000)
	libpthread.so.0 => /usr/lib/libpthread.so.0 (0x00007f64503f0000)
	librt.so.1 => /usr/lib/librt.so.1 (0x00007f64503eb000)
	libdl.so.2 => /usr/lib/libdl.so.2 (0x00007f644e1fb000)
	libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x00007f644de00000)
	libm.so.6 => /usr/lib/libm.so.6 (0x00007f644e10d000)
	libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x00007f644ddd3000)
	libc.so.6 => /usr/lib/libc.so.6 (0x00007f644dbe9000)
	/lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007f645042b000)
```

Running this 12.7 bin is slow.
I copied and replaced the 12.7 bin file with the 12.6 bin file and ollama runs normally.

Swapping the 12.7 lib folder out for the 12.6 lib folder and running the 12.7 bin file, it still runs slow.
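
In other words, the bisection was effectively the following (a sketch with hypothetical side-by-side extraction directories, not the exact commands run):

```bash
# Hypothetical layout: the 0.12.6 and 0.12.7 tarballs extracted side by side.

# 0.12.6 binary + 0.12.7 libraries -> runs normally:
cp ollama-0.12.6/bin/ollama ollama-0.12.7/bin/ollama

# 0.12.7 binary + 0.12.6 libraries -> still slow:
rm -rf ollama-0.12.7/lib && cp -r ollama-0.12.6/lib ollama-0.12.7/
```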


@K9Kraken commented on GitHub (Oct 31, 2025):

I have also tried exporting the library path (along the lines of the sketch below), with no luck.
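
The comment doesn't name the exact variable that was exported, so this is a guess at what was tried; `LD_LIBRARY_PATH` here is an assumption:

```bash
# Assumption: pointing the dynamic loader at the bundled backend libraries
# before starting the server.
export LD_LIBRARY_PATH="$HOME/Downloads/ollama-linux-amd64/lib/ollama:$LD_LIBRARY_PATH"
~/Downloads/ollama-linux-amd64/bin/ollama serve
```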


@rod-fu commented on GitHub (Oct 31, 2025):

So if I want to change the Ollama version to 0.12.7, can I do that?


@rod-fu commented on GitHub (Oct 31, 2025):

Because I want to use qwen3-vl, which other versions don't support.


@K9Kraken commented on GitHub (Oct 31, 2025):

I compiled it myself following the instructions here: https://github.com/ollama/ollama/blob/main/docs/development.md

The compiled version runs slow even when built for release.
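
For reference, the core of those build steps is roughly the following (a sketch from memory of the linked development.md; exact presets and flags may differ by version):

```bash
# Build the native ggml backends, then run the Go server from source.
cmake -B build
cmake --build build
go run . serve
```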


@K9Kraken commented on GitHub (Oct 31, 2025):

I downloaded 0.12.7 on my Intel CPU laptop running Void Linux and it is also very slow; it seems even worse, 6x+ slower than 0.12.6.


@dusbot commented on GitHub (Oct 31, 2025):

Can it be fixed in 0.12.8 or later?


@ghost commented on GitHub (Oct 31, 2025):

@K9Kraken
There's a difference between linking a shared library into an application at build time and using libc functions (dlopen/dlsym) to load a library object at runtime and then pull functions out of it.
IIRC, ldd only reads the dependencies recorded in the application's headers and resolves them against the system library paths (maybe it recurses into those libraries too).
It doesn't actually launch your application, let alone launch it and send an HTTP request to trigger a model load so you could check the loaded libraries in /proc/$(pidof ollama)/maps.
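
That runtime check is easy to do by hand (a sketch; the grep pattern is illustrative, and pidof can return several PIDs since the runner is the same binary):

```bash
# Load a model, then see which ggml backend libraries are actually mapped.
ollama run llama3.2 hello >/dev/null
for pid in $(pidof ollama); do
    grep -H 'libggml' "/proc/$pid/maps"
done
```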


@ghost commented on GitHub (Oct 31, 2025):

![Image](https://github.com/user-attachments/assets/43ffd4bb-839b-4538-b7cb-8b09514bd4ff)

0.12.7 CPU 5700x3D
@K9Kraken @rod-fu Can you send over the result of this command on Linux:
`cat /proc/cpuinfo | grep model | tail -1`
My output:
`model name : AMD Ryzen 7 5700X3D 8-Core Processor`
The libggml-cpu-haswell.so library loads for me on 0.12.7 as well.
Maybe the issue is in CPUID feature detection, or detection is keyed off CPU model names.

@K9Kraken Also: `ls -l ~/Downloads/ollama-linux-amd64/lib/ollama/libggml-cpu-haswell.so`

```bash
ls /opt/ollama/lib/ollama/libggml-cpu-haswell.so
/opt/ollama/lib/ollama/libggml-cpu-haswell.so
```

Also, can you guys install it somewhere like /opt and set root:root ownership (`chown -R 0:0`), or use `chattr -R +i`, so that random programs don't have a way to replace your binaries?
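
A sketch of that hardened install, under the assumption the tarball is in the current directory:

```bash
sudo mkdir -p /opt/ollama
sudo tar -C /opt/ollama -xzf ollama-linux-amd64.tgz   # yields bin/ and lib/
sudo chown -R root:root /opt/ollama                   # equivalently: chown -R 0:0
sudo chattr -R +i /opt/ollama                         # optional: make the files immutable
```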

My ollama service:

```
[Unit]
Description=Runs AI models

[Service]
Type=simple
ExecStart=/opt/ollama/bin/ollama serve
User=ollama
WorkingDirectory=/srv/ollama
Environment="OLLAMA_HOST=0.0.0.0:11434"

[Install]
WantedBy=network.target
```

@K9Kraken commented on GitHub (Oct 31, 2025):

@esperanza-esperanza
For both my computers: `cat /proc/cpuinfo | grep model | tail -1`
`model name : AMD Ryzen 7 5800U with Radeon Graphics`
`model name : Intel(R) Core(TM) i5-10210U CPU @ 1.60GHz`

I have never installed Ollama, I've always used it standalone.

```bash
-> ls /opt/ollama/lib/ollama/libggml-cpu-haswell.so
lsd: /opt/ollama/lib/ollama/libggml-cpu-haswell.so: No such file or directory (os error 2)
```

The `libggml-cpu-haswell.so` file is in the correct relative path:

```bash
-> ls -l ~/Downloads/ollama-linux-amd64/lib/ollama/libggml-cpu-haswell.so
.rwxr-xr-x will will 805 KB Wed Oct 29 15:37:24 2025 /home/will/Downloads/ollama-linux-amd64/lib/ollama/libggml-cpu-haswell.so
```

"Also can you guys install it to like /opt and do root:root ownership so that random programs don't have an option to replace your binaries or chown 0:0 or use chattr -R +i?"

I installed Ollama to /opt and tested it by running it from there, and I'm still getting the same behavior:

```bash
-> ls /opt/ollama/lib/ollama/libggml-cpu-haswell.so
/opt/ollama/lib/ollama/libggml-cpu-haswell.so
```

I have tested different models and the issue is consistent:
llama3.2:latest
cogito:3b
deepseek-r1:1.5b


@rick-github commented on GitHub (Oct 31, 2025):

#12886


@dhiltgen commented on GitHub (Oct 31, 2025):

Fix will be in 0.12.9

Reference: github-starred/ollama#70587