[GH-ISSUE #12676] Running the run command for llama3.2 gets stuck #70467

Closed
opened 2026-05-04 21:39:44 -05:00 by GiteaMirror · 17 comments

Originally created by @mahendra0120 on GitHub (Oct 17, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12676

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Whenever I enter the run command for llama3.2, it just gets stuck. I have reinstalled so many times. Any solutions?

Relevant log output


OS

Windows

GPU

AMD

CPU

Intel

Ollama version

0.12.6

GiteaMirror added the bug, windows labels 2026-05-04 21:39:45 -05:00

@rick-github commented on GitHub (Oct 17, 2025):

More details will help in diagnosing the issue. What is the exact command you are typing? Have you downloaded the model? When you say "stuck", do you mean that the loading spinner doesn't stop, or that the client seems to hang before emitting a response? The server log (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may help in debugging.

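Per the troubleshooting guide linked above, the Windows server log is typically written to %LOCALAPPDATA%\Ollama\server.log. A minimal PowerShell sketch for watching it, assuming a default per-user install:

    # Follow the Ollama server log live (path per the troubleshooting doc;
    # default per-user install location assumed)
    Get-Content "$env:LOCALAPPDATA\Ollama\server.log" -Tail 100 -Wait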

@mahendra0120 commented on GitHub (Oct 17, 2025):

I have already downloaded the model.
The command is: ollama run llama3.2.
After entering it, the cursor just below the command stays idle, not even blinking once.


@rick-github commented on GitHub (Oct 17, 2025):

What happens when you run the following:

 curl http://localhost:11434/api/generate -d "{\"model\":\"llama3.2\",\"prompt\":\"hello\"}"

@mahendra0120 commented on GitHub (Oct 17, 2025):

Invoke-WebRequest : A positional parameter cannot be found that accepts argument '{'.
At line:1 char:2
+ curl http://localhost:11434/api/generate -d "{\"model\":\"llama3.2\" ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (:) [Invoke-WebRequest], ParameterBindingException
    + FullyQualifiedErrorId : PositionalParameterNotFound,Microsoft.PowerShell.Commands.InvokeWebRequestCommand

@pdevine commented on GitHub (Oct 19, 2025):

I think in Windows you need to do curl.exe and not curl.

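In Windows PowerShell, curl is an alias for Invoke-WebRequest, which does not understand curl's -d flag; that is what produced the error above. Two possible workarounds, as a sketch (assuming the server listens on the default port 11434):

    # Option 1: call the real curl binary; the --% stop-parsing token makes
    # PowerShell pass the rest of the line verbatim, preserving the JSON quotes
    curl.exe --% http://localhost:11434/api/generate -d "{\"model\":\"llama3.2\",\"prompt\":\"hello\"}"

    # Option 2: stay PowerShell-native; single quotes keep the JSON literal,
    # and "stream": false returns one JSON object instead of a token stream
    Invoke-RestMethod -Method Post -Uri "http://localhost:11434/api/generate" `
        -ContentType "application/json" `
        -Body '{"model":"llama3.2","prompt":"hello","stream":false}'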

@guiksign commented on GitHub (Oct 20, 2025):

@mahendra0120 ran PowerShell instead of cmd.
Same issue here: the run command just hangs.
When running "ollama run llama3.2" the prompt shows ⠋ and never progresses.
Note that I am using llama3.2 because I have a weak i3.
Note as well that other models give the same output.


@guiksign commented on GitHub (Oct 20, 2025):

It was working flawlessly in a previous version; I have not tried rolling back to identify which one.


@rick-github commented on GitHub (Oct 20, 2025):

The server log (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) might help in debugging.

Do you have a GPU? If not, might be #12699.

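To check whether GPU discovery is where the startup stalls, the relevant server log lines are the ones shown in the dumps further down. A small sketch, assuming the default Windows log location:

    # Pull the GPU-discovery lines out of the server log
    Select-String -Path "$env:LOCALAPPDATA\Ollama\server.log" `
        -Pattern 'looking for compatible GPUs', 'inference compute', 'no compatible GPUs'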

@guiksign commented on GitHub (Oct 20, 2025):

I have two GPUs, one Intel and one NVIDIA.
Anyway, I rolled back through previous versions and found that my system works up to 0.12.3.

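When bisecting versions like this, it helps to confirm which build is actually serving requests. A sketch assuming the default host, using the /api/version endpoint that also appears in the logs below:

    # Version reported by the CLI
    ollama -v
    # Version reported by the running server
    (Invoke-RestMethod -Uri "http://localhost:11434/api/version").version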

@guiksign commented on GitHub (Oct 20, 2025):

Windows 10 Home, Intel(R) Core(TM) i3-4030U CPU @ 1.90GHz, 1901 MHz, 2 cores, 4 logical processors


@dhiltgen commented on GitHub (Oct 22, 2025):

If anyone on this issue is still facing problems with Ollama getting stuck, please provide server logs so we can see what's going wrong.


@guiksign commented on GitHub (Oct 22, 2025):

@dhiltgen I am not sure if your request still stands given that @mahendra0120 closed the issue... but anyway, I am able to provide logs.


@guiksign commented on GitHub (Oct 22, 2025):

Log for the working version, 0.12.3:

time=2025-10-22T22:19:01.410+02:00 level=INFO source=routes.go:1475 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\Asus\.ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-10-22T22:19:01.916+02:00 level=INFO source=images.go:518 msg="total blobs: 32"
time=2025-10-22T22:19:01.945+02:00 level=INFO source=images.go:525 msg="total unused blobs removed: 0"
time=2025-10-22T22:19:01.972+02:00 level=INFO source=routes.go:1528 msg="Listening on 127.0.0.1:11434 (version 0.12.3)"
time=2025-10-22T22:19:01.973+02:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-10-22T22:19:01.973+02:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-10-22T22:19:01.973+02:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=2 efficiency=0 threads=4
time=2025-10-22T22:19:02.368+02:00 level=INFO source=gpu.go:631 msg="Unable to load cudart library C:\Windows\system32\nvcuda.dll: symbol lookup for cuDeviceGetUuid failed: La procédure spécifiée est introuvable.\r\n" (French: "The specified procedure could not be found.")
time=2025-10-22T22:19:02.878+02:00 level=INFO source=gpu.go:396 msg="no compatible GPUs were discovered"
time=2025-10-22T22:19:02.878+02:00 level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="3.9 GiB" available="696.9 MiB"
time=2025-10-22T22:19:02.883+02:00 level=INFO source=routes.go:1569 msg="entering low vram mode" "total vram"="3.9 GiB" threshold="20.0 GiB"
[GIN] 2025/10/22 - 22:19:03 | 200 | 0s | 127.0.0.1 | GET "/api/version"
[GIN] 2025/10/22 - 22:19:03 | 200 | 0s | 127.0.0.1 | GET "/api/version"
[GIN] 2025/10/22 - 22:19:03 | 200 | 41.7853ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/10/22 - 22:19:04 | 200 | 1.000274s | 127.0.0.1 | POST "/api/show"
[GIN] 2025/10/22 - 22:19:13 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/10/22 - 22:19:13 | 200 | 9.9911ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/10/22 - 22:19:25 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/10/22 - 22:19:25 | 200 | 168.6845ms | 127.0.0.1 | POST "/api/show"
llama_model_loader: loaded meta data with 35 key-value pairs and 147 tensors from C:\Users\Asus\.ollama\models\blobs\sha256-fa0390e7c043f89ae1847bd6682d748041a99d4ef3de0e0b27d33b6af97a8be8 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Llama 3.2 1B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Llama-3.2
llama_model_loader: - kv 5: general.size_label str = 1B
llama_model_loader: - kv 6: general.license str = llama3.2
llama_model_loader: - kv 7: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv 8: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv 9: llama.block_count u32 = 16
llama_model_loader: - kv 10: llama.context_length u32 = 131072
llama_model_loader: - kv 11: llama.embedding_length u32 = 2048
llama_model_loader: - kv 12: llama.feed_forward_length u32 = 8192
llama_model_loader: - kv 13: llama.attention.head_count u32 = 32
llama_model_loader: - kv 14: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 15: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 16: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 17: llama.attention.key_length u32 = 64
llama_model_loader: - kv 18: llama.attention.value_length u32 = 64
llama_model_loader: - kv 19: general.file_type u32 = 2
llama_model_loader: - kv 20: llama.vocab_size u32 = 128256
llama_model_loader: - kv 21: llama.rope.dimension_count u32 = 64
llama_model_loader: - kv 22: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 23: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 24: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 25: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 26: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 27: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 29: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv 30: general.quantization_version u32 = 2
llama_model_loader: - kv 31: quantize.imatrix.file str = /models_out/Llama-3.2-1B-Instruct-GGU...
llama_model_loader: - kv 32: quantize.imatrix.dataset str = /training_dir/calibration_datav3.txt
llama_model_loader: - kv 33: quantize.imatrix.entries_count i32 = 112
llama_model_loader: - kv 34: quantize.imatrix.chunks_count i32 = 125
llama_model_loader: - type f32: 34 tensors
llama_model_loader: - type q4_0: 110 tensors
llama_model_loader: - type q4_1: 2 tensors
llama_model_loader: - type q6_K: 1 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_0
print_info: file size = 729.75 MiB (4.95 BPW)
load: printing all EOG tokens:
load: - 128001 ('<|end_of_text|>')
load: - 128008 ('<|eom_id|>')
load: - 128009 ('<|eot_id|>')
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch = llama
print_info: vocab_only = 1
print_info: model type = ?B
print_info: model params = 1.24 B
print_info: general.name = Llama 3.2 1B Instruct
print_info: vocab type = BPE
print_info: n_vocab = 128256
print_info: n_merges = 280147
print_info: BOS token = 128000 '<|begin_of_text|>'
print_info: EOS token = 128009 '<|eot_id|>'
print_info: EOT token = 128009 '<|eot_id|>'
print_info: EOM token = 128008 '<|eom_id|>'
print_info: LF token = 198 'Ċ'
print_info: EOG token = 128001 '<|end_of_text|>'
print_info: EOG token = 128008 '<|eom_id|>'
print_info: EOG token = 128009 '<|eot_id|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-10-22T22:19:26.601+02:00 level=INFO source=server.go:399 msg="starting runner" cmd="C:\Users\Asus\AppData\Local\Programs\Ollama\ollama.exe runner --model C:\Users\Asus\.ollama\models\blobs\sha256-fa0390e7c043f89ae1847bd6682d748041a99d4ef3de0e0b27d33b6af97a8be8 --port 2342"
time=2025-10-22T22:19:27.044+02:00 level=INFO source=server.go:504 msg="system memory" total="3.9 GiB" free="882.1 MiB" free_swap="2.5 GiB"
time=2025-10-22T22:19:27.046+02:00 level=INFO source=server.go:544 msg=offload library=cpu layers.requested=-1 layers.model=17 layers.offload=0 layers.split=[] memory.available="[929.8 MiB]" memory.gpu_overhead="0 B" memory.required.full="1.3 GiB" memory.required.partial="0 B" memory.required.kv="128.0 MiB" memory.required.allocations="[912.0 MiB]" memory.weights.total="729.7 MiB" memory.weights.repeating="524.2 MiB" memory.weights.nonrepeating="205.5 MiB" memory.graph.full="280.0 MiB" memory.graph.partial="464.0 MiB"
time=2025-10-22T22:19:27.146+02:00 level=INFO source=runner.go:864 msg="starting go runner"
load_backend: loaded CPU backend from C:\Users\Asus\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
time=2025-10-22T22:19:27.603+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang)
time=2025-10-22T22:19:27.606+02:00 level=INFO source=runner.go:900 msg="Server listening on 127.0.0.1:2342"
time=2025-10-22T22:19:27.611+02:00 level=INFO source=runner.go:799 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:2 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-22T22:19:27.612+02:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
time=2025-10-22T22:19:27.612+02:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: loaded meta data with 35 key-value pairs and 147 tensors from C:\Users\Asus\.ollama\models\blobs\sha256-fa0390e7c043f89ae1847bd6682d748041a99d4ef3de0e0b27d33b6af97a8be8 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Llama 3.2 1B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Llama-3.2
llama_model_loader: - kv 5: general.size_label str = 1B
llama_model_loader: - kv 6: general.license str = llama3.2
llama_model_loader: - kv 7: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv 8: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv 9: llama.block_count u32 = 16
llama_model_loader: - kv 10: llama.context_length u32 = 131072
llama_model_loader: - kv 11: llama.embedding_length u32 = 2048
llama_model_loader: - kv 12: llama.feed_forward_length u32 = 8192
llama_model_loader: - kv 13: llama.attention.head_count u32 = 32
llama_model_loader: - kv 14: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 15: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 16: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 17: llama.attention.key_length u32 = 64
llama_model_loader: - kv 18: llama.attention.value_length u32 = 64
llama_model_loader: - kv 19: general.file_type u32 = 2
llama_model_loader: - kv 20: llama.vocab_size u32 = 128256
llama_model_loader: - kv 21: llama.rope.dimension_count u32 = 64
llama_model_loader: - kv 22: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 23: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 24: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 25: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 26: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 27: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 29: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv 30: general.quantization_version u32 = 2
llama_model_loader: - kv 31: quantize.imatrix.file str = /models_out/Llama-3.2-1B-Instruct-GGU...
llama_model_loader: - kv 32: quantize.imatrix.dataset str = /training_dir/calibration_datav3.txt
llama_model_loader: - kv 33: quantize.imatrix.entries_count i32 = 112
llama_model_loader: - kv 34: quantize.imatrix.chunks_count i32 = 125
llama_model_loader: - type f32: 34 tensors
llama_model_loader: - type q4_0: 110 tensors
llama_model_loader: - type q4_1: 2 tensors
llama_model_loader: - type q6_K: 1 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_0
print_info: file size = 729.75 MiB (4.95 BPW)
load: printing all EOG tokens:
load: - 128001 ('<|end_of_text|>')
load: - 128008 ('<|eom_id|>')
load: - 128009 ('<|eot_id|>')
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch = llama
print_info: vocab_only = 0
print_info: n_ctx_train = 131072
print_info: n_embd = 2048
print_info: n_layer = 16
print_info: n_head = 32
print_info: n_head_kv = 8
print_info: n_rot = 64
print_info: n_swa = 0
print_info: is_swa_any = 0
print_info: n_embd_head_k = 64
print_info: n_embd_head_v = 64
print_info: n_gqa = 4
print_info: n_embd_k_gqa = 512
print_info: n_embd_v_gqa = 512
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-05
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 8192
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 0
print_info: rope scaling = linear
print_info: freq_base_train = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 131072
print_info: rope_finetuned = unknown
print_info: model type = 1B
print_info: model params = 1.24 B
print_info: general.name = Llama 3.2 1B Instruct
print_info: vocab type = BPE
print_info: n_vocab = 128256
print_info: n_merges = 280147
print_info: BOS token = 128000 '<|begin_of_text|>'
print_info: EOS token = 128009 '<|eot_id|>'
print_info: EOT token = 128009 '<|eot_id|>'
print_info: EOM token = 128008 '<|eom_id|>'
print_info: LF token = 198 'Ċ'
print_info: EOG token = 128001 '<|end_of_text|>'
print_info: EOG token = 128008 '<|eom_id|>'
print_info: EOG token = 128009 '<|eot_id|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: CPU model buffer size = 729.75 MiB
llama_context: constructing llama_context
llama_context: n_seq_max = 1
llama_context: n_ctx = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch = 512
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = 0
llama_context: kv_unified = false
llama_context: freq_base = 500000.0
llama_context: freq_scale = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context: CPU output buffer size = 0.50 MiB
llama_kv_cache_unified: CPU KV buffer size = 128.00 MiB
llama_kv_cache_unified: size = 128.00 MiB ( 4096 cells, 16 layers, 1/1 seqs), K (f16): 64.00 MiB, V (f16): 64.00 MiB
llama_context: CPU compute buffer size = 282.01 MiB
llama_context: graph nodes = 566
llama_context: graph splits = 1
time=2025-10-22T22:19:31.642+02:00 level=INFO source=server.go:1289 msg="llama runner started in 5.04 seconds"
time=2025-10-22T22:19:31.642+02:00 level=INFO source=sched.go:470 msg="loaded runners" count=1
time=2025-10-22T22:19:31.642+02:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
time=2025-10-22T22:19:31.877+02:00 level=INFO source=server.go:1289 msg="llama runner started in 5.28 seconds"
[GIN] 2025/10/22 - 22:19:31 | 200 | 6.5224222s | 127.0.0.1 | POST "/api/generate"


@guiksign commented on GitHub (Oct 22, 2025):

Log for the non-working version, 0.12.4:

time=2025-10-22T22:13:41.573+02:00 level=INFO source=routes.go:1479 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\Asus\.ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-10-22T22:13:42.003+02:00 level=INFO source=images.go:522 msg="total blobs: 32"
time=2025-10-22T22:13:42.019+02:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-10-22T22:13:42.035+02:00 level=INFO source=routes.go:1532 msg="Listening on 127.0.0.1:11434 (version 0.12.4)"
time=2025-10-22T22:13:42.042+02:00 level=INFO source=runner.go:80 msg="discovering available GPUs..."
time=2025-10-22T22:14:04.102+02:00 level=INFO source=types.go:129 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="3.9 GiB" available="767.9 MiB"
time=2025-10-22T22:14:04.103+02:00 level=INFO source=routes.go:1573 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"
[GIN] 2025/10/22 - 22:14:04 | 200 | 516.7µs | 127.0.0.1 | GET "/api/version"
[GIN] 2025/10/22 - 22:14:04 | 200 | 516.7µs | 127.0.0.1 | GET "/api/version"
[GIN] 2025/10/22 - 22:14:04 | 200 | 30.2196ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/10/22 - 22:14:04 | 200 | 296.3424ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/10/22 - 22:15:31 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/10/22 - 22:15:31 | 200 | 23.8776ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/10/22 - 22:15:41 | 200 | 27.4µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/10/22 - 22:15:41 | 200 | 196.933ms | 127.0.0.1 | POST "/api/show"
llama_model_loader: loaded meta data with 35 key-value pairs and 147 tensors from C:\Users\Asus\.ollama\models\blobs\sha256-fa0390e7c043f89ae1847bd6682d748041a99d4ef3de0e0b27d33b6af97a8be8 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Llama 3.2 1B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Llama-3.2
llama_model_loader: - kv 5: general.size_label str = 1B
llama_model_loader: - kv 6: general.license str = llama3.2
llama_model_loader: - kv 7: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv 8: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv 9: llama.block_count u32 = 16
llama_model_loader: - kv 10: llama.context_length u32 = 131072
llama_model_loader: - kv 11: llama.embedding_length u32 = 2048
llama_model_loader: - kv 12: llama.feed_forward_length u32 = 8192
llama_model_loader: - kv 13: llama.attention.head_count u32 = 32
llama_model_loader: - kv 14: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 15: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 16: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 17: llama.attention.key_length u32 = 64
llama_model_loader: - kv 18: llama.attention.value_length u32 = 64
llama_model_loader: - kv 19: general.file_type u32 = 2
llama_model_loader: - kv 20: llama.vocab_size u32 = 128256
llama_model_loader: - kv 21: llama.rope.dimension_count u32 = 64
llama_model_loader: - kv 22: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 23: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 24: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 25: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 26: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 27: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 29: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv 30: general.quantization_version u32 = 2
llama_model_loader: - kv 31: quantize.imatrix.file str = /models_out/Llama-3.2-1B-Instruct-GGU...
llama_model_loader: - kv 32: quantize.imatrix.dataset str = /training_dir/calibration_datav3.txt
llama_model_loader: - kv 33: quantize.imatrix.entries_count i32 = 112
llama_model_loader: - kv 34: quantize.imatrix.chunks_count i32 = 125
llama_model_loader: - type f32: 34 tensors
llama_model_loader: - type q4_0: 110 tensors
llama_model_loader: - type q4_1: 2 tensors
llama_model_loader: - type q6_K: 1 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_0
print_info: file size = 729.75 MiB (4.95 BPW)
load: printing all EOG tokens:
load: - 128001 ('<|end_of_text|>')
load: - 128008 ('<|eom_id|>')
load: - 128009 ('<|eot_id|>')
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch = llama
print_info: vocab_only = 1
print_info: model type = ?B
print_info: model params = 1.24 B
print_info: general.name = Llama 3.2 1B Instruct
print_info: vocab type = BPE
print_info: n_vocab = 128256
print_info: n_merges = 280147
print_info: BOS token = 128000 '<|begin_of_text|>'
print_info: EOS token = 128009 '<|eot_id|>'
print_info: EOT token = 128009 '<|eot_id|>'
print_info: EOM token = 128008 '<|eom_id|>'
print_info: LF token = 198 'Ċ'
print_info: EOG token = 128001 '<|end_of_text|>'
print_info: EOG token = 128008 '<|eom_id|>'
print_info: EOG token = 128009 '<|eot_id|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors


@dhiltgen commented on GitHub (Oct 22, 2025):

@guiksign it looks like you hit #12699 which will be fixed in the next release.


@guiksign commented on GitHub (Oct 23, 2025):

Hey @dhiltgen, thank you for the heads-up.


@guiksign commented on GitHub (Oct 23, 2025):

Thank you as well, @rick-github. I was so proud to have a GPU; I had not understood that, from Ollama's perspective, I don't have one. Maybe time to invest...

Reference: github-starred/ollama#70467