[GH-ISSUE #10406] Upgrade Ollama Error #32598

Closed
opened 2026-04-22 14:03:39 -05:00 by GiteaMirror · 11 comments
Owner

Originally created by @MonsieurMa on GitHub (Apr 25, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10406

What is the issue?

Steps to Reproduce
1. Upgrade Ollama on Windows 10.
2. Run `ollama list` and confirm that the model is still listed.
3. Try to run the model with `ollama run <model_name>`.
4. The model fails to start, and the error message `Error: llama runner process has terminated: exit status 2` is displayed.
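For clarity, the steps above condense to the following session. The model tag is an example only: the log shows a "DeepSeek R1 Distill Qwen 14B" blob being loaded, but the exact tag the reporter pulled is not stated in the issue.

```shell
# Assumes Ollama 0.6.6 is installed on Windows 10 and the server is
# listening on the default 127.0.0.1:11434.
ollama list                 # confirm the model survived the upgrade
ollama run deepseek-r1:14b  # example tag; fails with
                            # "Error: llama runner process has terminated: exit status 2"
```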

Relevant log output

2025/04/25 18:23:31 routes.go:1232: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:D:\\Ollama\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-04-25T18:23:31.028+08:00 level=INFO source=images.go:458 msg="total blobs: 5"
time=2025-04-25T18:23:31.028+08:00 level=INFO source=images.go:465 msg="total unused blobs removed: 0"
time=2025-04-25T18:23:31.029+08:00 level=INFO source=routes.go:1299 msg="Listening on 127.0.0.1:11434 (version 0.6.6)"
time=2025-04-25T18:23:31.029+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-04-25T18:23:31.029+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-04-25T18:23:31.029+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=8 efficiency=0 threads=8
time=2025-04-25T18:23:31.187+08:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-ff007008-7612-a6c3-c04e-bf292c9067e9 library=cuda compute=8.6 driver=12.8 name="NVIDIA GeForce RTX 3090" overhead="630.5 MiB"
time=2025-04-25T18:23:31.189+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-ff007008-7612-a6c3-c04e-bf292c9067e9 library=cuda variant=v12 compute=8.6 driver=12.8 name="NVIDIA GeForce RTX 3090" total="24.0 GiB" available="22.8 GiB"
[GIN] 2025/04/25 - 18:27:19 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/04/25 - 18:27:33 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/04/25 - 18:27:33 | 200 |      1.0336ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/04/25 - 18:27:43 | 200 |            0s |       127.0.0.1 | HEAD     "/"
time=2025-04-25T18:27:44.005+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-25T18:27:44.020+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/04/25 - 18:27:44 | 200 |     30.8879ms |       127.0.0.1 | POST     "/api/show"
time=2025-04-25T18:27:44.055+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-25T18:27:44.085+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-25T18:27:44.099+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-25T18:27:44.100+08:00 level=WARN source=ggml.go:152 msg="key not found" key=qwen2.vision.block_count default=0
time=2025-04-25T18:27:44.100+08:00 level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-04-25T18:27:44.100+08:00 level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-04-25T18:27:44.101+08:00 level=INFO source=sched.go:722 msg="new model will fit in available VRAM in single GPU, loading" model=D:\Ollama\.ollama\models\blobs\sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e gpu=GPU-ff007008-7612-a6c3-c04e-bf292c9067e9 parallel=4 available=24322859008 required="10.8 GiB"
time=2025-04-25T18:27:44.120+08:00 level=INFO source=server.go:105 msg="system memory" total="127.9 GiB" free="119.9 GiB" free_swap="126.4 GiB"
time=2025-04-25T18:27:44.120+08:00 level=WARN source=ggml.go:152 msg="key not found" key=qwen2.vision.block_count default=0
time=2025-04-25T18:27:44.120+08:00 level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-04-25T18:27:44.120+08:00 level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-04-25T18:27:44.121+08:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[22.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="10.8 GiB" memory.required.partial="10.8 GiB" memory.required.kv="1.5 GiB" memory.required.allocations="[10.8 GiB]" memory.weights.total="8.0 GiB" memory.weights.repeating="7.4 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="676.0 MiB" memory.graph.partial="916.1 MiB"
llama_model_loader: loaded meta data with 26 key-value pairs and 579 tensors from D:\Ollama\.ollama\models\blobs\sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Qwen 14B
llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv   4:                         general.size_label str              = 14B
llama_model_loader: - kv   5:                          qwen2.block_count u32              = 48
llama_model_loader: - kv   6:                       qwen2.context_length u32              = 131072
llama_model_loader: - kv   7:                     qwen2.embedding_length u32              = 5120
llama_model_loader: - kv   8:                  qwen2.feed_forward_length u32              = 13824
llama_model_loader: - kv   9:                 qwen2.attention.head_count u32              = 40
llama_model_loader: - kv  10:              qwen2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  12:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  13:                          general.file_type u32              = 15
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  15:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 151646
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151643
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  241 tensors
llama_model_loader: - type q4_K:  289 tensors
llama_model_loader: - type q6_K:   49 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 8.37 GiB (4.87 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch             = qwen2
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 14.77 B
print_info: general.name     = DeepSeek R1 Distill Qwen 14B
print_info: vocab type       = BPE
print_info: n_vocab          = 152064
print_info: n_merges         = 151387
print_info: BOS token        = 151646 '<|begin▁of▁sentence|>'
print_info: EOS token        = 151643 '<|end▁of▁sentence|>'
print_info: EOT token        = 151643 '<|end▁of▁sentence|>'
print_info: PAD token        = 151643 '<|end▁of▁sentence|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|end▁of▁sentence|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-04-25T18:27:44.367+08:00 level=INFO source=server.go:405 msg="starting llama server" cmd="C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model D:\\Ollama\\.ollama\\models\\blobs\\sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e --ctx-size 8192 --batch-size 512 --n-gpu-layers 49 --threads 8 --no-mmap --parallel 4 --port 56007"
time=2025-04-25T18:27:44.376+08:00 level=INFO source=sched.go:451 msg="loaded runners" count=1
time=2025-04-25T18:27:44.376+08:00 level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
time=2025-04-25T18:27:44.376+08:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
time=2025-04-25T18:27:44.433+08:00 level=INFO source=runner.go:853 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
load_backend: loaded CUDA backend from C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-04-25T18:27:44.592+08:00 level=INFO source=ggml.go:109 msg=system CPU.0.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-04-25T18:27:44.593+08:00 level=INFO source=runner.go:913 msg="Server listening on 127.0.0.1:56007"
time=2025-04-25T18:27:44.627+08:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23306 MiB free
llama_model_loader: loaded meta data with 26 key-value pairs and 579 tensors from D:\Ollama\.ollama\models\blobs\sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Qwen 14B
llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv   4:                         general.size_label str              = 14B
llama_model_loader: - kv   5:                          qwen2.block_count u32              = 48
llama_model_loader: - kv   6:                       qwen2.context_length u32              = 131072
llama_model_loader: - kv   7:                     qwen2.embedding_length u32              = 5120
llama_model_loader: - kv   8:                  qwen2.feed_forward_length u32              = 13824
llama_model_loader: - kv   9:                 qwen2.attention.head_count u32              = 40
llama_model_loader: - kv  10:              qwen2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  12:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  13:                          general.file_type u32              = 15
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  15:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 151646
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151643
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  241 tensors
llama_model_loader: - type q4_K:  289 tensors
llama_model_loader: - type q6_K:   49 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 8.37 GiB (4.87 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch             = qwen2
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 5120
print_info: n_layer          = 48
print_info: n_head           = 40
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_swa_pattern    = 1
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 5
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 13824
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 14B
print_info: model params     = 14.77 B
print_info: general.name     = DeepSeek R1 Distill Qwen 14B
print_info: vocab type       = BPE
print_info: n_vocab          = 152064
print_info: n_merges         = 151387
print_info: BOS token        = 151646 '<|begin▁of▁sentence|>'
print_info: EOS token        = 151643 '<|end▁of▁sentence|>'
print_info: EOT token        = 151643 '<|end▁of▁sentence|>'
print_info: PAD token        = 151643 '<|end▁of▁sentence|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|end▁of▁sentence|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
Exception 0xc0000005 0x0 0x21 0x7ff6c0faa432
PC=0x7ff6c0faa432
signal arrived during external code execution

runtime.cgocall(0x7ff6c0e00c50, 0xc000475be0)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/cgocall.go:167 +0x3e fp=0xc000475bb8 sp=0xc000475b50 pc=0x7ff6c017259e
github.com/ollama/ollama/llama._Cfunc_llama_model_load_from_file(0x13c70fafd00, {0x0, 0x0, 0x31, 0x1, 0x0, 0x0, 0x7ff6c0e00420, 0xc0002e6000, 0x0, ...})
	_cgo_gotypes.go:813 +0x51 fp=0xc000475be0 sp=0xc000475bb8 pc=0x7ff6c0525231
github.com/ollama/ollama/llama.LoadModelFromFile.func1(...)
	C:/a/ollama/ollama/llama/llama.go:244
github.com/ollama/ollama/llama.LoadModelFromFile({0xc00003e150, 0x66}, {0x31, 0x0, 0x0, 0x0, {0x0, 0x0, 0x0}, 0xc000046950, ...})
	C:/a/ollama/ollama/llama/llama.go:244 +0x3aa fp=0xc000475dc8 sp=0xc000475be0 pc=0x7ff6c05281ca
github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc0004e4000, {0x31, 0x0, 0x0, 0x0, {0x0, 0x0, 0x0}, 0xc000046950, 0x0}, ...)
	C:/a/ollama/ollama/runner/llamarunner/runner.go:771 +0x9b fp=0xc000475f10 sp=0xc000475dc8 pc=0x7ff6c05d9abb
github.com/ollama/ollama/runner/llamarunner.Execute.gowrap1()
	C:/a/ollama/ollama/runner/llamarunner/runner.go:887 +0xda fp=0xc000475fe0 sp=0xc000475f10 pc=0x7ff6c05db29a
runtime.goexit({})
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc000475fe8 sp=0xc000475fe0 pc=0x7ff6c017d161
created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
	C:/a/ollama/ollama/runner/llamarunner/runner.go:887 +0xbd7

goroutine 1 gp=0xc0000021c0 m=nil [IO wait]:
runtime.gopark(0x7ff6c017e960?, 0x7ff6c1df1f60?, 0x20?, 0x80?, 0xc0004e80cc?)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:435 +0xce fp=0xc00011f4b0 sp=0xc00011f490 pc=0x7ff6c017596e
runtime.netpollblock(0x380?, 0xc01103e6?, 0xf6?)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/netpoll.go:575 +0xf7 fp=0xc00011f4e8 sp=0xc00011f4b0 pc=0x7ff6c013b817
internal/poll.runtime_pollWait(0x13c70e3db30, 0x72)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/netpoll.go:351 +0x85 fp=0xc00011f508 sp=0xc00011f4e8 pc=0x7ff6c0174b05
internal/poll.(*pollDesc).wait(0x7ff6c0209813?, 0x7ff6c0121ef6?, 0x0)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00011f530 sp=0xc00011f508 pc=0x7ff6c020ae07
internal/poll.execIO(0xc0004e8020, 0xc00011f5d8)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/internal/poll/fd_windows.go:177 +0x105 fp=0xc00011f5a8 sp=0xc00011f530 pc=0x7ff6c020c265
internal/poll.(*FD).acceptOne(0xc0004e8008, 0x378, {0xc000128000?, 0xc00011f638?, 0x7ff6c017b177?}, 0xc00011f678?)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/internal/poll/fd_windows.go:946 +0x65 fp=0xc00011f608 sp=0xc00011f5a8 pc=0x7ff6c02107e5
internal/poll.(*FD).Accept(0xc0004e8008, 0xc00011f7b8)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/internal/poll/fd_windows.go:980 +0x1b6 fp=0xc00011f6c0 sp=0xc00011f608 pc=0x7ff6c0210b16
net.(*netFD).accept(0xc0004e8008)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/net/fd_windows.go:182 +0x4b fp=0xc00011f7d8 sp=0xc00011f6c0 pc=0x7ff6c0281f2b
net.(*TCPListener).accept(0xc0003d4080)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/net/tcpsock_posix.go:159 +0x1b fp=0xc00011f828 sp=0xc00011f7d8 pc=0x7ff6c0297f7b
net.(*TCPListener).Accept(0xc0003d4080)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/net/tcpsock.go:380 +0x30 fp=0xc00011f858 sp=0xc00011f828 pc=0x7ff6c0296d30
net/http.(*onceCloseListener).Accept(0xc00010c000?)
	<autogenerated>:1 +0x24 fp=0xc00011f870 sp=0xc00011f858 pc=0x7ff6c04b0004
net/http.(*Server).Serve(0xc0004e6100, {0x7ff6c14c7580, 0xc0003d4080})
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/net/http/server.go:3424 +0x30c fp=0xc00011f9a0 sp=0xc00011f870 pc=0x7ff6c04878cc
github.com/ollama/ollama/runner/llamarunner.Execute({0xc0000a8020, 0xf, 0x1e})
	C:/a/ollama/ollama/runner/llamarunner/runner.go:914 +0x108a fp=0xc00011fd08 sp=0xc00011f9a0 pc=0x7ff6c05daeca
github.com/ollama/ollama/runner.Execute({0xc0000a8010?, 0x0?, 0x0?})
	C:/a/ollama/ollama/runner/runner.go:22 +0xd4 fp=0xc00011fd30 sp=0xc00011fd08 pc=0x7ff6c0643614
github.com/ollama/ollama/cmd.NewCLI.func2(0xc0000a7200?, {0x7ff6c12f5d04?, 0x4?, 0x7ff6c12f5d08?})
	C:/a/ollama/ollama/cmd/cmd.go:1365 +0x45 fp=0xc00011fd58 sp=0xc00011fd30 pc=0x7ff6c0d93be5
github.com/spf13/cobra.(*Command).execute(0xc000453508, {0xc0003d03c0, 0xf, 0xf})
	C:/Users/runneradmin/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940 +0x85c fp=0xc00011fe78 sp=0xc00011fd58 pc=0x7ff6c02fc9fc
github.com/spf13/cobra.(*Command).ExecuteC(0xc0004c6908)
	C:/Users/runneradmin/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3a5 fp=0xc00011ff30 sp=0xc00011fe78 pc=0x7ff6c02fd245
github.com/spf13/cobra.(*Command).Execute(...)
	C:/Users/runneradmin/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(...)
	C:/Users/runneradmin/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:985
main.main()
	C:/a/ollama/ollama/main.go:12 +0x4d fp=0xc00011ff50 sp=0xc00011ff30 pc=0x7ff6c0d93f4d
runtime.main()
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:283 +0x27d fp=0xc00011ffe0 sp=0xc00011ff50 pc=0x7ff6c01447fd
runtime.goexit({})
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc00011ffe8 sp=0xc00011ffe0 pc=0x7ff6c017d161

goroutine 2 gp=0xc0000028c0 m=nil [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:435 +0xce fp=0xc000071fa8 sp=0xc000071f88 pc=0x7ff6c017596e
runtime.goparkunlock(...)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:441
runtime.forcegchelper()
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:348 +0xb8 fp=0xc000071fe0 sp=0xc000071fa8 pc=0x7ff6c0144b18
runtime.goexit({})
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc000071fe8 sp=0xc000071fe0 pc=0x7ff6c017d161
created by runtime.init.7 in goroutine 1
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:336 +0x1a

goroutine 3 gp=0xc000002c40 m=nil [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:435 +0xce fp=0xc000073f80 sp=0xc000073f60 pc=0x7ff6c017596e
runtime.goparkunlock(...)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:441
runtime.bgsweep(0xc000080000)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgcsweep.go:316 +0xdf fp=0xc000073fc8 sp=0xc000073f80 pc=0x7ff6c012d77f
runtime.gcenable.gowrap1()
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:204 +0x25 fp=0xc000073fe0 sp=0xc000073fc8 pc=0x7ff6c0121b45
runtime.goexit({})
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc000073fe8 sp=0xc000073fe0 pc=0x7ff6c017d161
created by runtime.gcenable in goroutine 1
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:204 +0x66

goroutine 4 gp=0xc000002e00 m=nil [GC scavenge wait]:
runtime.gopark(0x10000?, 0x7ff6c14b5240?, 0x0?, 0x0?, 0x0?)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:435 +0xce fp=0xc000087f78 sp=0xc000087f58 pc=0x7ff6c017596e
runtime.goparkunlock(...)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:441
runtime.(*scavengerState).park(0x7ff6c1e185c0)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgcscavenge.go:425 +0x49 fp=0xc000087fa8 sp=0xc000087f78 pc=0x7ff6c012b1c9
runtime.bgscavenge(0xc000080000)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgcscavenge.go:658 +0x59 fp=0xc000087fc8 sp=0xc000087fa8 pc=0x7ff6c012b759
runtime.gcenable.gowrap2()
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:205 +0x25 fp=0xc000087fe0 sp=0xc000087fc8 pc=0x7ff6c0121ae5
runtime.goexit({})
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc000087fe8 sp=0xc000087fe0 pc=0x7ff6c017d161
created by runtime.gcenable in goroutine 1
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:205 +0xa5

goroutine 5 gp=0xc000003340 m=nil [finalizer wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:435 +0xce fp=0xc000089e30 sp=0xc000089e10 pc=0x7ff6c017596e
runtime.runfinq()
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mfinal.go:196 +0x107 fp=0xc000089fe0 sp=0xc000089e30 pc=0x7ff6c0120ac7
runtime.goexit({})
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc000089fe8 sp=0xc000089fe0 pc=0x7ff6c017d161
created by runtime.createfing in goroutine 1
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mfinal.go:166 +0x3d

goroutine 6 gp=0xc000003dc0 m=nil [chan receive]:
runtime.gopark(0xc0001ef540?, 0xc000508018?, 0x60?, 0x5f?, 0x7ff6c026af68?)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:435 +0xce fp=0xc000075f18 sp=0xc000075ef8 pc=0x7ff6c017596e
runtime.chanrecv(0xc00003e3f0, 0x0, 0x1)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/chan.go:664 +0x445 fp=0xc000075f90 sp=0xc000075f18 pc=0x7ff6c0112d25
runtime.chanrecv1(0x7ff6c0144960?, 0xc000075f76?)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/chan.go:506 +0x12 fp=0xc000075fb8 sp=0xc000075f90 pc=0x7ff6c01128b2
runtime.unique_runtime_registerUniqueMapCleanup.func2(...)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1796
runtime.unique_runtime_registerUniqueMapCleanup.gowrap1()
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1799 +0x2f fp=0xc000075fe0 sp=0xc000075fb8 pc=0x7ff6c0124d6f
runtime.goexit({})
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc000075fe8 sp=0xc000075fe0 pc=0x7ff6c017d161
created by unique.runtime_registerUniqueMapCleanup in goroutine 1
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1794 +0x85

goroutine 7 gp=0xc0003da540 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:435 +0xce fp=0xc000083f38 sp=0xc000083f18 pc=0x7ff6c017596e
runtime.gcBgMarkWorker(0xc00003f9d0)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xe9 fp=0xc000083fc8 sp=0xc000083f38 pc=0x7ff6c0124069
runtime.gcBgMarkStartWorkers.gowrap1()
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x25 fp=0xc000083fe0 sp=0xc000083fc8 pc=0x7ff6c0123f45
runtime.goexit({})
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc000083fe8 sp=0xc000083fe0 pc=0x7ff6c017d161
created by runtime.gcBgMarkStartWorkers in goroutine 1
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x105

goroutine 18 gp=0xc0001061c0 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:435 +0xce fp=0xc000113f38 sp=0xc000113f18 pc=0x7ff6c017596e
runtime.gcBgMarkWorker(0xc00003f9d0)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xe9 fp=0xc000113fc8 sp=0xc000113f38 pc=0x7ff6c0124069
runtime.gcBgMarkStartWorkers.gowrap1()
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x25 fp=0xc000113fe0 sp=0xc000113fc8 pc=0x7ff6c0123f45
runtime.goexit({})
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc000113fe8 sp=0xc000113fe0 pc=0x7ff6c017d161
created by runtime.gcBgMarkStartWorkers in goroutine 1
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x105

goroutine 34 gp=0xc000484000 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:435 +0xce fp=0xc00010ff38 sp=0xc00010ff18 pc=0x7ff6c017596e
runtime.gcBgMarkWorker(0xc00003f9d0)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xe9 fp=0xc00010ffc8 sp=0xc00010ff38 pc=0x7ff6c0124069
runtime.gcBgMarkStartWorkers.gowrap1()
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x25 fp=0xc00010ffe0 sp=0xc00010ffc8 pc=0x7ff6c0123f45
runtime.goexit({})
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc00010ffe8 sp=0xc00010ffe0 pc=0x7ff6c017d161
created by runtime.gcBgMarkStartWorkers in goroutine 1
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x105

goroutine 8 gp=0xc0003da700 m=nil [GC worker (idle)]:
runtime.gopark(0x24dcb3c1fd4?, 0x0?, 0x0?, 0x0?, 0x0?)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:435 +0xce fp=0xc000085f38 sp=0xc000085f18 pc=0x7ff6c017596e
runtime.gcBgMarkWorker(0xc00003f9d0)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xe9 fp=0xc000085fc8 sp=0xc000085f38 pc=0x7ff6c0124069
runtime.gcBgMarkStartWorkers.gowrap1()
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x25 fp=0xc000085fe0 sp=0xc000085fc8 pc=0x7ff6c0123f45
runtime.goexit({})
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc000085fe8 sp=0xc000085fe0 pc=0x7ff6c017d161
created by runtime.gcBgMarkStartWorkers in goroutine 1
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x105

goroutine 19 gp=0xc000106380 m=nil [GC worker (idle)]:
runtime.gopark(0x24dcb3c1fd4?, 0x0?, 0x0?, 0x0?, 0x0?)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:435 +0xce fp=0xc000115f38 sp=0xc000115f18 pc=0x7ff6c017596e
runtime.gcBgMarkWorker(0xc00003f9d0)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xe9 fp=0xc000115fc8 sp=0xc000115f38 pc=0x7ff6c0124069
runtime.gcBgMarkStartWorkers.gowrap1()
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x25 fp=0xc000115fe0 sp=0xc000115fc8 pc=0x7ff6c0123f45
runtime.goexit({})
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc000115fe8 sp=0xc000115fe0 pc=0x7ff6c017d161
created by runtime.gcBgMarkStartWorkers in goroutine 1
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x105

goroutine 35 gp=0xc0004841c0 m=nil [GC worker (idle)]:
runtime.gopark(0x24dcb3c1fd4?, 0x0?, 0x0?, 0x0?, 0x0?)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:435 +0xce fp=0xc000111f38 sp=0xc000111f18 pc=0x7ff6c017596e
runtime.gcBgMarkWorker(0xc00003f9d0)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xe9 fp=0xc000111fc8 sp=0xc000111f38 pc=0x7ff6c0124069
runtime.gcBgMarkStartWorkers.gowrap1()
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x25 fp=0xc000111fe0 sp=0xc000111fc8 pc=0x7ff6c0123f45
runtime.goexit({})
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc000111fe8 sp=0xc000111fe0 pc=0x7ff6c017d161
created by runtime.gcBgMarkStartWorkers in goroutine 1
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x105

goroutine 9 gp=0xc0003da8c0 m=nil [GC worker (idle)]:
runtime.gopark(0x24dcb3c1fd4?, 0x0?, 0x0?, 0x0?, 0x0?)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:435 +0xce fp=0xc000477f38 sp=0xc000477f18 pc=0x7ff6c017596e
runtime.gcBgMarkWorker(0xc00003f9d0)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xe9 fp=0xc000477fc8 sp=0xc000477f38 pc=0x7ff6c0124069
runtime.gcBgMarkStartWorkers.gowrap1()
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x25 fp=0xc000477fe0 sp=0xc000477fc8 pc=0x7ff6c0123f45
runtime.goexit({})
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc000477fe8 sp=0xc000477fe0 pc=0x7ff6c017d161
created by runtime.gcBgMarkStartWorkers in goroutine 1
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x105

goroutine 20 gp=0xc000106540 m=nil [GC worker (idle)]:
runtime.gopark(0x24dcb3c1fd4?, 0x1?, 0xec?, 0xe3?, 0x0?)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:435 +0xce fp=0xc000473f38 sp=0xc000473f18 pc=0x7ff6c017596e
runtime.gcBgMarkWorker(0xc00003f9d0)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xe9 fp=0xc000473fc8 sp=0xc000473f38 pc=0x7ff6c0124069
runtime.gcBgMarkStartWorkers.gowrap1()
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x25 fp=0xc000473fe0 sp=0xc000473fc8 pc=0x7ff6c0123f45
runtime.goexit({})
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc000473fe8 sp=0xc000473fe0 pc=0x7ff6c017d161
created by runtime.gcBgMarkStartWorkers in goroutine 1
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x105

goroutine 37 gp=0xc0001068c0 m=nil [sync.WaitGroup.Wait]:
runtime.gopark(0x0?, 0x0?, 0x60?, 0xfe?, 0x0?)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:435 +0xce fp=0xc000479e18 sp=0xc000479df8 pc=0x7ff6c017596e
runtime.goparkunlock(...)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:441
runtime.semacquire1(0xc0004e4008, 0x0, 0x1, 0x0, 0x18)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/sema.go:188 +0x22f fp=0xc000479e80 sp=0xc000479e18 pc=0x7ff6c0156f2f
sync.runtime_SemacquireWaitGroup(0x0?)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/sema.go:110 +0x25 fp=0xc000479eb8 sp=0xc000479e80 pc=0x7ff6c0177045
sync.(*WaitGroup).Wait(0x0?)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/sync/waitgroup.go:118 +0x48 fp=0xc000479ee0 sp=0xc000479eb8 pc=0x7ff6c018b028
github.com/ollama/ollama/runner/llamarunner.(*Server).run(0xc0004e4000, {0x7ff6c14c9850, 0xc000690730})
	C:/a/ollama/ollama/runner/llamarunner/runner.go:317 +0x47 fp=0xc000479fb8 sp=0xc000479ee0 pc=0x7ff6c05d65e7
github.com/ollama/ollama/runner/llamarunner.Execute.gowrap2()
	C:/a/ollama/ollama/runner/llamarunner/runner.go:894 +0x28 fp=0xc000479fe0 sp=0xc000479fb8 pc=0x7ff6c05db188
runtime.goexit({})
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc000479fe8 sp=0xc000479fe0 pc=0x7ff6c017d161
created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
	C:/a/ollama/ollama/runner/llamarunner/runner.go:894 +0xcb7

goroutine 21 gp=0xc0005061c0 m=nil [IO wait]:
runtime.gopark(0x0?, 0xc0000ba020?, 0xc8?, 0xa0?, 0xc0000ba0cc?)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:435 +0xce fp=0xc00004b8c8 sp=0xc00004b8a8 pc=0x7ff6c017596e
runtime.netpollblock(0x3a4?, 0xc01103e6?, 0xf6?)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/netpoll.go:575 +0xf7 fp=0xc00004b900 sp=0xc00004b8c8 pc=0x7ff6c013b817
internal/poll.runtime_pollWait(0x13c70e3da18, 0x72)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/netpoll.go:351 +0x85 fp=0xc00004b920 sp=0xc00004b900 pc=0x7ff6c0174b05
internal/poll.(*pollDesc).wait(0x13c70c61b78?, 0xc00004b970?, 0x0)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00004b948 sp=0xc00004b920 pc=0x7ff6c020ae07
internal/poll.execIO(0xc0000ba020, 0x7ff6c1368c60)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/internal/poll/fd_windows.go:177 +0x105 fp=0xc00004b9c0 sp=0xc00004b948 pc=0x7ff6c020c265
internal/poll.(*FD).Read(0xc0000ba008, {0xc0004ee000, 0x1000, 0x1000})
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/internal/poll/fd_windows.go:438 +0x29b fp=0xc00004ba60 sp=0xc00004b9c0 pc=0x7ff6c020cf3b
net.(*netFD).Read(0xc0000ba008, {0xc0004ee000?, 0xc00004bad0?, 0x7ff6c020b2c5?})
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/net/fd_posix.go:55 +0x25 fp=0xc00004baa8 sp=0xc00004ba60 pc=0x7ff6c0280045
net.(*conn).Read(0xc0000ec000, {0xc0004ee000?, 0x0?, 0x0?})
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/net/net.go:194 +0x45 fp=0xc00004baf0 sp=0xc00004baa8 pc=0x7ff6c028f525
net/http.(*connReader).Read(0xc000243170, {0xc0004ee000, 0x1000, 0x1000})
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/net/http/server.go:798 +0x159 fp=0xc00004bb40 sp=0xc00004baf0 pc=0x7ff6c047c779
bufio.(*Reader).fill(0xc0001081e0)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/bufio/bufio.go:113 +0x103 fp=0xc00004bb78 sp=0xc00004bb40 pc=0x7ff6c02a5d63
bufio.(*Reader).Peek(0xc0001081e0, 0x4)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/bufio/bufio.go:152 +0x53 fp=0xc00004bb98 sp=0xc00004bb78 pc=0x7ff6c02a5e93
net/http.(*conn).serve(0xc00010c000, {0x7ff6c14c9818, 0xc0002430e0})
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/net/http/server.go:2137 +0x785 fp=0xc00004bfb8 sp=0xc00004bb98 pc=0x7ff6c0482565
net/http.(*Server).Serve.gowrap3()
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/net/http/server.go:3454 +0x28 fp=0xc00004bfe0 sp=0xc00004bfb8 pc=0x7ff6c0487cc8
runtime.goexit({})
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc00004bfe8 sp=0xc00004bfe0 pc=0x7ff6c017d161
created by net/http.(*Server).Serve in goroutine 1
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/net/http/server.go:3454 +0x485
rax     0x1
rbx     0x7ffb36e9a040
rcx     0x1
rdx     0x2dbf3ff0c8
rdi     0x2
rsi     0x1
rbp     0x2dbf3ff228
rsp     0x2dbf3ff078
r8      0x0
r9      0x7ffbddc289b8
r10     0x80
r11     0x13c7efcac00
r12     0x13c305bdd70
r13     0x2dbf3ff1f8
r14     0x2dbf3ff1d8
r15     0x2dbf3ff160
rip     0x7ff6c0faa432
rflags  0x10202
cs      0x33
fs      0x53
gs      0x2b
time=2025-04-25T18:27:45.127+08:00 level=ERROR source=sched.go:457 msg="error loading llama server" error="llama runner process has terminated: exit status 2"
[GIN] 2025/04/25 - 18:27:45 | 500 |    1.0875474s |       127.0.0.1 | POST     "/api/generate"

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.6.6

C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:435 +0xce fp=0xc000115f38 sp=0xc000115f18 pc=0x7ff6c017596e runtime.gcBgMarkWorker(0xc00003f9d0) C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xe9 fp=0xc000115fc8 sp=0xc000115f38 pc=0x7ff6c0124069 runtime.gcBgMarkStartWorkers.gowrap1() C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x25 fp=0xc000115fe0 sp=0xc000115fc8 pc=0x7ff6c0123f45 runtime.goexit({}) C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc000115fe8 sp=0xc000115fe0 pc=0x7ff6c017d161 created by runtime.gcBgMarkStartWorkers in goroutine 1 C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x105 goroutine 35 gp=0xc0004841c0 m=nil [GC worker (idle)]: runtime.gopark(0x24dcb3c1fd4?, 0x0?, 0x0?, 0x0?, 0x0?) C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:435 +0xce fp=0xc000111f38 sp=0xc000111f18 pc=0x7ff6c017596e runtime.gcBgMarkWorker(0xc00003f9d0) C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xe9 fp=0xc000111fc8 sp=0xc000111f38 pc=0x7ff6c0124069 runtime.gcBgMarkStartWorkers.gowrap1() C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x25 fp=0xc000111fe0 sp=0xc000111fc8 pc=0x7ff6c0123f45 runtime.goexit({}) C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc000111fe8 sp=0xc000111fe0 pc=0x7ff6c017d161 created by runtime.gcBgMarkStartWorkers in goroutine 1 C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x105 goroutine 9 gp=0xc0003da8c0 m=nil [GC worker (idle)]: runtime.gopark(0x24dcb3c1fd4?, 0x0?, 0x0?, 0x0?, 0x0?) 
C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:435 +0xce fp=0xc000477f38 sp=0xc000477f18 pc=0x7ff6c017596e runtime.gcBgMarkWorker(0xc00003f9d0) C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xe9 fp=0xc000477fc8 sp=0xc000477f38 pc=0x7ff6c0124069 runtime.gcBgMarkStartWorkers.gowrap1() C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x25 fp=0xc000477fe0 sp=0xc000477fc8 pc=0x7ff6c0123f45 runtime.goexit({}) C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc000477fe8 sp=0xc000477fe0 pc=0x7ff6c017d161 created by runtime.gcBgMarkStartWorkers in goroutine 1 C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x105 goroutine 20 gp=0xc000106540 m=nil [GC worker (idle)]: runtime.gopark(0x24dcb3c1fd4?, 0x1?, 0xec?, 0xe3?, 0x0?) C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:435 +0xce fp=0xc000473f38 sp=0xc000473f18 pc=0x7ff6c017596e runtime.gcBgMarkWorker(0xc00003f9d0) C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xe9 fp=0xc000473fc8 sp=0xc000473f38 pc=0x7ff6c0124069 runtime.gcBgMarkStartWorkers.gowrap1() C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x25 fp=0xc000473fe0 sp=0xc000473fc8 pc=0x7ff6c0123f45 runtime.goexit({}) C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc000473fe8 sp=0xc000473fe0 pc=0x7ff6c017d161 created by runtime.gcBgMarkStartWorkers in goroutine 1 C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x105 goroutine 37 gp=0xc0001068c0 m=nil [sync.WaitGroup.Wait]: runtime.gopark(0x0?, 0x0?, 0x60?, 0xfe?, 0x0?) C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:435 +0xce fp=0xc000479e18 sp=0xc000479df8 pc=0x7ff6c017596e runtime.goparkunlock(...) 
C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:441 runtime.semacquire1(0xc0004e4008, 0x0, 0x1, 0x0, 0x18) C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/sema.go:188 +0x22f fp=0xc000479e80 sp=0xc000479e18 pc=0x7ff6c0156f2f sync.runtime_SemacquireWaitGroup(0x0?) C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/sema.go:110 +0x25 fp=0xc000479eb8 sp=0xc000479e80 pc=0x7ff6c0177045 sync.(*WaitGroup).Wait(0x0?) C:/hostedtoolcache/windows/go/1.24.0/x64/src/sync/waitgroup.go:118 +0x48 fp=0xc000479ee0 sp=0xc000479eb8 pc=0x7ff6c018b028 github.com/ollama/ollama/runner/llamarunner.(*Server).run(0xc0004e4000, {0x7ff6c14c9850, 0xc000690730}) C:/a/ollama/ollama/runner/llamarunner/runner.go:317 +0x47 fp=0xc000479fb8 sp=0xc000479ee0 pc=0x7ff6c05d65e7 github.com/ollama/ollama/runner/llamarunner.Execute.gowrap2() C:/a/ollama/ollama/runner/llamarunner/runner.go:894 +0x28 fp=0xc000479fe0 sp=0xc000479fb8 pc=0x7ff6c05db188 runtime.goexit({}) C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc000479fe8 sp=0xc000479fe0 pc=0x7ff6c017d161 created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1 C:/a/ollama/ollama/runner/llamarunner/runner.go:894 +0xcb7 goroutine 21 gp=0xc0005061c0 m=nil [IO wait]: runtime.gopark(0x0?, 0xc0000ba020?, 0xc8?, 0xa0?, 0xc0000ba0cc?) C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/proc.go:435 +0xce fp=0xc00004b8c8 sp=0xc00004b8a8 pc=0x7ff6c017596e runtime.netpollblock(0x3a4?, 0xc01103e6?, 0xf6?) 
C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/netpoll.go:575 +0xf7 fp=0xc00004b900 sp=0xc00004b8c8 pc=0x7ff6c013b817 internal/poll.runtime_pollWait(0x13c70e3da18, 0x72) C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/netpoll.go:351 +0x85 fp=0xc00004b920 sp=0xc00004b900 pc=0x7ff6c0174b05 internal/poll.(*pollDesc).wait(0x13c70c61b78?, 0xc00004b970?, 0x0) C:/hostedtoolcache/windows/go/1.24.0/x64/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00004b948 sp=0xc00004b920 pc=0x7ff6c020ae07 internal/poll.execIO(0xc0000ba020, 0x7ff6c1368c60) C:/hostedtoolcache/windows/go/1.24.0/x64/src/internal/poll/fd_windows.go:177 +0x105 fp=0xc00004b9c0 sp=0xc00004b948 pc=0x7ff6c020c265 internal/poll.(*FD).Read(0xc0000ba008, {0xc0004ee000, 0x1000, 0x1000}) C:/hostedtoolcache/windows/go/1.24.0/x64/src/internal/poll/fd_windows.go:438 +0x29b fp=0xc00004ba60 sp=0xc00004b9c0 pc=0x7ff6c020cf3b net.(*netFD).Read(0xc0000ba008, {0xc0004ee000?, 0xc00004bad0?, 0x7ff6c020b2c5?}) C:/hostedtoolcache/windows/go/1.24.0/x64/src/net/fd_posix.go:55 +0x25 fp=0xc00004baa8 sp=0xc00004ba60 pc=0x7ff6c0280045 net.(*conn).Read(0xc0000ec000, {0xc0004ee000?, 0x0?, 0x0?}) C:/hostedtoolcache/windows/go/1.24.0/x64/src/net/net.go:194 +0x45 fp=0xc00004baf0 sp=0xc00004baa8 pc=0x7ff6c028f525 net/http.(*connReader).Read(0xc000243170, {0xc0004ee000, 0x1000, 0x1000}) C:/hostedtoolcache/windows/go/1.24.0/x64/src/net/http/server.go:798 +0x159 fp=0xc00004bb40 sp=0xc00004baf0 pc=0x7ff6c047c779 bufio.(*Reader).fill(0xc0001081e0) C:/hostedtoolcache/windows/go/1.24.0/x64/src/bufio/bufio.go:113 +0x103 fp=0xc00004bb78 sp=0xc00004bb40 pc=0x7ff6c02a5d63 bufio.(*Reader).Peek(0xc0001081e0, 0x4) C:/hostedtoolcache/windows/go/1.24.0/x64/src/bufio/bufio.go:152 +0x53 fp=0xc00004bb98 sp=0xc00004bb78 pc=0x7ff6c02a5e93 net/http.(*conn).serve(0xc00010c000, {0x7ff6c14c9818, 0xc0002430e0}) C:/hostedtoolcache/windows/go/1.24.0/x64/src/net/http/server.go:2137 +0x785 fp=0xc00004bfb8 sp=0xc00004bb98 pc=0x7ff6c0482565 
net/http.(*Server).Serve.gowrap3() C:/hostedtoolcache/windows/go/1.24.0/x64/src/net/http/server.go:3454 +0x28 fp=0xc00004bfe0 sp=0xc00004bfb8 pc=0x7ff6c0487cc8 runtime.goexit({}) C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc00004bfe8 sp=0xc00004bfe0 pc=0x7ff6c017d161 created by net/http.(*Server).Serve in goroutine 1 C:/hostedtoolcache/windows/go/1.24.0/x64/src/net/http/server.go:3454 +0x485 rax 0x1 rbx 0x7ffb36e9a040 rcx 0x1 rdx 0x2dbf3ff0c8 rdi 0x2 rsi 0x1 rbp 0x2dbf3ff228 rsp 0x2dbf3ff078 r8 0x0 r9 0x7ffbddc289b8 r10 0x80 r11 0x13c7efcac00 r12 0x13c305bdd70 r13 0x2dbf3ff1f8 r14 0x2dbf3ff1d8 r15 0x2dbf3ff160 rip 0x7ff6c0faa432 rflags 0x10202 cs 0x33 fs 0x53 gs 0x2b
time=2025-04-25T18:27:45.127+08:00 level=ERROR source=sched.go:457 msg="error loading llama server" error="llama runner process has terminated: exit status 2"
[GIN] 2025/04/25 - 18:27:45 | 500 | 1.0875474s | 127.0.0.1 | POST "/api/generate"
```

### OS

Windows

### GPU

Nvidia

### CPU

Intel

### Ollama version

0.6.6
GiteaMirror added the bug label 2026-04-22 14:03:39 -05:00
Author
Owner

@keithjjones commented on GitHub (Apr 30, 2025):

+1, but on a Mac instead of Windows.

<!-- gh-comment-id:2842523433 -->

@shinjitumala commented on GitHub (May 1, 2025):

Same logs on Linux. Go issue?

<!-- gh-comment-id:2843991241 -->

@unecologeek commented on GitHub (May 2, 2025):

Same here on Ubuntu, using Qwen3:14b. When calling it with Cline (inside VSCode), the chat answers once, then it fails.

llama_context: n_ctx_per_seq (28768) < n_ctx_train (40960) -- the full capacity of the model will not be utilized
llama_context:        CPU  output buffer size =     0.60 MiB
init: kv_size = 28768, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 40, can_shift = 1
init:      CUDA0 KV buffer size =  2472.25 MiB
init:        CPU KV buffer size =  2022.75 MiB
llama_context: KV self size  = 4495.00 MiB, K (f16): 2247.50 MiB, V (f16): 2247.50 MiB
llama_context:      CUDA0 compute buffer size =  2502.25 MiB
llama_context:  CUDA_Host compute buffer size =    66.19 MiB
llama_context: graph nodes  = 1526
llama_context: graph splits = 238 (with bs=512), 39 (with bs=1)
time=2025-05-02T17:27:34.708+02:00 level=INFO source=server.go:619 msg="llama runner started in 25.85 seconds"
time=2025-05-02T17:29:09.296+02:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/02 - 17:29:39 | 200 |         2m31s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/05/02 - 17:32:50 | 200 |         3m41s |       127.0.0.1 | POST     "/api/chat"
time=2025-05-02T17:32:54.599+02:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/02 - 17:37:25 | 200 |         4m31s |       127.0.0.1 | POST     "/api/chat"
time=2025-05-02T17:39:29.423+02:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T17:41:30.425+02:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T17:43:32.441+02:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/02 - 17:43:59 | 200 |         4m30s |       127.0.0.1 | POST     "/api/chat"
time=2025-05-02T17:46:12.492+02:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
^C[GIN] 2025/05/02 - 17:46:29 | 200 |         4m58s |       127.0.0.1 | POST     "/api/chat"
time=2025-05-02T17:46:29.240+02:00 level=INFO source=server.go:741 msg="aborting completion request due to client closing the connection"
[GIN] 2025/05/02 - 17:46:29 | 200 |         2m56s |       127.0.0.1 | POST     "/api/chat"
time=2025-05-02T17:46:29.240+02:00 level=INFO source=server.go:741 msg="aborting completion request due to client closing the connection"
[GIN] 2025/05/02 - 17:46:29 | 200 | 16.766115747s |       127.0.0.1 | POST     "/api/chat"


<!-- gh-comment-id:2847560708 -->

@mtribiere commented on GitHub (May 3, 2025):

Same issue on Debian with the `qwen3:1.7b` model and Ollama `0.6.6`.

<!-- gh-comment-id:2848594331 -->

@micseydel commented on GitHub (May 8, 2025):

This may be related to https://github.com/ollama/ollama/issues/2023#issuecomment-2860601956

<!-- gh-comment-id:2863876604 -->

@rick-github commented on GitHub (May 12, 2025):

Unlikely to be related to prompt caching. Does upgrading to the most recent release help?

<!-- gh-comment-id:2871449094 -->

@MonsieurMa commented on GitHub (May 12, 2025):

Upgraded to the latest version; the issue is resolved. (Translated from Chinese: 升级到最新版,已解决)

<!-- gh-comment-id:2872045068 -->

@keithjjones commented on GitHub (May 12, 2025):

Is this fix in the latest version? I'm on 0.6.8.

I'm still seeing:

time=2025-05-12T10:34:01.541-04:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32

Thx!

<!-- gh-comment-id:2872937989 -->

@rick-github commented on GitHub (May 12, 2025):

`key not found` is not an error condition. A full server log will aid in debugging.

<!-- gh-comment-id:2872982628 -->
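[Editor's note] For context on why `key not found` is only a warning: metadata readers for formats like GGUF typically treat a missing key as a soft condition, logging at WARN level and substituting a default. A minimal Go sketch of that pattern (hypothetical names such as `metadataOrDefault`; this is not Ollama's actual implementation):

```go
package main

import (
	"fmt"
	"log/slog"
)

// metadataOrDefault looks up a metadata key and, if the key is absent,
// logs a warning and returns the supplied default instead of failing.
// This mirrors the shape of the "key not found" WARN lines in the logs.
func metadataOrDefault(kv map[string]uint32, key string, def uint32) uint32 {
	if v, ok := kv[key]; ok {
		return v
	}
	slog.Warn("key not found", "key", key, "default", def)
	return def
}

func main() {
	kv := map[string]uint32{"qwen2.block_count": 48}
	// Missing key: warns on stderr, then falls back to the default.
	fmt.Println(metadataOrDefault(kv, "general.alignment", 32)) // prints 32
	// Present key: returned directly, no warning.
	fmt.Println(metadataOrDefault(kv, "qwen2.block_count", 0)) // prints 48
}
```

In this pattern the warning is purely informational, which is why its presence in a log does not by itself indicate the crash cause.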

@keithjjones commented on GitHub (May 12, 2025):

Here are some logs for macOS. It sounds like it happens on Linux and Windows too, but I don't have those to test. There are other logs in this thread, above.

Several `key not found` messages appear in this log:

2025/05/12 11:17:39 routes.go:1233: INFO server config env="map[HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/Users/REDACTED/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false http_proxy: https_proxy: no_proxy:]"
time=2025-05-12T11:17:39.619-04:00 level=INFO source=images.go:463 msg="total blobs: 21"
time=2025-05-12T11:17:39.619-04:00 level=INFO source=images.go:470 msg="total unused blobs removed: 0"
time=2025-05-12T11:17:39.620-04:00 level=INFO source=routes.go:1300 msg="Listening on 127.0.0.1:11434 (version 0.6.8)"
time=2025-05-12T11:17:40.027-04:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=metal variant="" compute="" driver=0.0 name="" total="21.3 GiB" available="21.3 GiB"


time=2025-05-12T11:17:58.682-04:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-12T11:17:58.699-04:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-12T11:17:58.713-04:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-12T11:17:58.714-04:00 level=WARN source=ggml.go:152 msg="key not found" key=qwen2.vision.block_count default=0
time=2025-05-12T11:17:58.715-04:00 level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-05-12T11:17:58.715-04:00 level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-05-12T11:17:58.716-04:00 level=WARN source=ggml.go:152 msg="key not found" key=qwen2.vision.block_count default=0
time=2025-05-12T11:17:58.716-04:00 level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-05-12T11:17:58.716-04:00 level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-05-12T11:17:58.716-04:00 level=INFO source=sched.go:754 msg="new model will fit in available VRAM in single GPU, loading" model=/Users/REDACTED/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e gpu=0 parallel=1 available=22906503168 required="19.2 GiB"
time=2025-05-12T11:17:58.717-04:00 level=INFO source=server.go:106 msg="system memory" total="32.0 GiB" free="19.5 GiB" free_swap="0 B"
time=2025-05-12T11:17:58.717-04:00 level=WARN source=ggml.go:152 msg="key not found" key=qwen2.vision.block_count default=0
time=2025-05-12T11:17:58.717-04:00 level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-05-12T11:17:58.717-04:00 level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-05-12T11:17:58.717-04:00 level=INFO source=server.go:139 msg=offload library=metal layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[21.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="19.2 GiB" memory.required.partial="19.2 GiB" memory.required.kv="7.3 GiB" memory.required.allocations="[19.2 GiB]" memory.weights.total="8.0 GiB" memory.weights.repeating="7.4 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="3.1 GiB" memory.graph.partial="3.1 GiB"
llama_model_load_from_file_impl: using device Metal (Apple M2 Pro) - 21845 MiB free
llama_model_loader: loaded meta data with 26 key-value pairs and 579 tensors from /Users/REDACTED/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Qwen 14B
llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv   4:                         general.size_label str              = 14B
llama_model_loader: - kv   5:                          qwen2.block_count u32              = 48
llama_model_loader: - kv   6:                       qwen2.context_length u32              = 131072
llama_model_loader: - kv   7:                     qwen2.embedding_length u32              = 5120
llama_model_loader: - kv   8:                  qwen2.feed_forward_length u32              = 13824
llama_model_loader: - kv   9:                 qwen2.attention.head_count u32              = 40
llama_model_loader: - kv  10:              qwen2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  12:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  13:                          general.file_type u32              = 15
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  15:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 151646
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151643
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  241 tensors
llama_model_loader: - type q4_K:  289 tensors
llama_model_loader: - type q6_K:   49 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 8.37 GiB (4.87 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch             = qwen2
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 14.77 B
print_info: general.name     = DeepSeek R1 Distill Qwen 14B
print_info: vocab type       = BPE
print_info: n_vocab          = 152064
print_info: n_merges         = 151387
print_info: BOS token        = 151646 '<|begin▁of▁sentence|>'
print_info: EOS token        = 151643 '<|end▁of▁sentence|>'
print_info: EOT token        = 151643 '<|end▁of▁sentence|>'
print_info: PAD token        = 151643 '<|end▁of▁sentence|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|end▁of▁sentence|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-05-12T11:17:58.852-04:00 level=INFO source=server.go:410 msg="starting llama server" cmd="/opt/homebrew/Cellar/ollama/0.6.8/bin/ollama runner --model /Users/REDACTED/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e --ctx-size 40000 --batch-size 512 --n-gpu-layers 49 --threads 8 --parallel 1 --port 58384"
time=2025-05-12T11:17:58.855-04:00 level=INFO source=sched.go:452 msg="loaded runners" count=1
time=2025-05-12T11:17:58.855-04:00 level=INFO source=server.go:589 msg="waiting for llama runner to start responding"
time=2025-05-12T11:17:58.856-04:00 level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server not responding"
time=2025-05-12T11:17:58.921-04:00 level=INFO source=runner.go:853 msg="starting go runner"
time=2025-05-12T11:17:58.922-04:00 level=INFO source=ggml.go:103 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang)
time=2025-05-12T11:17:58.925-04:00 level=INFO source=runner.go:913 msg="Server listening on 127.0.0.1:58384"
llama_model_load_from_file_impl: using device Metal (Apple M2 Pro) - 21845 MiB free
llama_model_loader: loaded meta data with 26 key-value pairs and 579 tensors from /Users/REDACTED/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Qwen 14B
llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv   4:                         general.size_label str              = 14B
llama_model_loader: - kv   5:                          qwen2.block_count u32              = 48
llama_model_loader: - kv   6:                       qwen2.context_length u32              = 131072
llama_model_loader: - kv   7:                     qwen2.embedding_length u32              = 5120
llama_model_loader: - kv   8:                  qwen2.feed_forward_length u32              = 13824
llama_model_loader: - kv   9:                 qwen2.attention.head_count u32              = 40
llama_model_loader: - kv  10:              qwen2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  12:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  13:                          general.file_type u32              = 15
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  15:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 151646
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151643
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  241 tensors
llama_model_loader: - type q4_K:  289 tensors
llama_model_loader: - type q6_K:   49 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 8.37 GiB (4.87 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch             = qwen2
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 5120
print_info: n_layer          = 48
print_info: n_head           = 40
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_swa_pattern    = 1
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 5
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 13824
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 14B
print_info: model params     = 14.77 B
print_info: general.name     = DeepSeek R1 Distill Qwen 14B
print_info: vocab type       = BPE
print_info: n_vocab          = 152064
print_info: n_merges         = 151387
print_info: BOS token        = 151646 '<|begin▁of▁sentence|>'
print_info: EOS token        = 151643 '<|end▁of▁sentence|>'
print_info: EOT token        = 151643 '<|end▁of▁sentence|>'
print_info: PAD token        = 151643 '<|end▁of▁sentence|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|end▁of▁sentence|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
time=2025-05-12T11:17:59.108-04:00 level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server loading model"
load_tensors: offloading 48 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 49/49 layers to GPU
load_tensors:   CPU_Mapped model buffer size =   417.66 MiB
load_tensors: Metal_Mapped model buffer size =  8566.04 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 40000
llama_context: n_ctx_per_seq = 40000
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 1000000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (40000) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M2 Pro
ggml_metal_init: picking default device: Apple M2 Pro
ggml_metal_load_library: using embedded metal library
ggml_metal_init: GPU name:   Apple M2 Pro
ggml_metal_init: GPU family: MTLGPUFamilyApple8  (1008)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction   = true
ggml_metal_init: simdgroup matrix mul. = true
ggml_metal_init: has residency sets    = true
ggml_metal_init: has bfloat            = true
ggml_metal_init: use bfloat            = false
ggml_metal_init: hasUnifiedMemory      = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 22906.50 MB
ggml_metal_init: skipping kernel_get_rows_bf16                     (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32                   (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_1row              (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_l4                (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_bf16                  (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_bf16_f32                (not supported)
ggml_metal_init: skipping kernel_mul_mm_bf16_f32                   (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_bf16_f32                (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h64           (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h80           (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h96           (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h112          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h128          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h192          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk192_hv128   (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h256          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk576_hv512   (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h96       (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h128      (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h192      (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk192_hv128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h256      (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk576_hv512 (not supported)
ggml_metal_init: skipping kernel_cpy_f32_bf16                      (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_f32                      (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_bf16                     (not supported)
llama_context:        CPU  output buffer size =     0.60 MiB
init: kv_size = 40000, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 48, can_shift = 1
init:      Metal KV buffer size =  7500.00 MiB
llama_context: KV self size  = 7500.00 MiB, K (f16): 3750.00 MiB, V (f16): 3750.00 MiB
llama_context:      Metal compute buffer size =  3243.13 MiB
llama_context:        CPU compute buffer size =    88.13 MiB
llama_context: graph nodes  = 1782
llama_context: graph splits = 2
time=2025-05-12T11:18:00.867-04:00 level=INFO source=server.go:628 msg="llama runner started in 2.01 seconds"
[GIN] 2025/05/12 - 11:27:26 | 200 |    1.019708ms |       127.0.0.1 | HEAD     "/"
[GIN] 2025/05/12 - 11:27:26 | 200 |    1.622125ms |       127.0.0.1 | GET      "/api/ps"
[GIN] 2025/05/12 - 11:28:29 | 200 |        10m31s |       127.0.0.1 | POST     "/api/chat"
time=2025-05-12T11:28:29.677-04:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/12 - 11:29:44 | 200 |         1m15s |       127.0.0.1 | POST     "/api/chat"
time=2025-05-12T11:29:44.936-04:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
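The KV cache figure in the log above (`KV self size = 7500.00 MiB, K (f16): 3750.00 MiB, V (f16): 3750.00 MiB`) can be checked directly from the printed parameters. A minimal sketch of the arithmetic, using only values taken from the `print_info`/`llama_context` lines (this is illustrative bookkeeping, not Ollama's actual allocation code):

```go
package main

import "fmt"

func main() {
	// Values from the log: llama_context and print_info lines above.
	nCtx := 40000     // llama_context: n_ctx
	nLayer := 48      // print_info: n_layer
	nEmbdKGqa := 1024 // print_info: n_embd_k_gqa (= n_head_kv * n_embd_head_k = 8 * 128)
	elemSize := 2     // init: type_k = type_v = 'f16', i.e. 2 bytes per element

	// One f16 value per context position, per layer, per GQA embedding dim.
	kBytes := nCtx * nLayer * nEmbdKGqa * elemSize
	vBytes := kBytes // n_embd_v_gqa == n_embd_k_gqa here

	fmt.Printf("K: %.2f MiB, V: %.2f MiB, total: %.2f MiB\n",
		float64(kBytes)/(1<<20),
		float64(vBytes)/(1<<20),
		float64(kBytes+vBytes)/(1<<20))
	// Matches the log: K: 3750.00 MiB, V: 3750.00 MiB, total: 7500.00 MiB
}
```

This also explains why a 40000-token context costs so much memory here: the KV cache scales linearly with `n_ctx`, so dropping back to the default context length shrinks it proportionally.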

@rick-github commented on GitHub (May 12, 2025):

`key not found` is not an error condition. It's a notification that the model file didn't include that key and that a default value will be used instead.
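The fallback behaviour described here can be sketched as follows. This is a hypothetical `keyOrDefault` helper for illustration, not Ollama's actual `ggml.go` code: a missing metadata key produces a warning like the ones in the logs above, and the caller proceeds with the default.

```go
package main

import (
	"fmt"
	"log"
)

// keyOrDefault is a hypothetical stand-in for the metadata lookup that
// produces the "key not found" log lines: if the GGUF metadata map lacks
// the key, it logs a warning and returns the supplied default instead of
// treating the absence as an error.
func keyOrDefault(kv map[string]uint32, key string, def uint32) uint32 {
	if v, ok := kv[key]; ok {
		return v
	}
	log.Printf("WARN msg=\"key not found\" key=%s default=%d", key, def)
	return def
}

func main() {
	// Metadata as dumped by llama_model_loader: some keys present, some not.
	kv := map[string]uint32{"qwen2.block_count": 48}

	fmt.Println(keyOrDefault(kv, "qwen2.block_count", 0))  // present: 48
	fmt.Println(keyOrDefault(kv, "general.alignment", 32)) // absent: warns, returns 32
}
```

So the `general.alignment` warnings in the macOS log are benign; the actual failure in this issue is the `exit status 2` from the runner process, which these warnings don't cause.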

Reference: github-starred/ollama#32598