[GH-ISSUE #10534] wsarecv forcibly closed with key not found #6931

Closed
opened 2026-04-12 18:49:25 -05:00 by GiteaMirror · 6 comments
Owner

Originally created by @arkimium on GitHub (May 2, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10534

This has now also appeared on Windows 21H2 with an AMD Radeon RX 5500, on version 0.6.7.

The error is the same as above:

Error: POST predict: Post "http://127.0.0.1:61115/completion": read tcp 127.0.0.1:61117->127.0.0.1:61115: wsarecv: An existing connection was forcibly closed by the remote host.
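For context (not part of the original report): this `wsarecv ... forcibly closed` message is just the client-side view of the runner process dying mid-request — the real failure is the `GGML_ASSERT` crash further down in the log. A minimal, hypothetical Python sketch (not Ollama code) reproduces the same network-level symptom: a server that aborts the connection instead of replying gives the client a "connection forcibly closed" / reset error, which on Windows surfaces as WSAECONNRESET via `wsarecv`.

```python
import socket
import struct
import threading

# Minimal stand-in for the runner process: accept one connection, read the
# request, then abort the socket with an RST instead of replying -- the
# network-level equivalent of the runner crashing mid-response.
def crashing_server(listener):
    conn, _ = listener.accept()
    conn.recv(1024)
    # SO_LINGER with a zero timeout makes close() send RST rather than FIN.
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=crashing_server, args=(listener,), daemon=True).start()

client = socket.create_connection(listener.getsockname())
client.sendall(b"POST /completion HTTP/1.1\r\nHost: localhost\r\n\r\n")

got_reset = False
try:
    client.recv(1024)  # blocks until the RST arrives, then raises
except ConnectionResetError:
    got_reset = True  # the POSIX/Windows "connection forcibly closed" error
print(got_reset)
```

So the `POST predict` error reported here is a symptom to trace back to the runner crash, not a networking problem in itself.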

My full server log:

2025/05/02 20:10:48 routes.go:1233: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\Administrator\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-05-02T20:10:48.853+08:00 level=INFO source=images.go:458 msg="total blobs: 5"
time=2025-05-02T20:10:48.853+08:00 level=INFO source=images.go:465 msg="total unused blobs removed: 0"
time=2025-05-02T20:10:48.854+08:00 level=INFO source=routes.go:1300 msg="Listening on 127.0.0.1:11434 (version 0.6.7)"
time=2025-05-02T20:10:48.854+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-05-02T20:10:48.854+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-05-02T20:10:48.854+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=4 efficiency=0 threads=8
time=2025-05-02T20:10:48.866+08:00 level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
time=2025-05-02T20:10:48.866+08:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="16.0 GiB" available="11.2 GiB"
[GIN] 2025/05/02 - 20:10:49 | 200 |            0s |       127.0.0.1 | HEAD     "/"
time=2025-05-02T20:10:49.244+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:10:49.292+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/02 - 20:10:49 | 200 |    101.7662ms |       127.0.0.1 | POST     "/api/show"
time=2025-05-02T20:10:49.356+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:10:49.402+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:10:49.449+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:10:49.453+08:00 level=INFO source=server.go:105 msg="system memory" total="16.0 GiB" free="11.2 GiB" free_swap="9.4 GiB"
time=2025-05-02T20:10:49.454+08:00 level=INFO source=server.go:138 msg=offload library=cpu layers.requested=-1 layers.model=35 layers.offload=0 layers.split="" memory.available="[11.2 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.1 GiB" memory.required.partial="0 B" memory.required.kv="450.0 MiB" memory.required.allocations="[5.1 GiB]" memory.weights.total="2.3 GiB" memory.weights.repeating="1.8 GiB" memory.weights.nonrepeating="525.0 MiB" memory.graph.full="517.0 MiB" memory.graph.partial="1.0 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
time=2025-05-02T20:10:49.534+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:10:49.539+08:00 level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-05-02T20:10:49.544+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-05-02T20:10:49.544+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-05-02T20:10:49.544+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-05-02T20:10:49.544+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-05-02T20:10:49.544+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-05-02T20:10:49.561+08:00 level=INFO source=server.go:409 msg="starting llama server" cmd="C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\Administrator\\.ollama\\models\\blobs\\sha256-aeda25e63ebd698fab8638ffb778e68bed908b960d39d0becc650fa981609d25 --ctx-size 8192 --batch-size 512 --threads 4 --no-mmap --parallel 2 --port 62929"
time=2025-05-02T20:10:49.565+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-05-02T20:10:49.565+08:00 level=INFO source=server.go:585 msg="waiting for llama runner to start responding"
time=2025-05-02T20:10:49.566+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error"
time=2025-05-02T20:10:49.593+08:00 level=INFO source=runner.go:861 msg="starting ollama engine"
time=2025-05-02T20:10:49.609+08:00 level=INFO source=runner.go:924 msg="Server listening on 127.0.0.1:62929"
time=2025-05-02T20:10:49.689+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:10:49.693+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.name default=""
time=2025-05-02T20:10:49.693+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.description default=""
time=2025-05-02T20:10:49.693+08:00 level=INFO source=ggml.go:72 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=883 num_key_values=36
load_backend: loaded CPU backend from C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
time=2025-05-02T20:10:49.711+08:00 level=INFO source=ggml.go:103 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang)
time=2025-05-02T20:10:49.716+08:00 level=INFO source=ggml.go:298 msg="model weights" buffer=CPU size="3.6 GiB"
time=2025-05-02T20:10:49.825+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server loading model"
time=2025-05-02T20:10:50.725+08:00 level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-05-02T20:10:50.731+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-05-02T20:10:50.731+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-05-02T20:10:50.731+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-05-02T20:10:50.731+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-05-02T20:10:50.731+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-05-02T20:10:50.832+08:00 level=INFO source=server.go:624 msg="llama runner started in 1.27 seconds"
[GIN] 2025/05/02 - 20:10:50 | 200 |    1.5180319s |       127.0.0.1 | POST     "/api/generate"
time=2025-05-02T20:10:51.549+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
D:\a\desktop-inference-engine-llama.cpp\desktop-inference-engine-llama.cpp\native\vendor\llama.cpp\ggml\src\ggml.c:1729: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed
[GIN] 2025/05/02 - 20:10:51 | 200 |    259.0518ms |       127.0.0.1 | POST     "/api/chat"
time=2025-05-02T20:10:52.043+08:00 level=ERROR source=server.go:454 msg="llama runner terminated" error="exit status 0xc0000409"
[GIN] 2025/05/02 - 20:11:05 | 200 |            0s |       127.0.0.1 | HEAD     "/"
time=2025-05-02T20:11:05.056+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:11:05.101+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/02 - 20:11:05 | 200 |     94.8025ms |       127.0.0.1 | POST     "/api/show"
time=2025-05-02T20:11:05.166+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:11:05.213+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:11:05.259+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:11:05.264+08:00 level=INFO source=server.go:105 msg="system memory" total="16.0 GiB" free="11.1 GiB" free_swap="9.3 GiB"
time=2025-05-02T20:11:05.265+08:00 level=INFO source=server.go:138 msg=offload library=cpu layers.requested=-1 layers.model=35 layers.offload=0 layers.split="" memory.available="[11.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.1 GiB" memory.required.partial="0 B" memory.required.kv="450.0 MiB" memory.required.allocations="[5.1 GiB]" memory.weights.total="2.3 GiB" memory.weights.repeating="1.8 GiB" memory.weights.nonrepeating="525.0 MiB" memory.graph.full="517.0 MiB" memory.graph.partial="1.0 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
time=2025-05-02T20:11:05.335+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:11:05.339+08:00 level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-05-02T20:11:05.343+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-05-02T20:11:05.343+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-05-02T20:11:05.343+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-05-02T20:11:05.343+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-05-02T20:11:05.343+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-05-02T20:11:05.355+08:00 level=INFO source=server.go:409 msg="starting llama server" cmd="C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\Administrator\\.ollama\\models\\blobs\\sha256-aeda25e63ebd698fab8638ffb778e68bed908b960d39d0becc650fa981609d25 --ctx-size 8192 --batch-size 512 --threads 4 --no-mmap --parallel 2 --port 62964"
time=2025-05-02T20:11:05.359+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-05-02T20:11:05.359+08:00 level=INFO source=server.go:585 msg="waiting for llama runner to start responding"
time=2025-05-02T20:11:05.360+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error"
time=2025-05-02T20:11:05.389+08:00 level=INFO source=runner.go:861 msg="starting ollama engine"
time=2025-05-02T20:11:05.405+08:00 level=INFO source=runner.go:924 msg="Server listening on 127.0.0.1:62964"
time=2025-05-02T20:11:05.483+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:11:05.487+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.name default=""
time=2025-05-02T20:11:05.487+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.description default=""
time=2025-05-02T20:11:05.487+08:00 level=INFO source=ggml.go:72 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=883 num_key_values=36
load_backend: loaded CPU backend from C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
time=2025-05-02T20:11:05.505+08:00 level=INFO source=ggml.go:103 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang)
time=2025-05-02T20:11:05.510+08:00 level=INFO source=ggml.go:298 msg="model weights" buffer=CPU size="3.6 GiB"
time=2025-05-02T20:11:05.616+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server loading model"
time=2025-05-02T20:11:06.504+08:00 level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-05-02T20:11:06.509+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-05-02T20:11:06.510+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-05-02T20:11:06.510+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-05-02T20:11:06.510+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-05-02T20:11:06.510+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-05-02T20:11:06.632+08:00 level=INFO source=server.go:624 msg="llama runner started in 1.27 seconds"
[GIN] 2025/05/02 - 20:11:06 | 200 |    1.5093505s |       127.0.0.1 | POST     "/api/generate"
time=2025-05-02T20:11:07.814+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
D:\a\desktop-inference-engine-llama.cpp\desktop-inference-engine-llama.cpp\native\vendor\llama.cpp\ggml\src\ggml.c:1729: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed
[GIN] 2025/05/02 - 20:11:08 | 200 |    269.4994ms |       127.0.0.1 | POST     "/api/chat"
time=2025-05-02T20:11:08.297+08:00 level=ERROR source=server.go:454 msg="llama runner terminated" error="exit status 0xc0000409"
[GIN] 2025/05/02 - 20:11:09 | 200 |            0s |       127.0.0.1 | HEAD     "/"
time=2025-05-02T20:11:09.191+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:11:09.235+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/02 - 20:11:09 | 200 |     98.6684ms |       127.0.0.1 | POST     "/api/show"
time=2025-05-02T20:11:09.299+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:11:09.343+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:11:09.390+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:11:09.394+08:00 level=INFO source=server.go:105 msg="system memory" total="16.0 GiB" free="11.1 GiB" free_swap="9.3 GiB"
time=2025-05-02T20:11:09.396+08:00 level=INFO source=server.go:138 msg=offload library=cpu layers.requested=-1 layers.model=35 layers.offload=0 layers.split="" memory.available="[11.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.1 GiB" memory.required.partial="0 B" memory.required.kv="450.0 MiB" memory.required.allocations="[5.1 GiB]" memory.weights.total="2.3 GiB" memory.weights.repeating="1.8 GiB" memory.weights.nonrepeating="525.0 MiB" memory.graph.full="517.0 MiB" memory.graph.partial="1.0 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
time=2025-05-02T20:11:09.467+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:11:09.472+08:00 level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-05-02T20:11:09.477+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-05-02T20:11:09.477+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-05-02T20:11:09.477+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-05-02T20:11:09.477+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-05-02T20:11:09.477+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-05-02T20:11:09.478+08:00 level=INFO source=server.go:409 msg="starting llama server" cmd="C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\Administrator\\.ollama\\models\\blobs\\sha256-aeda25e63ebd698fab8638ffb778e68bed908b960d39d0becc650fa981609d25 --ctx-size 8192 --batch-size 512 --threads 4 --no-mmap --parallel 2 --port 62977"
time=2025-05-02T20:11:09.482+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-05-02T20:11:09.482+08:00 level=INFO source=server.go:585 msg="waiting for llama runner to start responding"
time=2025-05-02T20:11:09.483+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error"
time=2025-05-02T20:11:09.512+08:00 level=INFO source=runner.go:861 msg="starting ollama engine"
time=2025-05-02T20:11:09.529+08:00 level=INFO source=runner.go:924 msg="Server listening on 127.0.0.1:62977"
time=2025-05-02T20:11:09.604+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:11:09.608+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.name default=""
time=2025-05-02T20:11:09.608+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.description default=""
time=2025-05-02T20:11:09.608+08:00 level=INFO source=ggml.go:72 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=883 num_key_values=36
load_backend: loaded CPU backend from C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
time=2025-05-02T20:11:09.625+08:00 level=INFO source=ggml.go:103 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang)
time=2025-05-02T20:11:09.631+08:00 level=INFO source=ggml.go:298 msg="model weights" buffer=CPU size="3.6 GiB"
time=2025-05-02T20:11:09.736+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server loading model"
time=2025-05-02T20:11:10.652+08:00 level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-05-02T20:11:10.657+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-05-02T20:11:10.657+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-05-02T20:11:10.657+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-05-02T20:11:10.657+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-05-02T20:11:10.657+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256
[GIN] 2025/05/02 - 20:11:10 | 200 |    1.4949226s |       127.0.0.1 | POST     "/api/generate"
time=2025-05-02T20:11:10.751+08:00 level=INFO source=server.go:624 msg="llama runner started in 1.27 seconds"
time=2025-05-02T20:11:14.476+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
D:\a\desktop-inference-engine-llama.cpp\desktop-inference-engine-llama.cpp\native\vendor\llama.cpp\ggml\src\ggml.c:1729: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed
[GIN] 2025/05/02 - 20:11:14 | 200 |    222.3765ms |       127.0.0.1 | POST     "/api/chat"
time=2025-05-02T20:11:14.904+08:00 level=ERROR source=server.go:454 msg="llama runner terminated" error="exit status 0xc0000409"
[GIN] 2025/05/02 - 20:13:09 | 200 |            0s |       127.0.0.1 | GET      "/"
[GIN] 2025/05/02 - 20:13:56 | 200 |            0s |       127.0.0.1 | HEAD     "/"
time=2025-05-02T20:13:56.689+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:13:56.734+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/02 - 20:13:56 | 200 |     96.0357ms |       127.0.0.1 | POST     "/api/show"
time=2025-05-02T20:13:56.797+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:13:56.841+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:13:56.887+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:13:56.891+08:00 level=INFO source=server.go:105 msg="system memory" total="16.0 GiB" free="10.8 GiB" free_swap="9.0 GiB"
time=2025-05-02T20:13:56.892+08:00 level=INFO source=server.go:138 msg=offload library=cpu layers.requested=-1 layers.model=35 layers.offload=0 layers.split="" memory.available="[10.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.1 GiB" memory.required.partial="0 B" memory.required.kv="450.0 MiB" memory.required.allocations="[5.1 GiB]" memory.weights.total="2.3 GiB" memory.weights.repeating="1.8 GiB" memory.weights.nonrepeating="525.0 MiB" memory.graph.full="517.0 MiB" memory.graph.partial="1.0 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
time=2025-05-02T20:13:56.965+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:13:56.970+08:00 level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-05-02T20:13:56.975+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-05-02T20:13:56.975+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-05-02T20:13:56.975+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-05-02T20:13:56.975+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-05-02T20:13:56.975+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-05-02T20:13:56.986+08:00 level=INFO source=server.go:409 msg="starting llama server" cmd="C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\Administrator\\.ollama\\models\\blobs\\sha256-aeda25e63ebd698fab8638ffb778e68bed908b960d39d0becc650fa981609d25 --ctx-size 8192 --batch-size 512 --threads 4 --no-mmap --parallel 2 --port 63140"
time=2025-05-02T20:13:56.991+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-05-02T20:13:56.991+08:00 level=INFO source=server.go:585 msg="waiting for llama runner to start responding"
time=2025-05-02T20:13:56.991+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error"
time=2025-05-02T20:13:57.020+08:00 level=INFO source=runner.go:861 msg="starting ollama engine"
time=2025-05-02T20:13:57.035+08:00 level=INFO source=runner.go:924 msg="Server listening on 127.0.0.1:63140"
time=2025-05-02T20:13:57.112+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:13:57.116+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.name default=""
time=2025-05-02T20:13:57.116+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.description default=""
time=2025-05-02T20:13:57.116+08:00 level=INFO source=ggml.go:72 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=883 num_key_values=36
load_backend: loaded CPU backend from C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
time=2025-05-02T20:13:57.133+08:00 level=INFO source=ggml.go:103 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang)
time=2025-05-02T20:13:57.139+08:00 level=INFO source=ggml.go:298 msg="model weights" buffer=CPU size="3.6 GiB"
time=2025-05-02T20:13:57.247+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server loading model"
time=2025-05-02T20:13:58.121+08:00 level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-05-02T20:13:58.127+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-05-02T20:13:58.127+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-05-02T20:13:58.127+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-05-02T20:13:58.127+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-05-02T20:13:58.127+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-05-02T20:13:58.258+08:00 level=INFO source=server.go:624 msg="llama runner started in 1.27 seconds"
[GIN] 2025/05/02 - 20:13:58 | 200 |    1.5034494s |       127.0.0.1 | POST     "/api/generate"
time=2025-05-02T20:13:58.941+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
D:\a\desktop-inference-engine-llama.cpp\desktop-inference-engine-llama.cpp\native\vendor\llama.cpp\ggml\src\ggml.c:1729: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed
[GIN] 2025/05/02 - 20:13:59 | 200 |    331.2297ms |       127.0.0.1 | POST     "/api/chat"
time=2025-05-02T20:13:59.555+08:00 level=ERROR source=server.go:454 msg="llama runner terminated" error="exit status 0xc0000409"
[GIN] 2025/05/02 - 20:17:11 | 200 |            0s |       127.0.0.1 | HEAD     "/"
time=2025-05-02T20:17:11.992+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:17:12.037+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/02 - 20:17:12 | 200 |     93.5256ms |       127.0.0.1 | POST     "/api/show"
time=2025-05-02T20:17:12.103+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:17:12.148+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:17:12.195+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:17:12.200+08:00 level=INFO source=server.go:105 msg="system memory" total="16.0 GiB" free="10.8 GiB" free_swap="9.1 GiB"
time=2025-05-02T20:17:12.202+08:00 level=INFO source=server.go:138 msg=offload library=cpu layers.requested=-1 layers.model=35 layers.offload=0 layers.split="" memory.available="[10.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.1 GiB" memory.required.partial="0 B" memory.required.kv="450.0 MiB" memory.required.allocations="[5.1 GiB]" memory.weights.total="2.3 GiB" memory.weights.repeating="1.8 GiB" memory.weights.nonrepeating="525.0 MiB" memory.graph.full="517.0 MiB" memory.graph.partial="1.0 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
time=2025-05-02T20:17:12.270+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:17:12.274+08:00 level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-05-02T20:17:12.279+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-05-02T20:17:12.279+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-05-02T20:17:12.279+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-05-02T20:17:12.279+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-05-02T20:17:12.279+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-05-02T20:17:12.290+08:00 level=INFO source=server.go:409 msg="starting llama server" cmd="C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\Administrator\\.ollama\\models\\blobs\\sha256-aeda25e63ebd698fab8638ffb778e68bed908b960d39d0becc650fa981609d25 --ctx-size 8192 --batch-size 512 --threads 4 --no-mmap --parallel 2 --port 63257"
time=2025-05-02T20:17:12.300+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-05-02T20:17:12.300+08:00 level=INFO source=server.go:585 msg="waiting for llama runner to start responding"
time=2025-05-02T20:17:12.301+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error"
time=2025-05-02T20:17:12.328+08:00 level=INFO source=runner.go:861 msg="starting ollama engine"
time=2025-05-02T20:17:12.345+08:00 level=INFO source=runner.go:924 msg="Server listening on 127.0.0.1:63257"
time=2025-05-02T20:17:12.420+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:17:12.424+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.name default=""
time=2025-05-02T20:17:12.424+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.description default=""
time=2025-05-02T20:17:12.424+08:00 level=INFO source=ggml.go:72 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=883 num_key_values=36
load_backend: loaded CPU backend from C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
time=2025-05-02T20:17:12.441+08:00 level=INFO source=ggml.go:103 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang)
time=2025-05-02T20:17:12.445+08:00 level=INFO source=ggml.go:298 msg="model weights" buffer=CPU size="3.6 GiB"
time=2025-05-02T20:17:12.553+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server loading model"
time=2025-05-02T20:17:13.414+08:00 level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-05-02T20:17:13.419+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-05-02T20:17:13.419+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-05-02T20:17:13.419+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-05-02T20:17:13.419+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-05-02T20:17:13.419+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-05-02T20:17:13.562+08:00 level=INFO source=server.go:624 msg="llama runner started in 1.26 seconds"
[GIN] 2025/05/02 - 20:17:13 | 200 |    1.5043563s |       127.0.0.1 | POST     "/api/generate"
time=2025-05-02T20:17:14.507+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
D:\a\desktop-inference-engine-llama.cpp\desktop-inference-engine-llama.cpp\native\vendor\llama.cpp\ggml\src\ggml.c:1729: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed
[GIN] 2025/05/02 - 20:17:14 | 200 |    300.4559ms |       127.0.0.1 | POST     "/api/chat"
time=2025-05-02T20:17:15.092+08:00 level=ERROR source=server.go:454 msg="llama runner terminated" error="exit status 0xc0000409"
[GIN] 2025/05/02 - 20:20:52 | 200 |            0s |       127.0.0.1 | HEAD     "/"
time=2025-05-02T20:20:52.933+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:20:52.976+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/02 - 20:20:52 | 200 |     89.5658ms |       127.0.0.1 | POST     "/api/show"
time=2025-05-02T20:20:53.038+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:20:53.083+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:20:53.130+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:20:53.133+08:00 level=INFO source=server.go:105 msg="system memory" total="16.0 GiB" free="10.6 GiB" free_swap="8.9 GiB"
time=2025-05-02T20:20:53.135+08:00 level=INFO source=server.go:138 msg=offload library=cpu layers.requested=-1 layers.model=35 layers.offload=0 layers.split="" memory.available="[10.6 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.1 GiB" memory.required.partial="0 B" memory.required.kv="450.0 MiB" memory.required.allocations="[5.1 GiB]" memory.weights.total="2.3 GiB" memory.weights.repeating="1.8 GiB" memory.weights.nonrepeating="525.0 MiB" memory.graph.full="517.0 MiB" memory.graph.partial="1.0 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
time=2025-05-02T20:20:53.204+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:20:53.208+08:00 level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-05-02T20:20:53.213+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-05-02T20:20:53.213+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-05-02T20:20:53.213+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-05-02T20:20:53.213+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-05-02T20:20:53.213+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-05-02T20:20:53.225+08:00 level=INFO source=server.go:409 msg="starting llama server" cmd="C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\Administrator\\.ollama\\models\\blobs\\sha256-aeda25e63ebd698fab8638ffb778e68bed908b960d39d0becc650fa981609d25 --ctx-size 8192 --batch-size 512 --threads 4 --no-mmap --parallel 2 --port 60447"
time=2025-05-02T20:20:53.234+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-05-02T20:20:53.234+08:00 level=INFO source=server.go:585 msg="waiting for llama runner to start responding"
time=2025-05-02T20:20:53.234+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error"
time=2025-05-02T20:20:53.261+08:00 level=INFO source=runner.go:861 msg="starting ollama engine"
time=2025-05-02T20:20:53.276+08:00 level=INFO source=runner.go:924 msg="Server listening on 127.0.0.1:60447"
time=2025-05-02T20:20:53.351+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-02T20:20:53.356+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.name default=""
time=2025-05-02T20:20:53.356+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.description default=""
time=2025-05-02T20:20:53.356+08:00 level=INFO source=ggml.go:72 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=883 num_key_values=36
load_backend: loaded CPU backend from C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
time=2025-05-02T20:20:53.373+08:00 level=INFO source=ggml.go:103 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang)
time=2025-05-02T20:20:53.377+08:00 level=INFO source=ggml.go:298 msg="model weights" buffer=CPU size="3.6 GiB"
time=2025-05-02T20:20:53.488+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server loading model"
time=2025-05-02T20:20:54.305+08:00 level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-05-02T20:20:54.310+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-05-02T20:20:54.310+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-05-02T20:20:54.310+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-05-02T20:20:54.310+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-05-02T20:20:54.310+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-05-02T20:20:54.494+08:00 level=INFO source=server.go:624 msg="llama runner started in 1.26 seconds"
[GIN] 2025/05/02 - 20:20:54 | 200 |    1.4984859s |       127.0.0.1 | POST     "/api/generate"
time=2025-05-02T20:21:07.835+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
D:\a\desktop-inference-engine-llama.cpp\desktop-inference-engine-llama.cpp\native\vendor\llama.cpp\ggml\src\ggml.c:1729: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed
[GIN] 2025/05/02 - 20:21:08 | 200 |    289.4455ms |       127.0.0.1 | POST     "/api/chat"
time=2025-05-02T20:21:08.342+08:00 level=ERROR source=server.go:454 msg="llama runner terminated" error="exit status 0xc0000409"

Pulling gemma3:4b completes without errors, but I cannot run or interact with the model. What does the "key not found" message mean?

Originally posted by @arkimium in #3769

listening on 127.0.0.1:63257" > time=2025-05-02T20:17:12.420+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 > time=2025-05-02T20:17:12.424+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.name default="" > time=2025-05-02T20:17:12.424+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.description default="" > time=2025-05-02T20:17:12.424+08:00 level=INFO source=ggml.go:72 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=883 num_key_values=36 > load_backend: loaded CPU backend from C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll > time=2025-05-02T20:17:12.441+08:00 level=INFO source=ggml.go:103 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang) > time=2025-05-02T20:17:12.445+08:00 level=INFO source=ggml.go:298 msg="model weights" buffer=CPU size="3.6 GiB" > time=2025-05-02T20:17:12.553+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server loading model" > time=2025-05-02T20:17:13.414+08:00 level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false > time=2025-05-02T20:17:13.419+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07 > time=2025-05-02T20:17:13.419+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000 > time=2025-05-02T20:17:13.419+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06 > time=2025-05-02T20:17:13.419+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1 > time=2025-05-02T20:17:13.419+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256 > 
time=2025-05-02T20:17:13.562+08:00 level=INFO source=server.go:624 msg="llama runner started in 1.26 seconds" > [GIN] 2025/05/02 - 20:17:13 | 200 | 1.5043563s | 127.0.0.1 | POST "/api/generate" > time=2025-05-02T20:17:14.507+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 > D:\a\desktop-inference-engine-llama.cpp\desktop-inference-engine-llama.cpp\native\vendor\llama.cpp\ggml\src\ggml.c:1729: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed > [GIN] 2025/05/02 - 20:17:14 | 200 | 300.4559ms | 127.0.0.1 | POST "/api/chat" > time=2025-05-02T20:17:15.092+08:00 level=ERROR source=server.go:454 msg="llama runner terminated" error="exit status 0xc0000409" > [GIN] 2025/05/02 - 20:20:52 | 200 | 0s | 127.0.0.1 | HEAD "/" > time=2025-05-02T20:20:52.933+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 > time=2025-05-02T20:20:52.976+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 > [GIN] 2025/05/02 - 20:20:52 | 200 | 89.5658ms | 127.0.0.1 | POST "/api/show" > time=2025-05-02T20:20:53.038+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 > time=2025-05-02T20:20:53.083+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 > time=2025-05-02T20:20:53.130+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 > time=2025-05-02T20:20:53.133+08:00 level=INFO source=server.go:105 msg="system memory" total="16.0 GiB" free="10.6 GiB" free_swap="8.9 GiB" > time=2025-05-02T20:20:53.135+08:00 level=INFO source=server.go:138 msg=offload library=cpu layers.requested=-1 layers.model=35 layers.offload=0 layers.split="" memory.available="[10.6 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.1 GiB" memory.required.partial="0 B" memory.required.kv="450.0 MiB" memory.required.allocations="[5.1 GiB]" memory.weights.total="2.3 GiB" memory.weights.repeating="1.8 GiB" 
memory.weights.nonrepeating="525.0 MiB" memory.graph.full="517.0 MiB" memory.graph.partial="1.0 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB" > time=2025-05-02T20:20:53.204+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 > time=2025-05-02T20:20:53.208+08:00 level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false > time=2025-05-02T20:20:53.213+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07 > time=2025-05-02T20:20:53.213+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000 > time=2025-05-02T20:20:53.213+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06 > time=2025-05-02T20:20:53.213+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1 > time=2025-05-02T20:20:53.213+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256 > time=2025-05-02T20:20:53.225+08:00 level=INFO source=server.go:409 msg="starting llama server" cmd="C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\Administrator\\.ollama\\models\\blobs\\sha256-aeda25e63ebd698fab8638ffb778e68bed908b960d39d0becc650fa981609d25 --ctx-size 8192 --batch-size 512 --threads 4 --no-mmap --parallel 2 --port 60447" > time=2025-05-02T20:20:53.234+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1 > time=2025-05-02T20:20:53.234+08:00 level=INFO source=server.go:585 msg="waiting for llama runner to start responding" > time=2025-05-02T20:20:53.234+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error" > time=2025-05-02T20:20:53.261+08:00 level=INFO source=runner.go:861 msg="starting ollama engine" > time=2025-05-02T20:20:53.276+08:00 
level=INFO source=runner.go:924 msg="Server listening on 127.0.0.1:60447" > time=2025-05-02T20:20:53.351+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 > time=2025-05-02T20:20:53.356+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.name default="" > time=2025-05-02T20:20:53.356+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.description default="" > time=2025-05-02T20:20:53.356+08:00 level=INFO source=ggml.go:72 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=883 num_key_values=36 > load_backend: loaded CPU backend from C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll > time=2025-05-02T20:20:53.373+08:00 level=INFO source=ggml.go:103 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang) > time=2025-05-02T20:20:53.377+08:00 level=INFO source=ggml.go:298 msg="model weights" buffer=CPU size="3.6 GiB" > time=2025-05-02T20:20:53.488+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server loading model" > time=2025-05-02T20:20:54.305+08:00 level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false > time=2025-05-02T20:20:54.310+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07 > time=2025-05-02T20:20:54.310+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000 > time=2025-05-02T20:20:54.310+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06 > time=2025-05-02T20:20:54.310+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1 > time=2025-05-02T20:20:54.310+08:00 level=WARN source=ggml.go:152 msg="key not found" 
key=gemma3.mm_tokens_per_image default=256 > time=2025-05-02T20:20:54.494+08:00 level=INFO source=server.go:624 msg="llama runner started in 1.26 seconds" > [GIN] 2025/05/02 - 20:20:54 | 200 | 1.4984859s | 127.0.0.1 | POST "/api/generate" > time=2025-05-02T20:21:07.835+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 > D:\a\desktop-inference-engine-llama.cpp\desktop-inference-engine-llama.cpp\native\vendor\llama.cpp\ggml\src\ggml.c:1729: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed > [GIN] 2025/05/02 - 20:21:08 | 200 | 289.4455ms | 127.0.0.1 | POST "/api/chat" > time=2025-05-02T20:21:08.342+08:00 level=ERROR source=server.go:454 msg="llama runner terminated" error="exit status 0xc0000409" > ``` > > it's all done while pulling `gemma3:4b`, but just cannot run and interact with it. So what's that means with the msg `key not found`? _Originally posted by @arkimium in [#3769](https://github.com/ollama/ollama/issues/3769#issuecomment-2847091358)_
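For readers hitting the `wsarecv` error in the title: it is a symptom rather than the cause. The runner subprocess dies on a GGML assertion (`exit status 0xc0000409`), so the TCP connection the server holds to it is reset, which WinSock reports as `wsarecv: An existing connection was forcibly closed by the remote host`. A minimal sketch of distinguishing that case on the client side (the function names here are illustrative, not Ollama's actual code):

```python
# Sketch: classify a request to the runner that dies mid-request.
# On Windows a crashed runner surfaces as WSAECONNRESET (10054), which
# Python raises as ConnectionResetError.

def call_runner(send):
    """Run `send` (a callable performing the HTTP request) and classify failures."""
    try:
        return ("ok", send())
    except ConnectionResetError:
        # The runner process exited (e.g. on a GGML_ASSERT); the socket was reset.
        return ("runner_crashed", None)

# Hypothetical stand-in for a request against a runner that just crashed:
def crashed_runner():
    raise ConnectionResetError(10054, "An existing connection was forcibly closed")

print(call_runner(crashed_runner)[0])  # runner_crashed
```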

@arkimium commented on GitHub (May 2, 2025):

https://github.com/user-attachments/assets/099fef17-a7ea-4470-adb2-a86a4b40f909

I just reproduced the issue and recorded the terminal; I hope it can serve as a reference.


@raymondtri commented on GitHub (May 2, 2025):

I can confirm I'm seeing this on my end too. It seems to work once or twice and then fails with `key not found`. I have updated to the latest Docker image and I am using 0.6.7.

```
Ollama error: time=2025-05-02T08:39:48.417-07:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
Ollama error message: key not found
Ollama error: C:/a/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4305: GGML_ASSERT(i01 >= 0 && i01 < ne01) failed
Ollama output: [GIN] 2025/05/02 - 08:39:48 | 500 |    1.3229413s |       127.0.0.1 | POST     "/v1/embeddings"
Ollama error: time=2025-05-02T08:39:48.593-07:00 level=ERROR source=server.go:454 msg="llama runner terminated" error="exit status 0xc0000409"
```

@arkimium commented on GitHub (May 2, 2025):

> I can confirm I'm seeing this on my end too. It seems to work once or twice and then fails to key not found. I have updated to the latest docker and I am using 0.6.7.
>
> ```
> Ollama error: time=2025-05-02T08:39:48.417-07:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
> Ollama error message: key not found
> Ollama error: C:/a/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4305: GGML_ASSERT(i01 >= 0 && i01 < ne01) failed
> Ollama output: [GIN] 2025/05/02 - 08:39:48 | 500 | 1.3229413s | 127.0.0.1 | POST "/v1/embeddings"
> Ollama error: time=2025-05-02T08:39:48.593-07:00 level=ERROR source=server.go:454 msg="llama runner terminated" error="exit status 0xc0000409"
> ```

I was not using Docker; I was running Ollama locally in my terminal. I don't even know what caused this.
Just 10 hours ago a similar issue was reported at #10503; that one appeared on Linux CUDA and Apple MPS.
So now Linux, Windows, and macOS all have this issue lmfao. :D


@rick-github commented on GitHub (May 2, 2025):

@arkimium

```
D:\a\desktop-inference-engine-llama.cpp\desktop-inference-engine-llama.cpp\native\vendor\llama.cpp\ggml\src\ggml.c:1729: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed
```

#9509

@raymondtri

```
Ollama error: C:/a/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4305: GGML_ASSERT(i01 >= 0 && i01 < ne01) failed
```

https://github.com/ollama/ollama/issues/7288#issuecomment-2591709109

The `key not found` warning is not relevant.
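For context on why those warnings are benign: they just mean an optional GGUF metadata key is absent, so the loader logs the miss and falls back to a documented default. A rough sketch of that fallback pattern (illustrative only, not the actual `ggml.go` implementation):

```python
import logging

def kv_or_default(metadata, key, default):
    """Return metadata[key] if present; otherwise log 'key not found' and use the default."""
    if key in metadata:
        return metadata[key]
    logging.warning('key not found key=%s default=%s', key, default)
    return default

# general.alignment is absent here, so the loader warns and proceeds with 32:
meta = {"general.architecture": "gemma3"}
alignment = kv_or_default(meta, "general.alignment", 32)
```

Missing optional keys never abort the load; the model still starts, which is why the crash has to come from somewhere else.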


@raymondtri commented on GitHub (May 2, 2025):

Yep, I did a little more digging and you're correct: there was a context overflow issue with my embedding model that had regressed on my end.
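An assert like `GGML_ASSERT(i01 >= 0 && i01 < ne01)` is consistent with an index running past a tensor dimension, for example when more tokens are submitted than the embedding model's context length. A minimal client-side guard under that assumption (all names here are hypothetical, not part of Ollama's API):

```python
# Sketch: keep embedding inputs within the model's context window,
# assuming the overflow came from over-long token sequences.

def clamp_tokens(tokens, n_ctx):
    """Truncate a token sequence to the model's context length."""
    return tokens[:n_ctx]

def chunk_tokens(tokens, n_ctx):
    """Alternatively, split an over-long input into context-sized chunks
    and embed each chunk separately."""
    return [tokens[i:i + n_ctx] for i in range(0, len(tokens), n_ctx)]
```

Chunking is usually preferable to truncation for embeddings, since truncation silently drops the tail of the document.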


@arkimium commented on GitHub (May 3, 2025):

Resolved after installing the latest Docker Desktop (4.41.1); see https://github.com/ollama/ollama/issues/9509#issuecomment-2846734150

Reference: github-starred/ollama#6931