[GH-ISSUE #9676] qwq:32b-fp16 model fails with EOF error during inference #6313

Closed
opened 2026-04-12 17:47:31 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @mrhein on GitHub (Mar 12, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9676

What is the issue?

When attempting to run the qwq:32b-fp16 model using Ollama, the process terminates with an EOF error after sending a simple prompt.

Steps to Reproduce
1. Install Ollama (version: 0.6.0)
2. Pull the model: `ollama pull qwq:32b-fp16`
3. Run the model: `ollama run qwq:32b-fp16`
4. Enter a simple prompt such as "hello"
Observed Behavior
After entering the prompt, the following error is displayed:

Error: POST predict: Post "http://127.0.0.1:52725/completion": EOF
The model fails to generate any response and the session terminates.

Expected Behavior
The model should process the prompt and generate a coherent response without crashing.

System Information
OS: macOS (Apple Silicon)
CPU: Apple M4 Max
RAM: 128GB
GPU: 40-core integrated GPU
Ollama version: 0.6.0
Additional Information
- The model was pulled successfully before attempting to run it
- Other models work correctly on the same system
- This occurs on high-end Apple Silicon hardware which should be capable of running the model
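A back-of-envelope memory check supports the "should be capable" claim. This sketch assumes qwq:32b has roughly 32.8 billion parameters (the exact count is not stated in the issue) and that macOS caps Metal's recommended working set below total RAM:

```python
# Rough FP16 memory estimate for a ~32B-parameter model (parameter count
# is an assumption; adjust for the actual qwq:32b tensor count).
params = 32.8e9
bytes_per_param = 2  # FP16 = 2 bytes per weight
weights_gib = params * bytes_per_param / 2**30
print(f"FP16 weights: ~{weights_gib:.1f} GiB")
# On a 128 GB machine the Metal working-set cap is well above this,
# so the weights alone fit; KV cache and compute buffers add a few
# more GiB on top.
```

On the reporter's 128 GB M4 Max this comes out to roughly 61 GiB of weights, comfortably under the machine's unified memory, which is why a clean crash rather than an out-of-memory refusal points at a runner bug.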

Relevant log output

```shell
Error: POST predict: Post "http://127.0.0.1:52725/completion": EOF
```

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.6.0

GiteaMirror added the bug, needs more info labels 2026-04-12 17:47:31 -05:00
Author
Owner

@rick-github commented on GitHub (Mar 12, 2025):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may aid in debugging.
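For readers hitting the same error: a minimal sketch of collecting those logs on a macOS app install, assuming the default log location described in the troubleshooting guide (`~/.ollama/logs/server.log`):

```shell
#!/bin/sh
# Default server log path for the macOS Ollama app (assumption: standard
# install; a manually run `ollama serve` logs to stderr instead).
LOG="$HOME/.ollama/logs/server.log"
if [ -f "$LOG" ]; then
  # Show the most recent runner startup and any crash output.
  tail -n 200 "$LOG"
else
  echo "No log at $LOG; if running 'ollama serve' manually, check stderr."
fi
```

Reproduce the EOF error first, then capture the tail of the log so the runner's final lines are included.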

Author
Owner

@dn833 commented on GitHub (Mar 12, 2025):

Same error, but running `ollama run gemma3:27b` succeeded.

[GIN] 2025/03/12 - 21:45:52 | 200 |     120.875µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/12 - 21:45:52 | 200 |      95.098ms |       127.0.0.1 | POST     "/api/show"
time=2025-03-12T21:45:52.808+08:00 level=INFO source=server.go:105 msg="system memory" total="64.0 GiB" free="60.6 GiB" free_swap="0 B"
time=2025-03-12T21:45:52.809+08:00 level=INFO source=server.go:138 msg=offload library=metal layers.requested=-1 layers.model=63 layers.offload=58 layers.split="" memory.available="[48.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="53.1 GiB" memory.required.partial="47.3 GiB" memory.required.kv="992.0 MiB" memory.required.allocations="[47.3 GiB]" memory.weights.total="48.7 GiB" memory.weights.repeating="46.0 GiB" memory.weights.nonrepeating="2.6 GiB" memory.graph.full="522.5 MiB" memory.graph.partial="522.5 MiB"
time=2025-03-12T21:45:52.878+08:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-03-12T21:45:52.882+08:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-03-12T21:45:52.883+08:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-03-12T21:45:52.885+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-03-12T21:45:52.885+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-03-12T21:45:52.885+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-03-12T21:45:52.885+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-03-12T21:45:52.885+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.final_logit_softcapping default=30
time=2025-03-12T21:45:52.885+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-03-12T21:45:52.885+08:00 level=INFO source=server.go:405 msg="starting llama server" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --ollama-engine --model /Users/xx/.ollama/models/blobs/sha256-8bf5daddfa5b7ee1f84fd3d759261439151106d8908ea064c4e4445afc8c8683 --ctx-size 2048 --batch-size 512 --n-gpu-layers 58 --threads 8 --no-mmap --parallel 1 --port 62003"
time=2025-03-12T21:45:52.887+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-12T21:45:52.887+08:00 level=INFO source=server.go:585 msg="waiting for llama runner to start responding"
time=2025-03-12T21:45:52.887+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error"
time=2025-03-12T21:45:52.893+08:00 level=INFO source=runner.go:882 msg="starting ollama engine"
time=2025-03-12T21:45:52.893+08:00 level=INFO source=runner.go:938 msg="Server listening on 127.0.0.1:62003"
time=2025-03-12T21:45:52.935+08:00 level=WARN source=ggml.go:149 msg="key not found" key=general.name default=""
time=2025-03-12T21:45:52.935+08:00 level=WARN source=ggml.go:149 msg="key not found" key=general.description default=""
time=2025-03-12T21:45:52.935+08:00 level=INFO source=ggml.go:67 msg="" architecture=gemma3 file_type=F16 name="" description="" num_tensors=1247 num_key_values=36
ggml_backend_load_best: failed to load /Applications/Ollama.app/Contents/Resources/libggml-cpu-icelake.so
ggml_backend_load_best: failed to load /Applications/Ollama.app/Contents/Resources/libggml-cpu-haswell.so
ggml_backend_load_best: failed to load /Applications/Ollama.app/Contents/Resources/libggml-cpu-alderlake.so
ggml_backend_load_best: failed to load /Applications/Ollama.app/Contents/Resources/libggml-cpu-sandybridge.so
ggml_backend_load_best: failed to load /Applications/Ollama.app/Contents/Resources/libggml-cpu-skylakex.so
time=2025-03-12T21:45:52.937+08:00 level=INFO source=ggml.go:109 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang)
time=2025-03-12T21:45:53.019+08:00 level=INFO source=ggml.go:289 msg="model weights" buffer=CPU size="9.1 GiB"
time=2025-03-12T21:45:53.019+08:00 level=INFO source=ggml.go:289 msg="model weights" buffer=Metal size="44.6 GiB"
time=2025-03-12T21:45:53.161+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server loading model"
^@ggml_metal_init: allocating
ggml_metal_init: found device: Apple M4 Pro
ggml_metal_init: picking default device: Apple M4 Pro
ggml_metal_init: using embedded metal library
ggml_metal_init: GPU name:   Apple M4 Pro
ggml_metal_init: GPU family: MTLGPUFamilyApple9  (1009)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction   = true
ggml_metal_init: simdgroup matrix mul. = true
ggml_metal_init: has residency sets    = false
ggml_metal_init: has bfloat            = true
ggml_metal_init: use bfloat            = false
ggml_metal_init: hasUnifiedMemory      = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 51539.61 MB
ggml_metal_init: skipping kernel_get_rows_bf16                     (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32                   (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_1row              (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_l4                (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_bf16                  (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_bf16_f32                (not supported)
ggml_metal_init: skipping kernel_mul_mm_bf16_f32                   (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_bf16_f32                (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h64           (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h80           (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h96           (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h112          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h128          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h256          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h128      (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h256      (not supported)
ggml_metal_init: skipping kernel_cpy_f32_bf16                      (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_f32                      (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_bf16                     (not supported)
time=2025-03-12T21:47:40.816+08:00 level=INFO source=ggml.go:356 msg="compute graph" backend=Metal buffer_type=Metal
time=2025-03-12T21:47:40.817+08:00 level=INFO source=ggml.go:356 msg="compute graph" backend=CPU buffer_type=CPU
time=2025-03-12T21:47:40.818+08:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-03-12T21:47:40.890+08:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-03-12T21:47:40.892+08:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-03-12T21:47:40.894+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-03-12T21:47:40.895+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-03-12T21:47:40.895+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-03-12T21:47:40.895+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-03-12T21:47:40.895+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.final_logit_softcapping default=30
time=2025-03-12T21:47:40.895+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-03-12T21:47:40.942+08:00 level=INFO source=server.go:624 msg="llama runner started in 108.05 seconds"
[GIN] 2025/03/12 - 21:47:40 | 200 |         1m48s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/03/12 - 21:48:16 | 200 |  6.066066708s |       127.0.0.1 | POST     "/api/chat"
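The offload line in this log can be read as follows: the model needs 53.1 GiB to run fully on the GPU, but only 48 GiB of Metal working set is available on this 64 GB machine, so Ollama places 58 of 63 layers on the GPU and keeps the rest on CPU. A quick sketch of that arithmetic, using the numbers from the log above:

```python
# Values copied from the "msg=offload" log line above.
available_gib = 48.0      # memory.available (Metal working set)
required_full_gib = 53.1  # memory.required.full
layers_model = 63         # layers.model
layers_offload = 58       # layers.offload

# Full GPU offload does not fit, hence the partial split.
assert required_full_gib > available_gib
print(f"GPU layer fraction: {layers_offload / layers_model:.0%}")
```

That partial offload explains the long 108-second load time, but both requests still return 200, which is why the log shows no crash.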
Author
Owner

@rick-github commented on GitHub (Mar 12, 2025):

This log doesn't show a runner crash.

Author
Owner

@afsara-ben commented on GitHub (Mar 17, 2025):

Any solution to this?

Author
Owner

@rick-github commented on GitHub (Mar 17, 2025):

No logs, no solution.


Reference: github-starred/ollama#6313