[GH-ISSUE #10262] key not found Warnings in Logs When Running Gemma3 12B on AMD EPYC #32496

Closed
opened 2026-04-22 13:49:07 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @bhargav-11 on GitHub (Apr 14, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10262

What is the issue?

When running the Gemma3 12B model on a machine with an AMD EPYC 9000 series CPU, the Ollama server logs repeatedly show "key not found" warnings. These keys appear to relate to tokenizer and model-specific configuration options. Despite these warnings, the model seems to load and run without crashing, but I'm unsure whether this indicates missing functionality, fallback behavior, or misconfiguration.

Expected behavior
I expected the model to initialize cleanly without warnings, assuming it is fully compatible with Ollama and the current runtime setup.

Relevant log output

```shell
time=2025-04-14T09:58:54.221Z level=INFO source=ggml.go:289 msg="model weights" buffer=CPU size="8.3 GiB"
time=2025-04-14T09:58:54.364Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"
time=2025-04-14T09:58:54.512Z level=INFO source=ggml.go:388 msg="compute graph" backend=CPU buffer_type=CPU
time=2025-04-14T09:58:54.513Z level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-04-14T09:58:54.519Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-04-14T09:58:54.519Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-04-14T09:58:54.519Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-04-14T09:58:54.519Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-04-14T09:58:54.519Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-04-14T09:58:54.615Z level=INFO source=server.go:619 msg="llama runner started in 0.50 seconds"
```

OS

Linux

GPU

No response

CPU

AMD

Ollama version

0.6.5

GiteaMirror added the bug label 2026-04-22 13:49:07 -05:00

@rick-github commented on GitHub (Apr 14, 2025):

The model is missing a few KV entries, so Ollama falls back to built-in defaults. There's no issue here.
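For illustration, the pattern behind these warnings can be sketched as a metadata lookup that logs and falls back to a default when a key is absent. This is a minimal, hypothetical sketch (the `kv` type and `floatWithDefault` helper are assumptions for illustration, not Ollama's actual API), but it matches the observed log lines: a WARN entry per missing key, and loading continues with the default.

```go
package main

import (
	"fmt"
	"log/slog"
)

// kv stands in for a model's GGUF metadata map; keys absent here
// trigger the "key not found" warning seen in the logs.
type kv map[string]any

// floatWithDefault is a hypothetical lookup helper: if the key is
// present it returns the stored value, otherwise it logs a WARN
// with the key and default, then returns the default.
func floatWithDefault(m kv, key string, def float64) float64 {
	if v, ok := m[key]; ok {
		return v.(float64)
	}
	slog.Warn("key not found", "key", key, "default", def)
	return def
}

func main() {
	meta := kv{"gemma3.rope.freq_scale": 1.0} // other keys absent

	// Present key: value returned as-is, no warning logged.
	fmt.Println(floatWithDefault(meta, "gemma3.rope.freq_scale", 1))

	// Absent key: WARN logged, default used; loading proceeds anyway.
	fmt.Println(floatWithDefault(meta, "gemma3.rope.local.freq_base", 10000))
}
```

In other words, the warnings are informational: each one names the default that was substituted, and the model runs normally with those values.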

Reference: github-starred/ollama#32496