[GH-ISSUE #9690] Unable to quantize Gemma3 27b #68382

Closed
opened 2026-05-04 13:43:32 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @lowlyocean on GitHub (Mar 12, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9690

`ollama create -q q2_k gemma3:27b-it-q2_K`

Fails when the Modelfile contains:

`FROM gemma3:27b-it-fp16`

With message:

```
gathering model components
quantizing F16 model to Q2_K
Error: llama_model_quantize: 1
```
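For reference, a minimal sketch of the setup described above, assuming the `gemma3:27b-it-fp16` tag has already been pulled locally (the `./Modelfile` path is hypothetical):

```
# Modelfile — base the new model on the full-precision Gemma 3 27B instruct tag
FROM gemma3:27b-it-fp16

# Quantization is requested at create time via the -q flag, not in the Modelfile:
#   ollama create -q q2_k gemma3:27b-it-q2_K -f ./Modelfile
```

With this setup, `ollama create` loads the F16 weights and attempts the Q2_K conversion itself; the `llama_model_quantize: 1` message above is the non-zero status returned by that quantization step.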
GiteaMirror added the bug label 2026-05-04 13:43:32 -05:00

Reference: github-starred/ollama#68382