[GH-ISSUE #9762] Does gemma3 offer a variety of quantization models? #32140

Closed
opened 2026-04-22 13:06:02 -05:00 by GiteaMirror · 15 comments

Originally created by @gakugaku on GitHub (Mar 14, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9762

The distribution of gemma3’s GGUF model ([link](https://ollama.com/library/gemma3/tags)) seems to include fewer quantization types compared to gemma2 ([link](https://ollama.com/library/gemma2/tags)) and other models.
Are there any plans to offer the same range of quantization types and naming conventions as provided for the other models?

| [gemma3](https://ollama.com/library/gemma3) | [gemma2](https://ollama.com/library/gemma2) |
|--------|--------|
| ![Image](https://github.com/user-attachments/assets/3524327a-9a10-4c8d-86bc-84ad0dbe2aff) | ![Image](https://github.com/user-attachments/assets/8188e6b0-efa1-4ae4-b8ec-33e7516648d5) |
GiteaMirror added the model label 2026-04-22 13:06:02 -05:00

@ALLMI78 commented on GitHub (Mar 14, 2025):

meanwhile https://huggingface.co/models?sort=trending&search=gemma-3+gguf

Select a GGUF file, then click "Use this model" → Ollama.
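On recent Ollama versions that flow boils down to an `ollama run` command pointing straight at the Hugging Face repo, for example (the repo name and quant tag here are illustrative):

```
ollama run hf.co/bartowski/google_gemma-3-4b-it-GGUF:Q4_K_M
```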


@pdevine commented on GitHub (Mar 17, 2025):

In `0.6.2` you'll be able to run `ollama create --quantize` to quantize to whichever level you want. We were trying to get the QAT-based quantized models out the door, but Google was running into some problems which were causing performance issues.


@pdevine commented on GitHub (Mar 21, 2025):

OK, `0.6.2` is out and it's very easy to do this yourself.

You can create a Modelfile that points at the non-quantized model:

```
FROM gemma3:4b-it-fp16
```

Now you can run:

```
ollama create --quantize q5_k_m -f path/to/Modelfile mymodel
gathering model components
pulling manifest
pulling 8300f2d40f8b... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████▏ 8.6 GB
pulling e0a42594d802... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████▏  358 B
pulling dd084c7d92a3... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████▏ 8.4 KB
pulling 0a74a8735bf3... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████▏   55 B
pulling 162b13f01261... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████▏  486 B
verifying sha256 digest
writing manifest
success
quantizing F16 model to Q5_K_M
creating new layer sha256:75c4ece0524a4b925ff87fea8f3f84e2b31363f87fb117b4ea82478eb55edb08
using existing layer sha256:e0a42594d802e5d31cdc786deb4823edb8adff66094d49de8fffe976d753e348
using existing layer sha256:dd084c7d92a3c1c14cc09ae77153b903fd2024b64a100a0cc8ec9316063d2dbc
using existing layer sha256:0a74a8735bf3ffff4537b6c6bc9a4bc97a28c48f2fd347e806cca4d5001560f6
writing manifest
success
```

That will automatically pull the fp16 weights if you don't already have them and convert them to Q5_K_M. You can test it out with:

```
ollama show -v mymodel | less
```

and verify that the output says:

```
    quantization        Q5_K_M
```
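If you only want that one line, a trivial shell check (assuming `grep` is available) is:

```
ollama show -v mymodel | grep -i quantization
```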

I'm going to go ahead and close the issue.


@gakugaku commented on GitHub (Mar 22, 2025):

@pdevine
Thank you for your assistance.


@thot-experiment commented on GitHub (Mar 30, 2025):

Ok so let me get this straight: if I want to run a quantized tune of gemma3 with vision, I have to download the unquantized fp16 (55 GB), then import that into ollama (so now I at least temporarily have 110 GB of gemma3, since the import process duplicates everything), and only then can I quantize things the "ollama way" (unable to do imatrix quants)?

Surely this isn't the user experience we want?


@Kwisss commented on GitHub (Apr 8, 2025):

> Ok so let me get this straight: if I want to run a quantized tune of gemma3 with vision, I have to download the unquantized fp16 (55 GB), then import that into ollama (so now I at least temporarily have 110 GB of gemma3, since the import process duplicates everything), and only then can I quantize things the "ollama way" (unable to do imatrix quants)?
>
> Surely this isn't the user experience we want?

I can confirm that I have been using bartowski quants, seemingly without problems.


@thot-experiment commented on GitHub (Apr 8, 2025):

> I can confirm that I have been using bartowski quants, seemingly without problems.

You are using the bartowski quants of gemma3 tunes and the vision component works? Which model in particular are you using? Many of them don't even have the vision tower included. Are you using a separate vision tower + the base model? IIRC that wasn't supported on the new gemma3 arch.


@pdevine commented on GitHub (Apr 10, 2025):

Hey guys, I have been testing out the QAT quants. If you want to give them a shot:

```
ollama run pdevine/gemma3:1b-qat
ollama run pdevine/gemma3:4b-qat
ollama run pdevine/gemma3:12b-qat
ollama run pdevine/gemma3:27b-qat
```

@thot-experiment commented on GitHub (Apr 10, 2025):

Thanks for sharing this. However, I'm mostly interested in understanding how to get this working on my end in a consistent and repeatable way, rather than relying on other people, so that I can explore new tunes as they come out. I haven't had time to do a deep dive on it, but for example https://huggingface.co/TheDrummer/Fallen-Gemma3-27B-v1 claims to have the vision tower intact, yet I have not been able to get it running using either their quants or by quantizing the full-fat weights locally with ollama. If you have tips or a link to a guide on how to achieve this, I would be grateful.


@pdevine commented on GitHub (Apr 10, 2025):

@thot-experiment since that repo is using safetensors, you should be able to run `ollama create` and use the `--quantize` flag to quantize it to what you want.
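A minimal sketch of that flow (assuming `git-lfs` is installed and there's enough disk for the fp16 weights; the quant level and target model name are just examples):

```
git clone https://huggingface.co/TheDrummer/Fallen-Gemma3-27B-v1
echo "FROM ./Fallen-Gemma3-27B-v1" > Modelfile
ollama create --quantize q4_K_M -f Modelfile fallen-gemma3-q4
```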


@thot-experiment commented on GitHub (Apr 10, 2025):

Unfortunately that's exactly what I did, and the model doesn't even run; I could run the quants, but with no vision. I'll try re-downloading everything and running it with a fresh upgrade to 0.6.5 tonight and see if that fixes anything.


@blakkd commented on GitHub (Apr 13, 2025):

> Hey guys, I have been testing out the QAT quants. If you want to give them a shot:

@pdevine How did you import the `google/gemma-3-27b-it-qat-q4_0-gguf` model into ollama?
As I understand it, we can now use the `--quantize` parameter to quantize safetensors from HF, but there is still no way to import a [model + its mmproj] pair when they are already quantized in GGUF format.

Could you share your method?

Thanks!

EDIT: I'm now confused, since I just saw [an old doc from OpenBMB](https://modelbest.feishu.cn/wiki/BcHIwjOLGihJXCkkSdMc2WhbnZf#ODf9dYhLaobVUnxf1UlcQSgAnwc) where they were doing exactly that by specifying two `FROM` lines in the Modelfile. But when I checked whether their ollama fork had a special feature allowing this, I noticed their repo is now up to date with upstream ollama, so I guess their changes have been merged since then?
But then, I can't make it work for gemma3-qat.

Specifying the two GGUFs (first the model, then the mmproj) in the Modelfile imports with no errors in ollama, but when I try to run the model I get this:

```
pulling manifest 
Error: pull model manifest: file does not exist
```
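For reference, the Modelfile layout in question is roughly the following (the filenames are illustrative, not the exact ones from Google's repo):

```
FROM ./gemma-3-27b-it-qat-q4_0.gguf
FROM ./mmproj-model-f16.gguf
```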

So I'm now even more confused, but also even more curious!

Thanks in advance.


@pdevine commented on GitHub (Apr 21, 2025):

@blakkd sorry for the confusion. The QAT models are a little annoying because Google released them in two formats:

* a safetensors version (safetensors doesn't natively support quantization); and
* a GGUF version (which was incompatible with Ollama)

I ended up having to restitch everything back together again in order to get this to work, so I don't (yet) have a generic way of doing this. My approach was just to rip out the quantized tensors from the GGUF file, add in the vision tower/projector, and rename the kvs and tensors to what Ollama is expecting.


@blakkd commented on GitHub (Apr 22, 2025):

> My approach was just to rip out the quantized tensors from the GGUF file, add in the vision tower/projector, and rename the kvs and tensors to what Ollama is expecting.

@pdevine Wow, that's far beyond my knowledge. But thanks for sharing! May I ask which GGUF visualization/manipulation tool(s) you used to do this? I don't want to bother you, though, so only if it's in your usual stack and easy to recall ;)

PS: I just noticed ollama.com now provides the QAT version directly, so thank you if that comes from your work!


@pdevine commented on GitHub (Apr 22, 2025):

@blakkd I just wrote my own in golang, but I think the python ggml libraries would let you do this pretty easily. And yes, this is what I used for the QAT models on ollama.com!
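Not the tool described above, but for anyone who wants to poke at a GGUF themselves: the `gguf` Python package (maintained in the llama.cpp repo) ships small CLI utilities for this kind of inspection. A minimal sketch, with a placeholder filename:

```
pip install gguf
# dump every metadata kv and every tensor (name, quant type, shape) in the file
gguf-dump gemma-3-27b-it-qat-q4_0.gguf
```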

Reference: github-starred/ollama#32140