[GH-ISSUE #5211] fp16 shows quantization unknown when running ollama show #3272

Closed
opened 2026-04-12 13:49:07 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @jmorganca on GitHub (Jun 22, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5211

Originally assigned to: @royjhan on GitHub.

What is the issue?

```
% ollama show gemma:7b-instruct-fp16
  Model
    arch                gemma
    parameters          9B
    quantization        unknown
    context length      8192
    embedding length    3072
```

OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-12 13:49:07 -05:00
@jmorganca commented on GitHub (Jun 22, 2024):

Note: this may be a specific issue with `gemma:7b-instruct-fp16`

@royjhan commented on GitHub (Jun 26, 2024):

This is expected, as per https://ollama.com/library/gemma:7b-instruct-fp16. We could omit it from the display or leave it as unknown.

Reference: github-starred/ollama#3272