[PR #10647] Follow up to #10363 #13310

Closed
opened 2026-04-13 00:23:28 -05:00 by GiteaMirror · 0 comments
Owner

Original Pull Request: https://github.com/ollama/ollama/pull/10647

State: closed
Merged: Yes


The quantization PR didn't block all unsupported file types; this PR fixes that. The getTensorNewType logic contained checks that were no longer relevant, which are now removed along with their test coverage. It also updates the API docs to reflect the reduced set of supported types.

Example when passing an unsupported file type:

```
% ollama create test -f ./test.modelfile -q q6_k
gathering model components
Error: unsupported quantization type Q6_K - supported types are F32, F16, Q4_K_S, Q4_K_M, Q8_0
```
GiteaMirror added the pull-request label 2026-04-13 00:23:28 -05:00

Reference: github-starred/ollama#13310