[GH-ISSUE #10544] Please add qwen3 235b fp16 and q8 #68997

Closed
opened 2026-05-04 16:43:29 -05:00 by GiteaMirror · 1 comment

Originally created by @Llamadouble999q on GitHub (May 3, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10544

Please add the full model

GiteaMirror added the model label 2026-05-04 16:43:30 -05:00

@olumolu commented on GitHub (May 4, 2025):

> Please add the full model

https://ollama.com/library/qwen3:235b-a22b-fp16
https://ollama.com/library/qwen3:235b-a22b-q8_0

Reference: github-starred/ollama#68997