[GH-ISSUE #2874] Support Qwen VL #63794

Closed
opened 2026-05-03 14:58:39 -05:00 by GiteaMirror · 58 comments
Owner

Originally created by @justStarG on GitHub (Mar 2, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2874

Could you please support the Qwen VL model?

GiteaMirror added the model label 2026-05-03 14:58:39 -05:00

@Boomboomdunce commented on GitHub (Mar 2, 2024):

Hoping to see support for Qwen VL added.

<!-- gh-comment-id:1974795550 -->

@thiner commented on GitHub (Mar 11, 2024):

This is really a must-have model for Chinese users, as it's the SOTA vision model for recognizing Chinese characters. But I think that won't be easy, because there is no GGUF-format model on Hugging Face yet.

<!-- gh-comment-id:1987568055 -->

@justStarG commented on GitHub (Mar 11, 2024):

There is an Int4 model: https://huggingface.co/Qwen/Qwen-VL-Chat-Int4

<!-- gh-comment-id:1988162470 -->

@thiner commented on GitHub (Mar 12, 2024):

@thesby Ollama currently runs GGUF models only. PyTorch or safetensors models need to be converted to GGUF first: https://github.com/ollama/ollama/blob/main/docs/import.md. But converting Qwen-VL to GGUF format is not yet supported by llama.cpp.
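
The flow described above looks roughly like this; a sketch only, since the converter script name varies across llama.cpp versions, the model paths are illustrative, and, as noted, the Qwen-VL architecture was not yet supported by the converter at the time:

```shell
# Convert a Hugging Face checkpoint to GGUF with llama.cpp's converter
# (only works for architectures the converter knows about).
python llama.cpp/convert-hf-to-gguf.py ./some-model --outfile model.gguf

# Point a Modelfile at the result and import it into Ollama.
echo 'FROM ./model.gguf' > Modelfile
ollama create my-model -f Modelfile
ollama run my-model
```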

<!-- gh-comment-id:1990190611 -->

@monkeycc commented on GitHub (Apr 8, 2024):

Hoping to see support for Qwen VL added. +1

<!-- gh-comment-id:2042092769 -->

@ghost commented on GitHub (Apr 8, 2024):

hope + 1

<!-- gh-comment-id:2042342897 -->

@TKasperczyk commented on GitHub (Apr 11, 2024):

+1

<!-- gh-comment-id:2050248525 -->

@leo985 commented on GitHub (Apr 13, 2024):

+1

<!-- gh-comment-id:2053585898 -->

@wangqinghuan commented on GitHub (Apr 15, 2024):

+1

<!-- gh-comment-id:2056743993 -->

@J0eky commented on GitHub (Apr 26, 2024):

+1

<!-- gh-comment-id:2078536814 -->

@liyanshuai2018 commented on GitHub (Apr 28, 2024):

This multimodal model is really great; adding support for it is recommended.

<!-- gh-comment-id:2081405806 -->

@BayMikyou commented on GitHub (May 12, 2024):

+1

<!-- gh-comment-id:2106241013 -->

@meloseven commented on GitHub (May 14, 2024):

+1

<!-- gh-comment-id:2109382932 -->

@bobbyng626 commented on GitHub (Jul 2, 2024):

+1

<!-- gh-comment-id:2201967805 -->

@meloseven commented on GitHub (Jul 2, 2024):

This is an automatic vacation reply from QQ Mail. Hello, I'm currently on vacation and unable to reply to your email in person. I will reply as soon as possible after my vacation ends.

<!-- gh-comment-id:2201968424 -->

@aboutmydreams commented on GitHub (Jul 20, 2024):

llama.cpp does not yet support converting Qwen-VL to the GGUF format; we'll have to go bump the request in their repo...

<!-- gh-comment-id:2241238328 -->

@meloseven commented on GitHub (Jul 20, 2024):

This is an automatic vacation reply from QQ Mail. Hello, I'm currently on vacation and unable to reply to your email in person. I will reply as soon as possible after my vacation ends.

<!-- gh-comment-id:2241238480 -->

@yjgfn commented on GitHub (Jul 26, 2024):

+1

<!-- gh-comment-id:2251978906 -->

@janelovesprogramming commented on GitHub (Aug 7, 2024):

+1

<!-- gh-comment-id:2272774018 -->

@chrisoutwright commented on GitHub (Aug 11, 2024):

Qwen VL works much better than LLaVA 1.6; it would be good to be able to use it with Ollama.
The OCR is also much better and supports many more languages.

<!-- gh-comment-id:2282368330 -->

@u9wangcl commented on GitHub (Aug 30, 2024):

+1

<!-- gh-comment-id:2320234052 -->

@meloseven commented on GitHub (Aug 30, 2024):

This is an automatic vacation reply from QQ Mail. Hello, I'm currently on vacation and unable to reply to your email in person. I will reply as soon as possible after my vacation ends.

<!-- gh-comment-id:2320235144 -->

@wlsoft2006 commented on GitHub (Sep 1, 2024):

+1

<!-- gh-comment-id:2323258462 -->

@mintisan commented on GitHub (Sep 1, 2024):

+1

<!-- gh-comment-id:2323312629 -->

@DemonJun commented on GitHub (Sep 4, 2024):

+1

<!-- gh-comment-id:2328052725 -->

@schengyi commented on GitHub (Sep 6, 2024):

+1

<!-- gh-comment-id:2333039083 -->

@JAINKRE commented on GitHub (Sep 10, 2024):

+1

<!-- gh-comment-id:2339577681 -->

@onewesong commented on GitHub (Sep 19, 2024):

+1

<!-- gh-comment-id:2360244442 -->

@deathxlent commented on GitHub (Sep 19, 2024):

My own test results show that Qwen2-VL works better than Llava-llama3 and Llava-phi3, especially in terms of multilingual support. So I still hope Ollama can officially support this model in the models library, making it more convenient to use. Thank you.

<!-- gh-comment-id:2360462895 -->

@eons2long commented on GitHub (Sep 20, 2024):

+1

<!-- gh-comment-id:2362893837 -->

@asmit203 commented on GitHub (Sep 25, 2024):

+1

<!-- gh-comment-id:2372692688 -->

@tianyuedoudou commented on GitHub (Sep 25, 2024):

+1

<!-- gh-comment-id:2372748633 -->

@Alexie-Z-Yevich commented on GitHub (Sep 29, 2024):

+1

<!-- gh-comment-id:2381322324 -->

@AlessandroSpallina commented on GitHub (Oct 5, 2024):

+1

<!-- gh-comment-id:2395007381 -->

@pandachen7 commented on GitHub (Oct 11, 2024):

+1

<!-- gh-comment-id:2406473447 -->

@janelovesprogramming commented on GitHub (Oct 11, 2024):

+1

<!-- gh-comment-id:2407414168 -->

@E218PQ commented on GitHub (Oct 22, 2024):

I also have the same experience and the same needs, and I hope there can be a better solution. I hope Ollama can add support for RAG models and TTS models.

<!-- gh-comment-id:2427986773 -->

@kandakji commented on GitHub (Oct 23, 2024):

+1

<!-- gh-comment-id:2432585642 -->

@Selenium39 commented on GitHub (Oct 25, 2024):

Hope this gets added.

<!-- gh-comment-id:2437047836 -->

@CiaranYoung commented on GitHub (Nov 19, 2024):

So, does it support Qwen2-VL now?

<!-- gh-comment-id:2484635863 -->

@xlg-go commented on GitHub (Dec 5, 2024):

+1

<!-- gh-comment-id:2519249054 -->

@meloseven commented on GitHub (Dec 5, 2024):

This is an automatic vacation reply from QQ Mail. Hello, I'm currently on vacation and unable to reply to your email in person. I will reply as soon as possible after my vacation ends.

<!-- gh-comment-id:2519249773 -->

@AmanBhanse commented on GitHub (Dec 5, 2024):

+1

<!-- gh-comment-id:2520501353 -->

@wqerrewetw commented on GitHub (Dec 14, 2024):

llama.cpp has added support for Qwen2VL
https://github.com/ggerganov/llama.cpp/pull/10361
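
For reference, that PR also added a dedicated example binary, so Qwen2-VL can be run with llama.cpp directly while waiting on Ollama; a sketch only (file names are illustrative, flags are as of that PR, and the model plus its mmproj vision projector must already be converted to GGUF):

```shell
# Run Qwen2-VL through llama.cpp's example CLI added in PR #10361.
./llama-qwen2vl-cli -m Qwen2-VL-2B-Instruct-Q4_K_M.gguf \
    --mmproj qwen2vl-vision-encoder.gguf \
    --image ./photo.jpg \
    -p "Describe this image."
```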

<!-- gh-comment-id:2543197893 -->

@darkBuddha commented on GitHub (Dec 22, 2024):

Why is Qwen 2 VL not supported?

<!-- gh-comment-id:2558552591 -->

@mason105 commented on GitHub (Jan 5, 2025):

+1

<!-- gh-comment-id:2571510632 -->

@gaojs commented on GitHub (Jan 16, 2025):

+1

<!-- gh-comment-id:2595325353 -->

@libing64 commented on GitHub (Jan 27, 2025):

+1

<!-- gh-comment-id:2615292920 -->

@maxx-marchuk commented on GitHub (Jan 28, 2025):

+1

<!-- gh-comment-id:2619271893 -->

@tm17-abcgen commented on GitHub (Feb 9, 2025):

+1

<!-- gh-comment-id:2646339374 -->

@darkBuddha commented on GitHub (Feb 9, 2025):

+1000

<!-- gh-comment-id:2646340908 -->

@darkBuddha commented on GitHub (Feb 19, 2025):

Qwen2.5-VL is so strong!!

<!-- gh-comment-id:2668387005 -->

@Peter-Dai commented on GitHub (Feb 20, 2025):

+1

<!-- gh-comment-id:2670664531 -->

@yuisheaven commented on GitHub (Feb 21, 2025):

+1

<!-- gh-comment-id:2673863123 -->

@E218PQ commented on GitHub (Feb 26, 2025):

Today Wan2.1, a video generation model with low GPU requirements, was open-sourced. I wonder if Ollama has considered expanding its support to other model types?

<!-- gh-comment-id:2684674455 -->

@anunknowperson commented on GitHub (Feb 26, 2025):

@E218PQ Ollama is a wrapper around llama.cpp, and it's unrelated to non-LLM models.

<!-- gh-comment-id:2685982154 -->

@jmorganca commented on GitHub (May 15, 2025):

Ollama supports Qwen 2.5VL as of 0.7.0: https://github.com/ollama/ollama/releases/tag/v0.7.0 🎉
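
For anyone wiring this up: once the model is pulled (`ollama pull qwen2.5vl`), images go to it through Ollama's `/api/chat` endpoint as base64 strings inside a message's `images` field. A minimal sketch of building such a request body in Python (the endpoint shape follows Ollama's API docs; the image bytes here are a placeholder):

```python
import base64
import json

def build_chat_request(model: str, prompt: str, image_bytes: bytes) -> str:
    """Build the JSON body for a POST to Ollama's /api/chat endpoint.

    Vision models such as qwen2.5vl accept base64-encoded images in the
    "images" field of a user message.
    """
    return json.dumps({
        "model": model,
        "stream": False,  # ask for one complete response instead of a stream
        "messages": [{
            "role": "user",
            "content": prompt,
            "images": [base64.b64encode(image_bytes).decode("ascii")],
        }],
    })

# Placeholder bytes stand in for a real image file read with open(..., "rb")
body = build_chat_request("qwen2.5vl", "Describe this image.", b"\x89PNG fake bytes")
```

POSTing `body` to `http://localhost:11434/api/chat` (with a running Ollama 0.7.0+ server) returns the model's description of the image.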

<!-- gh-comment-id:2885122946 -->

@meloseven commented on GitHub (May 15, 2025):

This is an automatic vacation reply from QQ Mail. Hello, I'm currently on vacation and unable to reply to your email in person. I will reply as soon as possible after my vacation ends.

<!-- gh-comment-id:2885123897 -->
Reference: github-starred/ollama#63794