[GH-ISSUE #13665] Add Qwen3-VL-Embedding (multi-modal embedding) to Ollama #71036

Closed
opened 2026-05-04 23:50:01 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @lyfuci on GitHub (Jan 10, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/13665

Yesterday, Qwen released a new vision-language embedding model, Qwen3-VL-Embedding-8B. It can generate embeddings for both images and text, which makes it very useful for image retrieval / multi-modal search (e.g., image-to-image and text-to-image retrieval) and RAG-style indexing.

Could you please consider adding support for this model in Ollama?
link: https://huggingface.co/Qwen/Qwen3-VL-Embedding-8B
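Since the request is about a shared embedding space for images and text, here is a minimal, hypothetical sketch of the retrieval pattern such support would enable: rank stored image embeddings by cosine similarity against a text-query embedding. The vectors and file names are toy values, not real Qwen3-VL-Embedding outputs, and no Ollama API call is shown because none exists for this model yet.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy stand-ins for embeddings of three indexed images.
image_index = {
    "cat.jpg": [0.9, 0.1, 0.0],
    "dog.jpg": [0.1, 0.9, 0.0],
    "car.jpg": [0.0, 0.1, 0.9],
}

# Toy stand-in for the embedding of a text query like "a photo of a cat".
query = [0.8, 0.2, 0.1]

# Text-to-image retrieval: pick the image closest to the query.
best = max(image_index, key=lambda name: cosine(query, image_index[name]))
print(best)  # -> cat.jpg
```

Image-to-image retrieval would work the same way, with the query vector taken from an image instead of text.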


Reference: github-starred/ollama#71036