[GH-ISSUE #10989] support for qwen3-embedding and qwen3-reranker models #33005

Open
opened 2026-04-22 15:05:56 -05:00 by GiteaMirror · 36 comments
Owner

Originally created by @pamdla on GitHub (Jun 5, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10989

Hi, please add the newly released qwen3-embedding and qwen3-reranker models from https://huggingface.co/Qwen/.

GiteaMirror added the model label 2026-04-22 15:05:56 -05:00

@rick-github commented on GitHub (Jun 5, 2025):

ollama pull hf.co/Qwen/Qwen3-Embedding-0.6B-GGUF:Q8_0

ollama doesn't currently support ranking models, #3368.


@pamdla commented on GitHub (Jun 5, 2025):

Great, I made it work.
However, reranker models are really helpful for RAG; please make a plan for them.
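Until native reranker support lands, one common stopgap (not a true cross-encoder rerank, just bi-encoder scoring) is to embed the query and each candidate passage with an embedding model and sort by cosine similarity. A minimal sketch with toy vectors; the helper names are ours, and the vectors stand in for real embedding output:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank_by_similarity(query_vec, doc_vecs):
    # Return document indices ordered from most to least similar.
    scores = [(cosine(query_vec, d), i) for i, d in enumerate(doc_vecs)]
    return [i for _, i in sorted(scores, reverse=True)]

# Toy vectors; in practice these would come from an embedding endpoint.
query = [1.0, 0.0, 0.5]
docs = [[0.9, 0.1, 0.4],   # close to the query
        [0.0, 1.0, 0.0],   # nearly orthogonal
        [1.0, 0.0, 0.5]]   # identical direction
print(rank_by_similarity(query, docs))  # → [2, 0, 1]
```

This is weaker than a real reranker, which scores the query and passage jointly, but it needs nothing beyond embeddings.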


@chaorders1 commented on GitHub (Jun 6, 2025):

Looking forward to Ollama support. These are the best embedding models. I've been waiting for years.


@softlgl commented on GitHub (Jun 6, 2025):

I also hope qwen3-embedding gets supported.


@hanwsf commented on GitHub (Jun 6, 2025):

Qwen embedding can be supported via a Modelfile:
FROM /root/.ollama/Qwen3-Embedding-8B-Q4_K_M.gguf

SYSTEM """Qwen3-Embedding-8B Chinese embedding model
Vector dimensions: 4096
Context length: 32764
"""

TEMPLATE """{{ .Prompt }}"""
PARAMETER temperature 0


@rick-github commented on GitHub (Jun 6, 2025):

Just import it as shown in https://github.com/ollama/ollama/issues/10989#issuecomment-2946454983.


@wwjCMP commented on GitHub (Jun 7, 2025):

> Just import it as shown in #10989 (comment).

Does ollama support changing the embedding length of embedding model?


@rick-github commented on GitHub (Jun 7, 2025):

No.

https://arxiv.org/abs/1708.03629


@zcuder commented on GitHub (Jun 7, 2025):

@rick-github importing via the above Modelfile doesn't show it as an embedding model; not sure if I need anything else in the Modelfile to make it work as expected.


@rick-github commented on GitHub (Jun 7, 2025):

A model needs to have a `pooling_type` field in the KV metadata in order to have `embedding` as a listed capability. That's part of the GGUF file; there's no Modelfile setting that can affect it. Note that while any model can be used for embedding, models that have `embedding` as a capability are exclusively embedding models. In this case it looks like Qwen just took the embedding part of their general-purpose model and made it available as a separate component. Because it doesn't have all the parameters of a general-purpose model, it runs faster and takes less memory.
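Since the GGUF layout is public, one can check for this key without loading the model. A rough stdlib-only sketch against the documented GGUF v3 header (the exact key name, e.g. `qwen3.pooling_type`, varies by architecture; this is an illustration of the file format, not ollama's code):

```python
import struct

# Byte sizes of scalar GGUF metadata value types (from the GGUF spec).
_SIZES = {0: 1, 1: 1, 2: 2, 3: 2, 4: 4, 5: 4, 6: 4, 7: 1, 10: 8, 11: 8, 12: 8}

def _read_string(f):
    (n,) = struct.unpack("<Q", f.read(8))
    return f.read(n).decode("utf-8")

def _skip_value(f, vtype):
    if vtype in _SIZES:                      # scalar value
        f.read(_SIZES[vtype])
    elif vtype == 8:                         # string: u64 length + bytes
        _read_string(f)
    elif vtype == 9:                         # array: elem type + count + elems
        etype, count = struct.unpack("<IQ", f.read(12))
        for _ in range(count):
            _skip_value(f, etype)
    else:
        raise ValueError(f"unknown GGUF value type {vtype}")

def gguf_metadata_keys(path):
    """Return the list of KV metadata keys in a GGUF file."""
    keys = []
    with open(path, "rb") as f:
        if f.read(4) != b"GGUF":
            raise ValueError("not a GGUF file")
        _version, _tensors, kv_count = struct.unpack("<IQQ", f.read(20))
        for _ in range(kv_count):
            keys.append(_read_string(f))
            (vtype,) = struct.unpack("<I", f.read(4))
            _skip_value(f, vtype)
    return keys

def has_pooling_type(path):
    # Embedding-capable models carry an "<arch>.pooling_type" key.
    return any(k.endswith(".pooling_type") for k in gguf_metadata_keys(path))
```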


@czadikem commented on GitHub (Jun 9, 2025):

Interested in seeing these models as official embedding models in the ollama model library sometime soon.


@GrainArc commented on GitHub (Jun 11, 2025):

What's funny is that previously, developers had implemented support for loading rerank models in Ollama through llama.cpp, but the official team refused to merge the code. Many people have raised the need for rerank model compatibility, but the official team has consistently turned a blind eye to it.


@jessegross commented on GitHub (Jun 11, 2025):

@fmecool The original author never responded to comments on the code.


@yebanliuying commented on GitHub (Jun 12, 2025):

Yep, we need it.


@zcuder commented on GitHub (Jun 12, 2025):

> @fmecool The original author never responded to comments on the code.

It might be that after so much back and forth the author gave up on this PR. Could you guys make another one implementing this?


@Leroy-X commented on GitHub (Jun 18, 2025):

need+1


@charescape commented on GitHub (Jun 19, 2025):

There are very few embedding models available for Ollama.

https://ollama.com/search?c=embedding

Embedding models are the type of models that Ollama needs to support the most.


@GrainArc commented on GitHub (Jun 19, 2025):

Ollama has been compatible with embedding models since version 7.0, and the current focus is on the rerank model.


@charescape commented on GitHub (Jun 19, 2025):

Since both models are widely needed, this issue can be divided into two PRs for Ollama:

  • one adding Qwen3-embedding model support
  • and the other adding Qwen3-reranker model support

@rick-github commented on GitHub (Jun 19, 2025):

qwen3 embedding is already supported.


@qwerty199369 commented on GitHub (Jun 27, 2025):

Rerank model support for Ollama:

  • PR open: https://github.com/ollama/ollama/pull/7219
  • PR closed: https://github.com/ollama/ollama/pull/11156

@khanakia commented on GitHub (Jul 11, 2025):

> qwen3 embedding is already supported.

It's producing garbage, not embeddings. Can you please tell me how you were able to run it?

```
Qwen3-Embedding-8B-GGUF curl http://localhost:11434/api/generate -d '{
  "model": "hf.co/Qwen/Qwen3-Embedding-8B-GGUF:latest",
  "prompt": "hello"
}'
{"model":"hf.co/Qwen/Qwen3-Embedding-8B-GGUF:latest","created_at":"2025-07-11T08:11:44.923137Z","response":"ค","done":false}
{"model":"hf.co/Qwen/Qwen3-Embedding-8B-GGUF:latest","created_at":"2025-07-11T08:11:44.944965Z","response":":\r\n\r\n","done":false}
{"model":"hf.co/Qwen/Qwen3-Embedding-8B-GGUF:latest","created_at":"2025-07-11T08:11:44.966548Z","response":"DAT","done":false}
{"model":"hf.co/Qwen/Qwen3-Embedding-8B-GGUF:latest","created_at":"2025-07-11T08:11:44.988911Z","response":"dat","done":false}
{"model":"hf.co/Qwen/Qwen3-Embedding-8B-GGUF:latest","created_at":"2025-07-11T08:11:45.011855Z","response":"OK","done":false}
{"model":"hf.co/Qwen/Qwen3-Embedding-8B-GGUF:latest","created_at":"2025-07-11T08:11:45.034243Z","response":" day","done":false}
```

@rick-github commented on GitHub (Jul 11, 2025):

Use the /embed endpoint.
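For context, the embed route takes a different payload than /api/generate, which is why the call above streamed gibberish tokens. A hedged Python sketch, assuming the field names (`input` in, `embeddings` out) documented for ollama's /api/embed at the time; the helper names are ours, and a running local server is required to actually call it:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/embed"  # default local endpoint

def build_embed_request(model, texts):
    # /api/embed takes "input" (a string or a list of strings),
    # not the "prompt" field used by /api/generate.
    body = json.dumps({"model": model, "input": texts}).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )

def embed(model, texts):
    """Return one embedding vector per input text (needs a running server)."""
    with urllib.request.urlopen(build_embed_request(model, texts)) as resp:
        return json.loads(resp.read())["embeddings"]
```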


@khanakia commented on GitHub (Jul 11, 2025):

@rick-github thanks, it worked.

Also, one more thing: do you know how I can set the `dimensions`? `Qwen3-Embedding-8B-GGUF` currently generates 4096 dimensions; I want only 1024.


@rick-github commented on GitHub (Jul 11, 2025):

#11213


@devlux76 commented on GitHub (Jul 14, 2025):

> @rick-github thanks, it worked.
>
> Also, one more thing: do you know how I can set the `dimensions`? `Qwen3-Embedding-8B-GGUF` currently generates 4096 dimensions; I want only 1024.

It's MRL, like nomic-embed-text; this means you can just truncate to the size you want, since the most important dimensions come first. I'm getting better results with Qwen3-0.6b embeddings truncated to 32 dimensions than I did with nomic-embed-text at 768 dimensions (or truncated). My use case is legal texts, though, so YMMV.
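For reference, Matryoshka-style (MRL) truncation just keeps the first k components and re-normalizes; how well quality holds up depends on the checkpoint actually having been trained for it, so verify for your model. A small stdlib sketch (the helper name is ours):

```python
import math

def truncate_embedding(vec, k):
    """Keep the first k dimensions and re-normalize to unit length."""
    head = vec[:k]
    norm = math.sqrt(sum(x * x for x in head))
    if norm == 0.0:
        raise ValueError("zero vector after truncation")
    return [x / norm for x in head]

# A 4096-dim vector cut down to 1024 dims stays usable for cosine
# similarity when MRL training has front-loaded the information.
full = [math.sin(i) for i in range(4096)]   # stand-in for a real embedding
small = truncate_embedding(full, 1024)
print(len(small))                           # → 1024
```

Re-normalizing matters because downstream cosine-similarity code often assumes unit vectors.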


@lianzhao commented on GitHub (Jul 15, 2025):

> It's MRL, like nomic-embed-text; this means you can just truncate to the size you want, since the most important dimensions come first. I'm getting better results with Qwen3-0.6b embeddings truncated to 32 dimensions than I did with nomic-embed-text at 768 dimensions (or truncated). My use case is legal texts, though, so YMMV.

Hello @devlux76, I was wondering if you could kindly clarify what exactly you were embedding? Were you embedding the user's query, or the legal documents themselves? Thank you!


@devlux76 commented on GitHub (Jul 16, 2025):

> > It's MRL, like nomic-embed-text; this means you can just truncate to the size you want, since the most important dimensions come first. I'm getting better results with Qwen3-0.6b embeddings truncated to 32 dimensions than I did with nomic-embed-text at 768 dimensions (or truncated). My use case is legal texts, though, so YMMV.
>
> Hello @devlux76, I was wondering if you could kindly clarify what exactly you were embedding? Were you embedding the user's query, or the legal documents themselves? Thank you!

I was embedding the documents for search with a clustering target and then generating embeddings for the query to perform a document search. The key here is that legal case texts have a very regular structure to them and need to cluster by more than simple semantic similarity since the laws of each jurisdiction are different and the overlap isn't very broad.

It's slow but works for the most part.


@kripper commented on GitHub (Aug 26, 2025):

ollama pull https://huggingface.co/Qwen/Qwen3-Embedding-4B-GGUF


@kripper commented on GitHub (Aug 26, 2025):

Maybe still broken?
https://github.com/ggml-org/llama.cpp/pull/14029


@daniporr commented on GitHub (Sep 6, 2025):

The library includes several embedding models, such as https://ollama.com/library/granite-embedding and https://ollama.com/library/embeddinggemma. The latter is also featured in the latest release: https://github.com/ollama/ollama/releases/tag/v0.11.10

Why not include Qwen3 Embedding as well?


@rick-github commented on GitHub (Sep 24, 2025):

Now officially in the main library.

https://ollama.com/library/qwen3-embedding


@ytwytw commented on GitHub (Sep 28, 2025):

Will ollama have reranker support?


@rick-github commented on GitHub (Sep 28, 2025):

https://github.com/ollama/ollama/pull/11389


@PeterWang-dev commented on GitHub (Feb 17, 2026):

Any progress on this issue?


@lyfuci commented on GitHub (Feb 17, 2026):

> Any progress on this issue?

There doesn't seem to be a plan yet.

Reference: github-starred/ollama#33005