[GH-ISSUE #9340] Nomic Embed text v2 #6100

Closed
opened 2026-04-12 17:25:53 -05:00 by GiteaMirror · 21 comments

Originally created by @NickCis on GitHub (Feb 25, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9340

As far as I understand, the `nomic-embed-text` listed on the Ollama website is v1 (v1.5). Nomic released the Nomic Embed Text v2 model a few weeks ago, and according to [their blog](https://www.nomic.ai/blog/posts/nomic-embed-text-v2) there are plans to release it in Ollama (they feature a "Coming Soon" flair). Is there any roadmap or ETA for that?

Thanks in advance!

GiteaMirror added the model label 2026-04-12 17:25:53 -05:00

@marcelMaier commented on GitHub (Feb 27, 2025):

Would be great to have this model; it seems to be much better for languages other than English 🚀


@tianlichunhong commented on GitHub (Mar 21, 2025):

I am waiting for nomic-embed-text-v2-moe on ollama.com. Thank you!


@RubenMercadePrieto commented on GitHub (Apr 1, 2025):

Same here, thanks!


@tholu commented on GitHub (Apr 3, 2025):

I would love to see it included as well. https://huggingface.co/nomic-ai/nomic-embed-text-v2-moe/discussions/17#67edbac2dd0e05a56a767a23


@AndriyMulyar commented on GitHub (Apr 3, 2025):

Nomic would like to include it as well!


@NickCis commented on GitHub (Apr 3, 2025):

> Nomic would like to include it as well!

Is there anything we can do to contribute it? I've tried implementing it using [ollama's guide](https://github.com/ollama/ollama/blob/main/docs/import.md), but it throws an unsupported architecture error.


@marcelMaier commented on GitHub (Apr 11, 2025):

It's sad to see that it's taking so long; in my opinion, we really need Nomic as the first model with good multilingual support.


@Terramoto commented on GitHub (Apr 24, 2025):

It needs to be supported by llama.cpp before it can be added to Ollama:
https://github.com/ggml-org/llama.cpp/pull/12466


@AndriyMulyar commented on GitHub (Apr 28, 2025):

Nomic has implemented support in llama.cpp, so Ollama can now add support for it if they choose!


@AndriyMulyar commented on GitHub (Apr 28, 2025):

The eagle has landed in master: https://github.com/ggml-org/llama.cpp/commit/5f5e39e1ba5dbea814e41f2a15e035d749a520bc

Enjoy searching over 100+ languages!

```python
import requests

def dot(va, vb):
    # dot-product similarity between two embedding vectors
    return sum(a*b for a, b in zip(va, vb))

def embed(texts):
    # llama-server's OpenAI-compatible embeddings endpoint
    resp = requests.post('http://localhost:8080/v1/embeddings', json=dict(input=texts)).json()
    return [d['embedding'] for d in resp['data']]

docs = ['嵌入很酷', '骆驼很酷']  # 'embeddings are cool', 'llamas are cool'
docs_embed = embed(['search_document: '+d for d in docs])

query = '跟我讲讲嵌入'  # 'tell me about embeddings'
query_embed = embed(['search_query: '+query])[0]
print(f'query: {query!r}')
for d, e in zip(docs, docs_embed):
    print(f'similarity {dot(query_embed, e):.2f}: {d!r}')
```

Output:

```
query: '跟我讲讲嵌入'
similarity 0.48: '嵌入很酷'
similarity 0.19: '骆驼很酷'
```

You can thank [cebtenzzre](https://github.com/cebtenzzre) at Nomic for doing the hard kernel work!
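A plain dot product like the one above behaves as cosine similarity only when the server returns L2-normalized vectors. A minimal sketch that removes that assumption (`cosine` is an illustrative helper, not part of the snippet above):

```python
from math import sqrt

def cosine(va, vb):
    """Cosine similarity; equal to the plain dot product only
    when both vectors are already L2-normalized."""
    dot = sum(a * b for a, b in zip(va, vb))
    norm_a = sqrt(sum(a * a for a in va))
    norm_b = sqrt(sum(b * b for b in vb))
    return dot / (norm_a * norm_b)

print(cosine([3.0, 4.0], [3.0, 4.0]))  # 1.0 regardless of vector length
print(cosine([1.0, 0.0], [0.0, 1.0]))  # 0.0 for orthogonal vectors
```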


@marcelMaier commented on GitHub (Apr 28, 2025):

Nice! Does anyone know if this will automatically be supported in the next Ollama version, or is there something we can do to help with the implementation?


@rick-github commented on GitHub (May 4, 2025):

This model will technically be supported in 0.6.8, but the embeddings are not consistent, so it seems to need more work.

```console
$ curl -s localhost:11434/api/embed -d '{"model":"hf.co/nomic-ai/nomic-embed-text-v2-moe-gguf","input":["嵌入很酷", "骆驼很酷", "why is the sky blue?"]}' | jq -c '.embeddings[]|.[0:3] + ["..."] + .[-3:]'
[-0.02150792,-0.013614742,-0.11065522,"...",0.041796185,-0.07860671,-0.005676387]
[-0.021249399,-0.015446228,-0.094652824,"...",0.017876133,-0.07958552,-0.021361088]
[-0.036460813,0.030190263,-0.11428562,"...",0.011665087,-0.02603364,0.0015948968]
$ curl -s localhost:11434/api/embed -d '{"model":"hf.co/nomic-ai/nomic-embed-text-v2-moe-gguf","input":["嵌入很酷", "骆驼很酷", "why is the sky blue?"]}' | jq -c '.embeddings[]|.[0:3] + ["..."] + .[-3:]'
[-0.024498621,-0.012290989,-0.10965817,"...",0.03894496,-0.0761127,-0.0071863136]
[-0.02111761,-0.012408579,-0.103786476,"...",0.01648188,-0.07950529,-0.007817455]
[-0.029640654,0.033208888,-0.10780826,"...",0.021175131,-0.031900737,-0.002215559]
```
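The run-to-run drift shown above can be quantified with a short check. A sketch, where `max_abs_diff` is a hypothetical helper and the sample values are the leading components of the first embedding from the two runs:

```python
def max_abs_diff(run_a, run_b):
    """Largest elementwise difference between two runs of embeddings
    for the same inputs; should be ~0 for a deterministic backend."""
    return max(
        abs(a - b)
        for emb_a, emb_b in zip(run_a, run_b)
        for a, b in zip(emb_a, emb_b)
    )

# Leading components of the first embedding from the two runs above:
run_a = [[-0.02150792, -0.013614742, -0.11065522]]
run_b = [[-0.024498621, -0.012290989, -0.10965817]]
print(f"{max_abs_diff(run_a, run_b):.4f}")  # 0.0030 -- clearly nonzero
```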

@rick-github commented on GitHub (May 4, 2025):

I built llama.cpp from the same commit (https://github.com/ggml-org/llama.cpp/commit/e1e8e099) that ollama is using, and used the GGUF file from the ollama model:

```console
$ for i in {1..5} ; do ./build/bin/llama-embedding  -e -p "嵌入很酷" --verbose-prompt -ngl 99 -m /root/.ollama/models/blobs/sha256-4117e4fa8f2907418026f7ffed5e3f1151dd3a56c34beabb6d51feb433efc4a4 2>/dev/null | grep "^embedding" | cut -c1-150 ; done
embedding 0: -0.029581  0.002231 -0.105505 -0.038347 -0.019517  0.008074  0.017749  0.054689  0.012325 -0.040079 -0.009443 -0.065497 -0.021140 -0.0408
embedding 0: -0.029581  0.002231 -0.105505 -0.038347 -0.019517  0.008074  0.017749  0.054689  0.012325 -0.040079 -0.009443 -0.065497 -0.021140 -0.0408
embedding 0: -0.029581  0.002231 -0.105505 -0.038347 -0.019517  0.008074  0.017749  0.054689  0.012325 -0.040079 -0.009443 -0.065497 -0.021140 -0.0408
embedding 0: -0.029581  0.002231 -0.105505 -0.038347 -0.019517  0.008074  0.017749  0.054689  0.012325 -0.040079 -0.009443 -0.065497 -0.021140 -0.0408
embedding 0: -0.029581  0.002231 -0.105505 -0.038347 -0.019517  0.008074  0.017749  0.054689  0.012325 -0.040079 -0.009443 -0.065497 -0.021140 -0.0408
```

So this looks like an ollama problem, not a llama.cpp or model problem.


@rick-github commented on GitHub (May 14, 2025):

This appears to be fixed as of 0.7.0-rc1:

```console
$ for i in {1..5} ; do curl -s localhost:11434/api/embed -d '{"model":"hf.co/nomic-ai/nomic-embed-text-v2-moe-gguf","input":["why is the sky blue?"]}' | jq -c '.embeddings[]|.[0:3] + ["..."] + .[-3:]' ; done
[-0.028988255,0.014166402,-0.11155555,"...",0.0054887156,-0.038328182,0.0059419377]
[-0.028988255,0.014166402,-0.11155555,"...",0.0054887156,-0.038328182,0.0059419377]
[-0.028988255,0.014166402,-0.11155555,"...",0.0054887156,-0.038328182,0.0059419377]
[-0.028988255,0.014166402,-0.11155555,"...",0.0054887156,-0.038328182,0.0059419377]
[-0.028988255,0.014166402,-0.11155555,"...",0.0054887156,-0.038328182,0.0059419377]
```

@marcelMaier commented on GitHub (May 14, 2025):

Can confirm it too. I think we can close this then? @NickCis


@tianlichunhong commented on GitHub (May 16, 2025):

Can someone tell me how to use this model with Ollama? I still can't find it on ollama.com. Thanks!


@rick-github commented on GitHub (May 16, 2025):

```
ollama pull hf.co/nomic-ai/nomic-embed-text-v2-moe-gguf
```

@marcelMaier commented on GitHub (May 16, 2025):

@tianlichunhong You can get it directly from Hugging Face: `ollama pull hf.co/nomic-ai/nomic-embed-text-v2-moe-gguf`

Be aware that the model name will be that long afterwards too, and that you have to use the full name.
A workaround would be to create a Modelfile to rename it / give it an alias.


@rick-github commented on GitHub (May 16, 2025):

> A workaround would be to create a Modelfile to rename it / give it an alias.

```
ollama cp hf.co/nomic-ai/nomic-embed-text-v2-moe-gguf nomic-embed-text-v2
```

@robo3945 commented on GitHub (Jun 22, 2025):

> > A workaround would be to create a Modelfile to rename it / give it an alias.
>
> ```
> ollama cp hf.co/nomic-ai/nomic-embed-text-v2-moe-gguf nomic-embed-text-v2
> ```

It doesn't work, at least for me :)

I receive this error: `Error: model "https://huggingface.co/nomic-ai/nomic-embed-text-v2-moe-GGUF" not found`

I'm using Ollama version 0.9.2.


@rick-github commented on GitHub (Jun 22, 2025):

You need to pull it before you can create an alias for it.
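The pull-then-alias sequence can also be driven through Ollama's HTTP API instead of the CLI. A minimal sketch, assuming a default local server on port 11434 (`api_request` and `pull_and_alias` are illustrative helpers; nothing is sent unless `pull_and_alias` is called):

```python
import json
import urllib.request

OLLAMA = "http://localhost:11434"  # assumes a default local Ollama server

def api_request(path, body):
    """Build a JSON POST request for the Ollama API (not sent here)."""
    return urllib.request.Request(
        OLLAMA + path,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )

def pull_and_alias(source, alias):
    """API-side equivalent of `ollama pull` followed by `ollama cp`:
    the pull must complete before the copy can find the source model."""
    for path, body in [
        ("/api/pull", {"model": source}),
        ("/api/copy", {"source": source, "destination": alias}),
    ]:
        with urllib.request.urlopen(api_request(path, body)) as resp:
            resp.read()  # the pull streams progress JSON; drain it

src = "hf.co/nomic-ai/nomic-embed-text-v2-moe-gguf"
req = api_request("/api/copy", {"source": src, "destination": "nomic-embed-text-v2"})
print(req.full_url)  # http://localhost:11434/api/copy
# pull_and_alias(src, "nomic-embed-text-v2")  # needs a running server
```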

Reference: github-starred/ollama#6100