[GH-ISSUE #4207] mxbai-embed-large embedding not consistent with original paper #64658

Closed
opened 2026-05-03 18:27:13 -05:00 by GiteaMirror · 15 comments
Owner

Originally created by @deadbeef84 on GitHub (May 6, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4207

What is the issue?

I'm trying to use embeddings from mxbai-embed-large to create a similarity/semantic search functionality, but the quality of the embeddings coming from ollama doesn't seem to be very good.

I've tried replicating the numbers from the original blog post:

```js
import { Ollama } from 'ollama'
import cosineSimilarity from 'compute-cosine-similarity'

const ollama = new Ollama({ host: 'http://127.0.0.1:11434' })

const docs = [
  'Represent this sentence for searching relevant passages: A man is eating a piece of bread',
  'A man is eating food.',
  'A man is eating pasta.',
  'The girl is carrying a baby.',
  'A man is riding a horse.',
]

const [queryEmbedding, ...embeddings] = await Promise.all(
  docs.map(
    async (doc) => (await ollama.embeddings({ model: 'mxbai-embed-large', prompt: doc })).embedding
  )
)

const similarities = embeddings.map((e) => cosineSimilarity(queryEmbedding, e))
console.log(similarities)
```

```js
[
  0.6231103528590645,
  0.6258446589848462,
  0.5631986516911313,
  0.5891047395895846
]
```

Those numbers are nowhere close to the original numbers, and if I compare the embedding vectors they are completely different.

The JavaScript implementation on the Hugging Face model card produces the same numbers as the original post.
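For anyone comparing vectors by hand: cosine similarity is just the dot product of the two vectors divided by the product of their norms, so it can be checked without the `compute-cosine-similarity` package. A minimal Python sketch (illustrative only, not tied to any Ollama client):

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|); 1.0 for identical directions, 0.0 for orthogonal vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```

Because cosine similarity ignores vector magnitude, two backends can disagree on it only if the embedding *directions* differ, which is what this issue demonstrates.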

OS

Linux, Docker

GPU

Nvidia

CPU

AMD

Ollama version

0.1.33


@deadbeef84 commented on GitHub (May 6, 2024):

Same thing with Snowflake:

Ollama using `snowflake-arctic-embed:137m-m-long-fp16`

```
Query: Represent this sentence for searching relevant passages: A man is eating a piece of bread
0.408 A man is eating food.
0.368 A man is riding a horse.
0.353 A man is eating pasta.
0.259 The girl is carrying a baby.
```

`@xenova/transformers` and `Snowflake/snowflake-arctic-embed-m-long`

```
Query: Represent this sentence for searching relevant passages: A man is eating a piece of bread
0.581 A man is eating food.
0.538 A man is eating pasta.
0.471 A man is riding a horse.
0.375 The girl is carrying a baby.
```

I expected these to give the same results.


@deadbeef84 commented on GitHub (May 7, 2024):

I've now also verified that the embeddings generated by https://github.com/ggerganov/llama.cpp/tree/master/examples/embedding are correct and consistent with the blog post:

```sh
./embedding --model ./models/mxbai-embed-large/mxbai-embed-large-v1-f16.gguf --prompt $'Represent this sentence for searching relevant passages: A man is eating a piece of bread\nA man is eating food.\nA man is eating pasta.\nThe girl is carrying a baby.\nA man is riding a horse.'
```

Output:

```
embedding 0:  0.031844 -0.020246  0.003061  0.025761 -0.030529  0.007648 -0.003402 -0.006877  0.003626  0.005590  0.021032 -0.048852  0.050770 -0.010658 -0.042844 -0.014537 
embedding 1:  0.018362 -0.016959 -0.009913 -0.000620 -0.031476 -0.012503 -0.004979  0.036731 -0.004214  0.031309  0.030365 -0.014224  0.038043 -0.029713 -0.049113  0.000813 
embedding 2:  0.011478 -0.011224 -0.008358  0.031598 -0.008998 -0.023611 -0.009947  0.029237 -0.000569  0.029407  0.044036 -0.003409  0.034929 -0.028693 -0.053001  0.002418 
embedding 3: -0.025487  0.045029 -0.005886 -0.025535  0.006403  0.000159 -0.009435  0.026796  0.023252  0.004105 -0.019179 -0.007933 -0.007297 -0.007150  0.016169  0.043604 
embedding 4:  0.028173  0.013244  0.045796 -0.018567  0.014471 -0.002285  0.029447  0.018477  0.046593  0.005216  0.031499 -0.007253 -0.030249  0.025316  0.050654 -0.006526 

cosine similarity matrix:

  1.00   0.79   0.64   0.16   0.36 
  0.79   1.00   0.79   0.13   0.38 
  0.64   0.79   1.00   0.17   0.33 
  0.16   0.13   0.17   1.00   0.13 
  0.36   0.38   0.33   0.13   1.00 
```
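A pairwise matrix like llama.cpp's can be rebuilt from any set of embedding vectors. A small Python sketch with made-up 3-dimensional stand-ins (the real mxbai-embed-large vectors have 1024 dimensions):

```python
import math

def cosine(a, b):
    # normalized dot product of two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# toy stand-ins for the five embeddings above (hypothetical values)
vectors = [
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1],
    [0.7, 0.3, 0.0],
    [0.0, 0.1, 0.9],
    [0.3, 0.8, 0.2],
]

# print a symmetric matrix with 1.00 on the diagonal, like the llama.cpp output
for a in vectors:
    print(' '.join(f'{cosine(a, b):6.2f}' for b in vectors))
```

The diagonal is always 1.00 (every vector is identical to itself) and the matrix is symmetric, which is a quick sanity check on any embedding backend.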

@deadbeef84 commented on GitHub (May 7, 2024):

I've now also confirmed the issue is still happening in ollama v0.1.34


@deadbeef84 commented on GitHub (May 10, 2024):

Perhaps related: https://github.com/ollama/ollama/issues/3777


@ActionsPerMinute commented on GitHub (May 11, 2024):

I also have this issue on v0.1.35. Following the blog post with mxbai-embed-large gives the wrong results; switching to other embedding models gives the correct results.


@VMinB12 commented on GitHub (May 11, 2024):

I can also confirm that Ollama embeddings for snowflake-arctic-embed:137m-m-long-fp16 are not behaving as expected. I set up a synthetic benchmark for internal testing. I take 500 articles and use an LLM to generate a question for each article. Then I retrieve based on a given question and check if the top retrieved article matches the article that was used to generate the question.
I'm using LangChain and get the following results:

```
# embeddings = OpenAIEmbeddings(model="text-embedding-3-small") # 0.8709677419354839
# embeddings = OpenAIEmbeddings(model="text-embedding-3-large") # 0.8951612903225806
# embeddings = OllamaEmbeddings(model="nomic-embed-text")  # 0.7842741935483871
# embeddings = OllamaEmbeddings(
#    model="nomic-embed-text",
#    query_instruction="",
#    embed_instruction="",
#    num_ctx=8192,
#    temperature=0,
#)  # 0.8185483870967742
# embeddings = OllamaEmbeddings(model="mxbai-embed-large")  # 0.6653225806451613
# embeddings = OllamaEmbeddings(model="all-minilm")  # 0.45564516129032256
# embeddings = OllamaEmbeddings(model="snowflake-arctic-embed")  # 0.14516129032258066
# embeddings = OllamaEmbeddings(
#     model="snowflake-arctic-embed:137m-m-long-fp16",
#     query_instruction="",
#     embed_instruction="",
#     num_ctx=8192,
#     temperature=0,
# )  # 0.06854838709677419
# OllamaEmbeddings(model="snowflake-arctic-embed:137m-m-long-fp16") # 0.07661290322580645
```

I'm not sure if the result from nomic-embed-text is in alignment with expectations. If so, it could be an indication that the problem is not with Ollama itself, but rather the model weights of the snowflake and mxbai models.

Edit: My ollama version is 0.1.28
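The benchmark described in this comment (one generated question per article, scored by whether the top retrieved article is the question's source article) reduces to an argmax over a similarity matrix. A hedged Python sketch with made-up 2-dimensional vectors, purely to illustrate the scoring:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def top1_accuracy(question_vecs, article_vecs):
    # question i counts as a hit iff its most similar article is article i
    hits = 0
    for i, q in enumerate(question_vecs):
        sims = [cosine(q, a) for a in article_vecs]
        if max(range(len(sims)), key=sims.__getitem__) == i:
            hits += 1
    return hits / len(question_vecs)

# toy data: questions 0 and 1 retrieve their own article; question 2 lands on the wrong one
questions = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]
articles  = [[1.0, 0.1], [0.1, 1.0], [0.5, 0.5]]
print(top1_accuracy(questions, articles))  # 0.6666666666666666
```

With embeddings as degraded as the snowflake numbers above, most questions land on the wrong article, which is exactly what an accuracy around 0.07 indicates.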


@deadbeef84 commented on GitHub (May 13, 2024):

Similarities after PR #4399:

`mxbai-embed-large`

```
Query: Represent this sentence for searching relevant passages: A man is eating a piece of bread
0.791 A man is eating food.
0.636 A man is eating pasta.
0.360 A man is riding a horse.
0.163 The girl is carrying a baby.
```

`snowflake-arctic-embed:137m-m-long-fp16`

```
Query: Represent this sentence for searching relevant passages: A man is eating a piece of bread
0.581 A man is eating food.
0.538 A man is eating pasta.
0.471 A man is riding a horse.
0.375 The girl is carrying a baby.
```

@fredrik-smedberg commented on GitHub (May 14, 2024):

Thanks @deadbeef84 for creating this issue. I've been scratching my head over the weekend, switched to different embedding models and got very odd results. Good to know I wasn't crazy and that Ollama actually was a bit broken. I'll update Ollama and see if my ChromaDB <-> Ollama experiment runs better.

Update: I downloaded and built https://github.com/ollama/ollama/pull/4399. I can confirm it indeed fixes the obvious issues I had when doing embedding and queries with the `mxbai-embed-large` and `nomic-embed-text` models.
I can't tell if the PR possibly introduces other issues, but it definitely solved the embedding problem.


@hazelwolf commented on GitHub (May 18, 2024):

@deadbeef84 thanks for the fix, this solves the issue.

Edit: This build works fine for embeddings; however, I'm seeing a serious lag when running `ollama run` directly for normal chat prompts. Could be just me. @deadbeef84


@moracca commented on GitHub (Jun 6, 2024):

@jmorganca have you seen this issue and the related PR? Wondering if you have any thoughts on the proposed fix and what we might be able to do to gain some traction on the issue, since as it stands the embeddings generated by ollama are not accurate, breaking most RAG applications. Thanks!


@jeugregg commented on GitHub (Jun 7, 2024):

I have the same issue. I would like to know when the fix can be merged.


@jeugregg commented on GitHub (Jun 12, 2024):

I have tried with Ollama server version 0.1.43 and ollama-python version 0.2.1, running the same test as @deadbeef84, before and after the fix, and it is still not working: same wrong results.
Is that expected? Has this fix been merged? I still don't get good results with it.
@jmorganca, could you re-test @deadbeef84's example please?


@jeugregg commented on GitHub (Jul 17, 2024):

Actually, for my issue, it works with ollama-python 0.2.1 and Ollama 0.2.5, but not with LangChain!
So the issue is with LangChain. Sorry.


@fredrik-smedberg commented on GitHub (Jul 17, 2024):

> Actually, for my issue, it is working with ollama-python 0.2.1 and ollama version 0.2.5 but not with langchain ! So the issue is with langchain. sorry.

Maybe you've forgotten to begin your query prompt with "Represent this sentence for searching relevant passages:"
If you don't, the mxbai model will not work for retrieval. Source: https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1


@jeugregg commented on GitHub (Jul 18, 2024):

Yes, but no. I have reproduced the original blog post test in Python.

It works with:

- LangChain + Llamafile
- Ollama directly

BUT NOT with:

- LangChain + Ollama

I understand now: by default, LangChain prepends instruction prefixes:

- for embedding docs: `embed_instruction = "passage: "`
- for queries: `query_instruction = "query: "`

But the `mxbai-embed-large` blog says to use:

- for embedding docs: `embed_instruction = ""`
- for queries: `query_instruction = "Represent this sentence for searching relevant passages: "`

With those settings it works well.
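The prefixing behavior described above can be illustrated without LangChain or a running Ollama server: the instruction string is simply concatenated in front of the text before it is embedded. A sketch (the `build_prompt` helper is hypothetical; the instruction strings are the ones reported in this thread):

```python
def build_prompt(instruction, text):
    # illustrative helper: the embedding client prepends the instruction string verbatim
    return instruction + text

query = 'A man is eating a piece of bread'
passage = 'A man is eating food.'

# LangChain's reported defaults
lc_query = build_prompt('query: ', query)
lc_passage = build_prompt('passage: ', passage)

# what the mxbai-embed-large model card expects
mxbai_query = build_prompt('Represent this sentence for searching relevant passages: ', query)
mxbai_passage = build_prompt('', passage)

print(lc_query)     # 'query: A man is eating a piece of bread'
print(mxbai_query)  # 'Represent this sentence for searching relevant passages: A man is eating a piece of bread'
```

Because instruction-tuned embedding models are sensitive to the exact prefix they were trained with, embedding `'query: …'` instead of the mxbai retrieval instruction shifts every query vector and degrades similarity scores even when the model itself is working correctly.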

Reference: github-starred/ollama#64658