[GH-ISSUE #14449] Inconsistent embeddings across Ollama versions (nomic-embed-text) #35144

Open
opened 2026-04-22 19:25:59 -05:00 by GiteaMirror · 10 comments
Owner

Originally created by @balaji-2k1 on GitHub (Feb 26, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14449

What is the issue?

I’m seeing inconsistent embedding outputs when using nomic-embed-text across different Ollama versions, and I’m not sure if this is expected behavior or not.

I tested the same prompt on:

  • Ollama v0.4.6

  • Ollama v0.17.0

Using the exact same model name: nomic-embed-text

Prompt used
"What is quantum computing and its applications in energy industry and AI data centres"

I also tested cosine similarity across prompts of different lengths.

What I’m observing between both versions:

Short prompts → high similarity (expected)

Medium prompts → reasonable similarity

Longer prompts → similarity drops significantly

This happens in both versions, but the similarity values for the same prompt differ between versions.
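The cross-version comparison described above can be reproduced with a short script. This is a minimal sketch (not the reporter's actual test code); it uses the truncated 5-value prefixes from the log output below purely for illustration, since a real comparison needs the full embedding vectors:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Truncated 5-value prefixes from the log output below (illustration only).
v046 = [-0.2342112511396408, 1.9663547277450562, -3.5098204612731934,
        -1.2520712614059448, 1.1139509677886963]
v0170 = [-0.09186548739671707, 1.5818653106689453, -3.7861673831939697,
         -1.3896721601486206, 1.293436884880066]

print(cosine_similarity(v046, v0170))
```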

Relevant log output

First few embedding values

v0.4.6

[
  -0.2342112511396408,
  1.9663547277450562,
  -3.5098204612731934,
  -1.2520712614059448,
  1.1139509677886963
]

v0.17.0

[
  -0.09186548739671707,
  1.5818653106689453,
  -3.7861673831939697,
  -1.3896721601486206,
  1.293436884880066
]

OS

Linux

GPU

Nvidia

CPU

Other, AMD, Intel

Ollama version

v0.4.6, v0.17.0

GiteaMirror added the needs more info and bug labels 2026-04-22 19:25:59 -05:00
Author
Owner

@rick-github commented on GitHub (Feb 26, 2026):

Longer prompts → similarity drops significantly

How long is long?

Author
Owner

@holzman commented on GitHub (Mar 3, 2026):

Are the embedding outputs closer if it's all lowercase? A bug was introduced in v0.13.1:

https://github.com/ollama/ollama/issues/13942

Author
Owner

@balaji-2k1 commented on GitHub (Mar 4, 2026):

Longer prompts → similarity drops significantly

How long is long?

More than 20-30 words.

Author
Owner

@balaji-2k1 commented on GitHub (Mar 4, 2026):

Are the embedding outputs closer if it's all lowercase? A bug was introduced in v0.13.1:

Still, the embeddings are not closer no matter what the word's case is.

Author
Owner

@rick-github commented on GitHub (Mar 4, 2026):

More than 20 - 30 words .

Can you provide an example? [Server logs](https://docs.ollama.com/troubleshooting) may also help.

Author
Owner

@rick-github commented on GitHub (Mar 4, 2026):

Changes to embedding output align with vendor syncs for llama.cpp, except for 0.13.1, where the engine used for nomic-embed-text was switched to ollama, with the resulting difference in case handling found by @holzman.

| version | lowercase | uppercase | PR |
| -- | -- | -- | -- |
| 0.4.6 | 0.040255778 | 0.040255778 | |
| 0.5.1 | 0.040255778 | 0.040255778 | |
| 0.5.2 | 0.040240098 | 0.040240098 | #7875 |
| 0.5.12 | 0.040240098 | 0.040240098 | |
| 0.5.13 | 0.040219806 | 0.040219806 | #9356 |
| 0.6.0 | 0.040219806 | 0.040219806 | |
| 0.11.4 | 0.040219806 | 0.040219806 | |
| 0.11.5 | 0.040258233 | 0.040258233 | #11823 |
| 0.12.6 | 0.040258233 | 0.040258233 | |
| 0.13.0 | 0.040258233 | 0.040258233 | |
| 0.13.1 | 0.04025823 | -0.0014289077 | #13144 |
| 0.17.6 | 0.04025823 | -0.0014289077 | |
Author
Owner

@balaji-2k1 commented on GitHub (Mar 5, 2026):

Thanks for the clarification.

I ran a few more tests to better illustrate the issue across versions. Below are some examples comparing v0.4.6 and v0.17.0 using the same nomic-embed-text model.

Small prompt test

v0.4.6

Prompt: hi

[
  0.3898167610168457,
  0.4418864846229553,
  -4.295653343200684,
  -0.05946348235011101,
  0.4995376169681549
]

v0.17.0

[
  0.3934938311576843,
  0.44502389430999756,
  -4.293195724487305,
  -0.05641648918390274,
  0.4991135001182556
]

The difference here is minimal. Cosine similarity between these two embeddings is around 0.9999, which is expected.

Medium length prompt

v0.4.6
Prompt: How was it possible that every single person in an airplane crash died but two people survived

[
  1.5005958080291748,
  0.019989021122455597,
  -3.5046772956848145,
  0.8304932117462158,
  1.066129207611084
]

v0.17.0

[
  1.346938967704773,
  -0.19935527443885803,
  -3.535060405731201,
  0.8173263669013977,
  0.9273759722709656
]

Here the cosine similarity between embeddings for the same prompt across versions drops to ~0.96.

Long document test

I also tried embedding a longer document (~1500+ words).
Example topic: explanation of graph databases and use cases.

First few values:

v0.4.6

[0.5222324728965759,
1.4255118370056152,
-2.6311392784118652,
-0.2457878589630127,
0.5322821736335754]

v0.17.0

[0.6682866811752319,
1.1035101413726807,
-2.7586541175842285,
-0.47445690631866455,
0.8571723699569702]

Cosine similarity between these embeddings drops further to around 0.84.

I observed similar behavior from around Ollama v0.11.x onward, and it does not seem related to text casing.

Author
Owner

@balaji-2k1 commented on GitHub (Mar 5, 2026):

Changes to embedding output align with vendor syncs for llama.cpp, except for 0.13.1, where the engine used for nomic-embed-text was switched to ollama, with the resulting difference in case handling found by @holzman.

| version | lowercase | uppercase | PR |
| -- | -- | -- | -- |
| 0.4.6 | 0.040255778 | 0.040255778 | |
| 0.5.1 | 0.040255778 | 0.040255778 | |
| 0.5.2 | 0.040240098 | 0.040240098 | #7875 |
| 0.5.12 | 0.040240098 | 0.040240098 | |
| 0.5.13 | 0.040219806 | 0.040219806 | #9356 |
| 0.6.0 | 0.040219806 | 0.040219806 | |
| 0.11.4 | 0.040219806 | 0.040219806 | |
| 0.11.5 | 0.040258233 | 0.040258233 | #11823 |
| 0.12.6 | 0.040258233 | 0.040258233 | |
| 0.13.0 | 0.040258233 | 0.040258233 | |
| 0.13.1 | 0.04025823 | -0.0014289077 | #13144 |
| 0.17.6 | 0.04025823 | -0.0014289077 | |

More than 20-30 words.

Can you provide an example? Server logs may also help.

The logs show that v0.4.6 runs through the llama.cpp CUDA runner (GPU offload), while v0.17.x runs through the newer ollama engine in my setup. That might explain the slight differences in embeddings across versions, especially for longer inputs.

Author
Owner

@holzman commented on GitHub (Mar 5, 2026):

I tried to reproduce your result running 0.4.6 and 0.17.0 in parallel (using the official docker containers).

AFAICT the drop in cosine similarity was due to the first word being uppercase:

$ echo "How was it possible that every single person in an airplane crash died but two people survived" | python3 ./embedit.py
Cosine similarity:  0.9660789146817451

$ echo "how was it possible that every single person in an airplane crash died but two people survived" | python3 ./embedit.py
Cosine similarity:  0.9999961804566854
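The `embedit.py` helper itself is not shown in the thread. A plausible minimal reconstruction (an assumption, not the actual script) would read a prompt from stdin, request embeddings from two Ollama servers via the `/api/embeddings` endpoint, and print the cosine similarity. The port assignments here are hypothetical:

```python
import json
import math
import sys
import urllib.request

def embed(base_url, model, prompt):
    """Fetch an embedding from an Ollama server's /api/embeddings endpoint."""
    req = urllib.request.Request(
        f"{base_url}/api/embeddings",
        data=json.dumps({"model": model, "prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

if __name__ == "__main__":
    prompt = sys.stdin.read().strip()
    # Hypothetical setup: v0.4.6 listening on 11434, v0.17.0 on 11435.
    a = embed("http://localhost:11434", "nomic-embed-text", prompt)
    b = embed("http://localhost:11435", "nomic-embed-text", prompt)
    print("Cosine similarity: ", cosine(a, b))
```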
Author
Owner

@holzman commented on GitHub (Mar 5, 2026):

And another data point: I just ran a longer prompt, all lower-case, and saw huge differences between 0.4.6 and 0.17.0:

$ cat /tmp/g | python3 ./embedit.py
[-0.00482436  0.05833034 -0.11684171 -0.09922026  0.06665643]  # v0.4.6
[ 0.03202297  0.09417119 -0.14601417 -0.03951154  0.04100695]  # v0.17.0
Cosine similarity:  0.5716200156185305

But then I compared v0.17.0 to HuggingFace's text-embedding-inference endpoint (ghcr.io/huggingface/text-embeddings-inference:1.8 running nomic/nomic-embed-text-v1.5):

$ cat /tmp/g | python3 ./embed2.py 
[ 0.02960969  0.09581322 -0.1472252  -0.04466681  0.04116145] # t-e-i
[ 0.03202297  0.09417119 -0.14601417 -0.03951154  0.04100695] # ollama v0.17.0
Cosine similarity:  0.9957772449200707

That implies that there were bugs in the older implementation.
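Note that the values printed here are an order of magnitude smaller than the raw API outputs quoted earlier in the thread, which suggests the script L2-normalizes the vectors before printing. That does not affect the comparison, since cosine similarity is invariant to per-vector scaling; a quick check (a sketch, not part of the thread's scripts) confirms this:

```python
import math
import random

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def l2_normalize(v):
    """Scale a vector to unit length."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

# Two random 768-dimensional vectors stand in for embeddings.
random.seed(0)
a = [random.gauss(0, 1) for _ in range(768)]
b = [random.gauss(0, 1) for _ in range(768)]

# Cosine similarity is identical before and after normalization,
# up to floating-point rounding.
print(cosine(a, b), cosine(l2_normalize(a), l2_normalize(b)))
```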

prompt in question (just gibberish created by a model):

explanation of graph databases and their use cases (approximately 1500 words)

1. introduction  

in today’s data‑centric world the choice of storage model directly influences performance, scalability, and the ability to gain insight.
relational databases have been the default for decades, but many modern problems involve data that is highly interconnected, where relationships are as important as the entities themselves. graph databases address this by treating relationships as first‑class citizens. they store data as nodes (entities), edges (relationships), and properties (attributes attached to either). this model mirrors many real‑world domains such as social networks, recommendation systems, fraud detection, knowledge graphs, and more. this document explains the fundamentals of graph databases, how they differ from other storage models, core concepts and query languages, and a broad set of real‑world use cases. by the end you should understand when and why to choose a graph database over a traditional relational system and be aware of the leading technologies available today.

2. core concepts  

2.1 nodes, edges, and properties  

element      description                                          example  
-----------  ----------------------------------------------------  ----------------------------------------------  
node          represents an entity or object, can have a label      (:person {name:''alice'', age:34})  
               and a set of properties.  

edge          connects two nodes, has a direction, a type (label)   (:person)-[:friend_of {since:2015}]->(:person)  
               and optionally properties.  

property      a key‑value pair stored on a node or edge. values can  city:''seattle'' on a node, weight:0.8 on an edge.  
               be primitive (string, number, boolean) or arrays.  

because edges are stored explicitly, traversing from a node to its neighbors is constant‑time in most graph engines, regardless of the total graph size. this is a key performance advantage when exploring multi‑hop relationships.

2.2 graph model variants  

- property graph – the most common model, used by neo4j, amazon neptune, azure cosmos db gremlin api. it supports labeled nodes/edges and arbitrary properties.  

Reference: github-starred/ollama#35144