[GH-ISSUE #4425] joanfm / jina-embeddings-v2-base-en and -de fail with error code 500 #64801

Closed
opened 2026-05-03 18:48:28 -05:00 by GiteaMirror · 6 comments
Owner

Originally created by @qsdhj on GitHub (May 14, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4425

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

I tried to integrate the German embedding model joanfm/jina-embeddings-v2-base-de into my LlamaIndex RAG application. During the creation of the embeddings, the Ollama process fails with error 500: llama runner process has terminated: exit status 0xc0000409.

When calling:

```python
pass_embedding = Settings.embed_model.get_text_embedding_batch(
    ["This is a passage!", "This is another passage"], show_progress=True
)
```

```python
ValueError                                Traceback (most recent call last)
Cell In[16], line 2
      1 # Test the embedding model
----> 2 pass_embedding = Settings.embed_model.get_text_embedding_batch(
      3     ["This is a passage!", "This is another passage"], show_progress=True
      4 )
      5 print(pass_embedding)
      7 query_embedding = Settings.embed_model.get_query_embedding("Where is blue?")

File c:\Users\Stefan.Mueller\AppData\Local\miniconda3\envs\llamaindex\Lib\site-packages\llama_index\core\instrumentation\dispatcher.py:274, in Dispatcher.span.<locals>.wrapper(func, instance, args, kwargs)
    270 self.span_enter(
    271     id_=id_, bound_args=bound_args, instance=instance, parent_id=parent_id
    272 )
    273 try:
--> 274     result = func(*args, **kwargs)
    275 except BaseException as e:
    276     self.event(SpanDropEvent(span_id=id_, err_str=str(e)))

File c:\Users\Stefan.Mueller\AppData\Local\miniconda3\envs\llamaindex\Lib\site-packages\llama_index\core\base\embeddings\base.py:331, in BaseEmbedding.get_text_embedding_batch(self, texts, show_progress, **kwargs)
    322 dispatch_event(
    323     EmbeddingStartEvent(
    324         model_dict=self.to_dict(),
    325     )
    326 )
...
    100     )
    102 try:
    103     return response.json()["embedding"]
```

With mxbai-embed-large:latest this works without an error.
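To check whether the 500 comes from Ollama itself rather than the LlamaIndex wrapper, the endpoint can be called directly. A minimal sketch against Ollama's `/api/embeddings` REST endpoint; the default host/port and the single-prompt payload shape reflect the 0.1.x API, and the model name is taken from this report:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/embeddings"  # default Ollama address

def build_payload(model: str, prompt: str) -> bytes:
    """Build the JSON body that Ollama's /api/embeddings expects."""
    return json.dumps({"model": model, "prompt": prompt}).encode("utf-8")

def get_embedding(model: str, prompt: str) -> list:
    """POST a single prompt and return its embedding vector.

    A failing runner surfaces here as an HTTPError with status 500,
    which is easier to inspect than the ValueError LlamaIndex raises
    when the "embedding" key is missing from the response.
    """
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # raises HTTPError on a 500
        return json.loads(resp.read())["embedding"]

# Usage (requires a running Ollama server):
# vec = get_embedding("jina/jina-embeddings-v2-base-de", "Das ist ein Test.")
```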

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.1.37

GiteaMirror added the bug label 2026-05-03 18:48:28 -05:00

@thinkverse commented on GitHub (May 14, 2024):

Ollama doesn't currently support Jina Embeddings v2; it should be supported once https://github.com/ollama/ollama/pull/4414 is merged. You'll likely have to wait for the next Ollama release, or build from source after the PR lands.


@JoanFM commented on GitHub (May 14, 2024):

hey @qsdhj,

Indeed, Ollama needs to update its llama.cpp dependency and cut a new release before Jina Embeddings V2 becomes available.

I created and tested those models by building Ollama manually.


@qsdhj commented on GitHub (May 15, 2024):

hey @JoanFM,

thanks for your reply.
Do you, or anyone else here, know the status of batch processing of embeddings with Ollama?
Without it, the feature is useless for my intended use.
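For what it's worth, since Ollama's `/api/embeddings` endpoint at this time accepted one prompt per request, "batching" typically meant fanning out per-text calls from the client. A minimal sketch, assuming a per-text `embed` callable like the one the LlamaIndex Ollama integration wraps:

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, List

def embed_batch(
    texts: List[str],
    embed: Callable[[str], List[float]],
    max_workers: int = 4,
) -> List[List[float]]:
    """Embed texts client-side with one request per text.

    Requests run concurrently on a thread pool; pool.map preserves
    the input order, so result[i] is the embedding of texts[i].
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(embed, texts))

# Usage: embed_batch(passages, lambda t: get_embedding(model_name, t))
```

This does not make the server any faster per request, but it hides network latency when embedding many passages.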


@JoanFM commented on GitHub (May 15, 2024):

Hey @qsdhj ,

I am not sure about this.


@dhiltgen commented on GitHub (Jul 3, 2024):

Is this still a problem on the latest version with the llama.cpp update?


@JoanFM commented on GitHub (Jul 4, 2024):

This works with the latest llama.cpp update.

BTW, I recommend using jina/jina-embeddings-v2-base-en instead, as the joanfm account was just a test account.

Reference: github-starred/ollama#64801