[GH-ISSUE #2416] /v1/embeddings OpenAI compatible API endpoint #47920

Closed
opened 2026-04-28 05:52:54 -05:00 by GiteaMirror · 15 comments
Owner

Originally created by @pamelafox on GitHub (Feb 8, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2416

Originally assigned to: @bmizerany on GitHub.

Your blog post mentions you're considering it. We'd love it so that we can point our RAG apps at ollama. Thanks!

GiteaMirror added the embeddings, compatibility, feature request labels 2026-04-28 05:52:55 -05:00

@MuhammadHadiofficial commented on GitHub (Feb 9, 2024):

@pamelafox It's there now -> https://ollama.com/blog/openai-compatibility


@pamelafox commented on GitHub (Feb 10, 2024):

It specifically says that the embeddings API is not yet supported on that page (at the bottom).


@glorat commented on GitHub (Feb 12, 2024):

One concrete point of incompatibility is the inability to pass in a vector of embedding requests. If one looks at the Ollama source code, `EmbeddingRequest` clearly only takes in a single item rather than an array.

Would appreciate it if maintainers would triage this as bug / won't fix / feature request so I can adjust expectations accordingly. (Maintainers, you guys do a fantastic job with this project, any response is acceptable.)

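The single-item limitation above can be sketched in a few lines. This is a hypothetical client-side workaround, not part of either API: Ollama's native `/api/embeddings` takes one `prompt` per call, so an OpenAI-style batch has to be fanned out as a loop. The `post` callable is injected so the logic stands alone without a running server.

```python
# Hedged sketch: fan out an OpenAI-style batch `input` against an endpoint
# that only accepts a single "prompt" per request, as glorat describes.
# `embed_batch` and the injected `post` callable are illustrative names.

def embed_batch(inputs, model, post):
    """Emulate a batch embeddings request with one single-prompt call per item."""
    embeddings = []
    for text in inputs:
        # One request per input string, using Ollama's native field names.
        resp = post("/api/embeddings", {"model": model, "prompt": text})
        embeddings.append(resp["embedding"])
    return embeddings
```

Against a live server, `post` could simply wrap `requests.post(url, json=payload).json()`; the point is only that the batching has to happen on the client until the server accepts an array.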

@odrobnik commented on GitHub (Apr 19, 2024):

So, why is the Ollama endpoint at `/api/embeddings` and not at `/v1/embeddings`, where it should be to be compatible with OpenAI or, for example, LM Studio?


@john-c-kane commented on GitHub (Apr 29, 2024):

I found this issue because I was trying to use the Ollama embeddings API for the Microsoft Semantic Kernel Memory functionality, using the OpenAI provider with the Ollama URL. I discovered that the application sends JSON with "model" and "input", but the Ollama embeddings API expects "model" and "prompt". Not sure if this is being considered as part of this OpenAI interface compatibility request, but wanted to make you aware.

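The field mismatch described above is small enough to show directly. This is a hedged sketch of a hypothetical translation helper (`translate_request` is not part of either API), mapping the OpenAI-style body onto the shape Ollama's native endpoint expects:

```python
# Hedged sketch: an OpenAI-style embeddings request uses "input", while
# Ollama's native /api/embeddings expects "prompt". This hypothetical helper
# translates one body into the other.

def translate_request(openai_body):
    """Map an OpenAI-style {"model", "input"} body to Ollama's {"model", "prompt"}."""
    text = openai_body["input"]
    if isinstance(text, list):
        # The native endpoint takes a single string; a batch would need to be
        # split into one request per item before translation.
        raise ValueError("batch input must be split into one request per item")
    return {"model": openai_body["model"], "prompt": text}
```

A proxy sitting between a Semantic Kernel-style client and Ollama would apply exactly this kind of mapping on each request.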

@odrobnik commented on GitHub (Apr 29, 2024):

There is no `/v1/embeddings` endpoint yet, only the standard one at `/api`.


@yuriwa commented on GitHub (May 1, 2024):

I've just noticed the pull request [(2476)](https://github.com/ollama/ollama/pull/2476) and wanted to express my appreciation for the work put into it. Thank you for your contribution! Looking forward to its potential merge.


@jmatsushita commented on GitHub (May 2, 2024):

There seems to be an active PR for this, https://github.com/ollama/ollama/pull/2925 by @tazarov. I haven't tested it yet.


@JpEncausse commented on GitHub (May 26, 2024):

Very curious about availability and capabilities.

My use case is straightforward:

  • During BUILD I need to create embeddings of a lot of content locally, in order to give it a try, reduce cost, and bootstrap a database.
  • During RUN I'll call online APIs when needed.

When Ollama is compatible with `text-embedding-3-large`, `text-embedding-3-small` and `text-embedding-ada-002`, it will open the door to running embeddings both locally (with a good PC, for free) and online.

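The BUILD/RUN split above amounts to swapping the client's connection settings per stage. A minimal sketch, assuming an OpenAI-compatible client where only `base_url` and `api_key` differ between a local Ollama server and a hosted API (the helper name and stage labels are illustrative):

```python
# Hedged sketch: the same OpenAI-style client code can target a local Ollama
# server during indexing (BUILD) and a hosted API in production (RUN) just by
# swapping base_url and api_key. `embedding_client_config` is a hypothetical helper.

import os

def embedding_client_config(stage):
    """Return OpenAI-client keyword arguments for "build" or "run"."""
    if stage == "build":
        # Local Ollama: the client requires an api_key, but the server ignores it.
        return {"base_url": "http://localhost:11434/v1/", "api_key": "ollama"}
    # Hosted API: key comes from the environment.
    return {"base_url": "https://api.openai.com/v1/",
            "api_key": os.environ.get("OPENAI_API_KEY", "")}
```

With the `openai` package installed, this would be used as `client = OpenAI(**embedding_client_config("build"))`, keeping the rest of the embedding code identical across stages.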

@odrobnik commented on GitHub (May 26, 2024):

@JpEncausse the models you mention are OpenAI proprietary, AFAIK. So you will never see them on Ollama.


@cristoslc commented on GitHub (Jun 6, 2024):

I think I see two PRs related to this (https://github.com/ollama/ollama/pull/2925 and https://github.com/ollama/ollama/pull/3642). As others have said, the fact that the `api/embeddings` endpoint doesn't accept an array of inputs AND the difference in the request structure vs. OpenAI's structure (per https://github.com/ollama/ollama/issues/2416#issuecomment-2082426079) are both major blocks to using Ollama in a variety of RAG applications. Any word on where those PRs are in priority?

Thank you all for your hard work on the project!


@aletfa commented on GitHub (Jun 26, 2024):

+1 here. Please implement the complete compatibility


@JPMoresmau commented on GitHub (Jul 18, 2024):

I'm trying 0.2.6, which includes the PR that brings the compatibility API, and I'm getting a 400 error when `/v1/embeddings` is called from the Python openai client: `{'error': {'message': 'invalid input type', 'type': 'api_error', 'param': None, 'code': None}}`. Not sure how to debug this further.


@arch7tect commented on GitHub (Jul 29, 2024):

Here is a temporary solution while `v1/embeddings` in Ollama is broken:
https://github.com/severian42/GraphRAG-Local-UI/blob/main/embedding_proxy.py


@royjhan commented on GitHub (Jul 29, 2024):

@JPMoresmau how are you calling `/v1/embeddings`? I'm unable to reproduce the error; this code is working as expected:

```python
from openai import OpenAI

client = OpenAI(
    base_url='http://localhost:11434/v1/',
    api_key='ollama'
)

response = client.embeddings.create(
    model="all-minilm",
    input=["input1", "input2"],  # "input1" also works; no other input type is supported
)

print(response)
```

Reference: github-starred/ollama#47920