[GH-ISSUE #5870] The embeddings api interface is not working properly. #3661

Closed
opened 2026-04-12 14:27:17 -05:00 by GiteaMirror · 11 comments
Owner

Originally created by @xldistance on GitHub (Jul 23, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5870

What is the issue?

I use the bge-m3 model in graphrag with the following parameters

embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: asyncio
  llm:
    api_key: 
    type: openai_embedding # or azure_openai_embedding
    model: chatfire/bge-m3:q8_0
    api_base: http://localhost:11434/api

The following error is returned

17:11:30,126 httpx INFO HTTP Request: POST http://localhost:11434/api/embeddings "HTTP/1.1 200 OK"
17:11:30,129 datashaper.workflow.workflow ERROR Error executing verb "text_embed" in create_final_entities: 'NoneType' object is not iterable
Traceback (most recent call last):
  File "E:\Langchain-Chatchat\glut\lib\site-packages\datashaper\workflow\workflow.py", line 415, in _execute_verb
    result = await result
  File "E:\Langchain-Chatchat\glut\lib\site-packages\graphrag\index\verbs\text\embed\text_embed.py", line 105, in text_embed
    return await _text_embed_in_memory(
  File "E:\Langchain-Chatchat\glut\lib\site-packages\graphrag\index\verbs\text\embed\text_embed.py", line 130, in _text_embed_in_memory
    result = await strategy_exec(texts, callbacks, cache, strategy_args)
  File "E:\Langchain-Chatchat\glut\lib\site-packages\graphrag\index\verbs\text\embed\strategies\openai.py", line 61, in run
    embeddings = await _execute(llm, text_batches, ticker, semaphore)
  File "E:\Langchain-Chatchat\glut\lib\site-packages\graphrag\index\verbs\text\embed\strategies\openai.py", line 105, in _execute
    results = await asyncio.gather(*futures)
  File "E:\Langchain-Chatchat\glut\lib\asyncio\tasks.py", line 304, in __wakeup
    future.result()
  File "E:\Langchain-Chatchat\glut\lib\asyncio\tasks.py", line 232, in __step
    result = coro.send(None)
  File "E:\Langchain-Chatchat\glut\lib\site-packages\graphrag\index\verbs\text\embed\strategies\openai.py", line 99, in embed
    chunk_embeddings = await llm(chunk)
  File "E:\Langchain-Chatchat\glut\lib\site-packages\graphrag\llm\base\caching_llm.py", line 104, in __call__
    result = await self._delegate(input, **kwargs)
  File "E:\Langchain-Chatchat\glut\lib\site-packages\graphrag\llm\base\rate_limiting_llm.py", line 177, in __call__
    result, start = await execute_with_retry()
  File "E:\Langchain-Chatchat\glut\lib\site-packages\graphrag\llm\base\rate_limiting_llm.py", line 159, in execute_with_retry
    async for attempt in retryer:
  File "E:\Langchain-Chatchat\glut\lib\site-packages\tenacity\_asyncio.py", line 71, in __anext__
    do = self.iter(retry_state=self._retry_state)
  File "E:\Langchain-Chatchat\glut\lib\site-packages\tenacity\__init__.py", line 314, in iter
    return fut.result()
  File "E:\Langchain-Chatchat\glut\lib\concurrent\futures\_base.py", line 451, in result
    return self.__get_result()
  File "E:\Langchain-Chatchat\glut\lib\concurrent\futures\_base.py", line 403, in __get_result
    raise self._exception
  File "E:\Langchain-Chatchat\glut\lib\site-packages\graphrag\llm\base\rate_limiting_llm.py", line 165, in execute_with_retry
    return await do_attempt(), start
  File "E:\Langchain-Chatchat\glut\lib\site-packages\graphrag\llm\base\rate_limiting_llm.py", line 147, in do_attempt
    return await self._delegate(input, **kwargs)
  File "E:\Langchain-Chatchat\glut\lib\site-packages\graphrag\llm\base\base_llm.py", line 49, in __call__
    return await self._invoke(input, **kwargs)
  File "E:\Langchain-Chatchat\glut\lib\site-packages\graphrag\llm\base\base_llm.py", line 53, in _invoke
    output = await self._execute_llm(input, **kwargs)
  File "E:\Langchain-Chatchat\glut\lib\site-packages\graphrag\llm\openai\openai_embeddings_llm.py", line 36, in _execute_llm
    embedding = await self.client.embeddings.create(
  File "E:\Langchain-Chatchat\glut\lib\site-packages\openai\resources\embeddings.py", line 215, in create
    return await self._post(
  File "E:\Langchain-Chatchat\glut\lib\site-packages\openai\_base_client.py", line 1826, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "E:\Langchain-Chatchat\glut\lib\site-packages\openai\_base_client.py", line 1519, in request
    return await self._request(
  File "E:\Langchain-Chatchat\glut\lib\site-packages\openai\_base_client.py", line 1622, in _request
    return await self._process_response(
  File "E:\Langchain-Chatchat\glut\lib\site-packages\openai\_base_client.py", line 1714, in _process_response
    return await api_response.parse()
  File "E:\Langchain-Chatchat\glut\lib\site-packages\openai\_response.py", line 419, in parse
    parsed = self._options.post_parser(parsed)
  File "E:\Langchain-Chatchat\glut\lib\site-packages\openai\resources\embeddings.py", line 203, in parser
    for embedding in obj.data:
TypeError: 'NoneType' object is not iterable
17:11:30,131 graphrag.index.reporting.file_workflow_callbacks INFO Error executing verb "text_embed" in create_final_entities: 'NoneType' object is not iterable details=None
17:11:30,142 graphrag.index.run ERROR error running workflow create_final_entities

OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.2.8

GiteaMirror added the bug label 2026-04-12 14:27:17 -05:00
Author
Owner

@xldistance commented on GitHub (Jul 23, 2024):

I switched to xinference's embedding api and it works fine!

Author
Owner

@xldistance commented on GitHub (Jul 23, 2024):

It looks like the POST request to http://localhost:11434/api/embeddings is working, and I'm not sure why the embedding model isn't working properly.

Author
Owner

@xldistance commented on GitHub (Jul 23, 2024):

Is the openai interface to call embedding models not supported yet?

Author
Owner

@rick-github commented on GitHub (Jul 23, 2024):

The ollama embedding endpoint is localhost:11434/api/embed.
The openai compatible embedding endpoint is localhost:11434/v1/embeddings.
The returned data is a different format, too (I've removed the embeddings from the examples for space):

$ curl -s localhost:11434/api/embed -d '{"model":"chatfire/bge-m3:q8_0","input":"Your text string goes here"}' | jq '.embeddings=[]' 
{
  "model": "chatfire/bge-m3:q8_0",
  "embeddings": []
}
$ curl -s localhost:11434/v1/embeddings -d '{"model":"chatfire/bge-m3:q8_0","input":"Your text string goes here"}' | jq '.data[].embedding=[]' 
{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "embedding": [],
      "index": 0
    }
  ],
  "model": "chatfire/bge-m3:q8_0"
}
Author
Owner

@rick-github commented on GitHub (Jul 23, 2024):

localhost:11434/api/embeddings requires a different request (prompt instead of input) and generates a different format response with embeddings that are a different precision to the other two methods:

$ curl -s localhost:11434/api/embeddings -d '{"model":"chatfire/bge-m3:q8_0","prompt":"Your text string goes here"}'  | jq '.embedding=[]'
{
  "embedding": []
}
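The distinction rick-github describes above (three routes, two request schemas) can be sketched in a few lines of Python. This is a hypothetical helper, not part of any library; the endpoint paths and field names come from the curl examples in this thread:

```python
import json

def embed_request(endpoint: str, model: str, text: str):
    """Build (url, json_body) for one of ollama's three embedding routes.

    /api/embed (native) and /v1/embeddings (OpenAI-compatible) take an
    "input" field; the legacy /api/embeddings route takes "prompt" instead.
    """
    field = "prompt" if endpoint == "/api/embeddings" else "input"
    body = json.dumps({"model": model, field: text})
    return "http://localhost:11434" + endpoint, body

# The legacy route wants "prompt"...
url, body = embed_request("/api/embeddings", "chatfire/bge-m3:q8_0", "Your text string goes here")
# ...while the other two want "input".
url2, body2 = embed_request("/v1/embeddings", "chatfire/bge-m3:q8_0", "Your text string goes here")
```

Sending an `input`-style body to `/api/embeddings` (as graphrag's openai client does when pointed at `/api`) is exactly the mismatch that produces a 200 response with no usable data.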
Author
Owner

@ttkrpink commented on GitHub (Jul 25, 2024):

Same problem with GraphRAG, will the next release be compatible with OpenAI embedding API?

Author
Owner

@xldistance commented on GitHub (Jul 25, 2024):

(Quoting the above:) The ollama embedding endpoint is localhost:11434/api/embed. The openai compatible embedding endpoint is localhost:11434/v1/embeddings. The returned data is a different format, too (I've removed the embeddings from the examples for space):

$ curl -s localhost:11434/api/embed -d '{"model":"chatfire/bge-m3:q8_0","input":"Your text string goes here"}' | jq '.embeddings=[]' 
{
  "model": "chatfire/bge-m3:q8_0",
  "embeddings": []
}
$ curl -s localhost:11434/v1/embeddings -d '{"model":"chatfire/bge-m3:q8_0","input":"Your text string goes here"}' | jq '.data[].embedding=[]' 
{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "embedding": [],
      "index": 0
    }
  ],
  "model": "chatfire/bge-m3:q8_0"
}
    text_embedder = OpenAIEmbedding(
        api_key="ollama",
        api_base="http://localhost:11434/v1",
        model="chatfire/bge-m3:q8_0",
        deployment_name="chatfire/bge-m3:q8_0",
        api_type=OpenaiApiType.OpenAI,
        max_retries=20,
    )

Code Run Error
2024-07-25 18:05:22,568 - httpx - INFO - HTTP Request: POST http://localhost:11434/v1/embeddings "HTTP/1.1 400 Bad Request"

Author
Owner

@rick-github commented on GitHub (Jul 25, 2024):

Your app is sending a request that is not understood by ollama. You need to see what that request is, either by having the client log it, or by capturing the network traffic with a tool like wireshark, tcpdump, tcpflow, etc.

Author
Owner

@royjhan commented on GitHub (Jul 30, 2024):

api/embed docs: https://github.com/ollama/ollama/blob/main/docs/api.md#generate-embeddings
OpenAI Embeddings Compatibility Docs: #5470

keep an eye on the new releases

Author
Owner

@52doho commented on GitHub (Aug 6, 2024):

Switch api_base from http://localhost:11434/api to http://localhost:11434/v1 to get OpenAI compatibility.
Refer to: OpenAI compatibility

embeddings:
  llm:
    api_base: http://localhost:11434/v1

After debugging the GraphRAG query code, you will find the problem is that ollama's /v1/embeddings API input field does not support arrays of tokens.

A simple fix: https://github.com/microsoft/graphrag/issues/663#issuecomment-2246108121
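The workaround linked above boils down to decoding token-id chunks back into plain strings before they reach `/v1/embeddings`, since that route only accepts string input. A minimal sketch of the idea (the helper name is made up for illustration; `decode` stands in for a tokenizer decode function such as `tiktoken.get_encoding("cl100k_base").decode`):

```python
def to_string_input(chunks, decode):
    """Convert GraphRAG's pre-tokenized chunks into string inputs.

    GraphRAG may pass lists of token ids to the embeddings client;
    ollama's /v1/embeddings rejects those, so turn each id list back
    into text with the tokenizer's decode function and leave strings
    untouched.
    """
    return [c if isinstance(c, str) else decode(c) for c in chunks]
```

For example, `to_string_input(["hello", [104, 105]], decode)` keeps the string as-is and decodes the id list before the request is sent.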

Author
Owner

@Eshan276 commented on GitHub (Jan 4, 2025):

how to do this for bge-m3?

Switch api_base from http://localhost:11434/api to http://localhost:11434/v1 to get OpenAI compatibility. Refer to: OpenAI compatibility

embeddings:
  llm:
    api_base: http://localhost:11434/v1

After debugging the GraphRAG query code, you will find the problem is that ollama's /v1/embeddings API input field does not support arrays of tokens.

A simple fix: microsoft/graphrag#663 (comment)

Reference: github-starred/ollama#3661