[GH-ISSUE #4476] langchain-python-rag-privategpt "Cannot submit more than 5,461 embeddings at once" #64835

Closed
opened 2026-05-03 18:55:26 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @dcasota on GitHub (May 16, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4476

What is the issue?

In langchain-python-rag-privategpt there is a bug, 'Cannot submit more than x embeddings at once', which has already been reported in several variations; see most recently https://github.com/ollama/ollama/issues/2572.

With Ollama version 0.1.38 the chromadb version has already been updated to 0.4.7, but the `max_batch_size` calculation still seems to produce issues; see the related upstream case https://github.com/chroma-core/chroma/issues/2181.

In the meantime, is there a workaround for Ollama?

```
(.venv) dcaso [ ~/ollama/examples/langchain-python-rag-privategpt ]$ python ./ingest.py
Creating new vectorstore
Loading documents from source_documents
Loading new documents: 100%|████████████████| 1355/1355 [00:15<00:00, 88.77it/s]
Loaded 80043 new documents from source_documents
Split into 478012 chunks of text (max. 500 tokens each)
Creating embeddings. May take some minutes...
Traceback (most recent call last):
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/./ingest.py", line 161, in <module>
    main()
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/./ingest.py", line 153, in main
    db = Chroma.from_documents(texts, embeddings, persist_directory=persist_directory)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/.venv/lib/python3.11/site-packages/langchain/vectorstores/chroma.py", line 612, in from_documents
    return cls.from_texts(
           ^^^^^^^^^^^^^^^
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/.venv/lib/python3.11/site-packages/langchain/vectorstores/chroma.py", line 576, in from_texts
    chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/.venv/lib/python3.11/site-packages/langchain/vectorstores/chroma.py", line 222, in add_texts
    raise e
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/.venv/lib/python3.11/site-packages/langchain/vectorstores/chroma.py", line 208, in add_texts
    self._collection.upsert(
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/.venv/lib/python3.11/site-packages/chromadb/api/models/Collection.py", line 298, in upsert
    self._client._upsert(
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/.venv/lib/python3.11/site-packages/chromadb/api/segment.py", line 290, in _upsert
    self._producer.submit_embeddings(coll["topic"], records_to_submit)
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/.venv/lib/python3.11/site-packages/chromadb/db/mixins/embeddings_queue.py", line 127, in submit_embeddings
    raise ValueError(
ValueError:
                Cannot submit more than 5,461 embeddings at once.
                Please submit your embeddings in batches of size
                5,461 or less.

(.venv) dcaso [ ~/ollama/examples/langchain-python-rag-privategpt ]$
```
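A common workaround for this class of error (a sketch, not part of the example's official code) is to split the documents into batches no larger than the reported limit before handing them to Chroma, instead of calling `Chroma.from_documents` with all 478,012 chunks at once:

```python
from itertools import islice

def batched(iterable, batch_size):
    """Yield successive lists of at most batch_size items from iterable."""
    it = iter(iterable)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

# Hypothetical usage inside ingest.py's main(); assumes `texts`,
# `embeddings`, and `persist_directory` are defined as in the example,
# and 5461 is the limit reported by the ValueError above:
#
# db = Chroma(persist_directory=persist_directory,
#             embedding_function=embeddings)
# for batch in batched(texts, 5461):
#     db.add_documents(batch)
```

Newer chromadb clients expose the limit programmatically (as a `max_batch_size` property on the client), which would avoid hard-coding 5461; whether that attribute is available depends on the installed chromadb version.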

OS

WSL2

GPU

Nvidia

CPU

Intel

Ollama version

0.1.38

Research findings

In ingest.py, in `def main()`, I've modified the `else` condition as follows, but it didn't help (same issue).

![image](https://github.com/ollama/ollama/assets/14890243/a356f8e6-dbcd-44c7-bc70-32cbb263ef0a)

GiteaMirror added the bug label 2026-05-03 18:55:26 -05:00
Author
Owner

@dcasota commented on GitHub (May 16, 2024):

It may be yet another subcomponent issue. With v0.1.38, the installed langchain version is 0.0.274.

```
pip3 list | grep langchain
langchain                0.0.274
```

It does not yet use e.g. `langchain_community`.

As a workaround, I've updated all components. This is not recommended, because it usually creates more side effects and makes issues harder to reproduce.

```
pip --disable-pip-version-check list --outdated --format=json | python -c "import json, sys; print('\n'.join([x['name'] for x in json.load(sys.stdin)]))" | sudo xargs -n1 pip install -U
```

Afterwards, the following langchain packages are installed:

```
pip3 list | grep langchain
langchain                0.1.20
langchain-community      0.0.38
langchain-core           0.1.52
langchain-text-splitters 0.0.2
```

`python ingest.py` and `python privateGPT.py` run successfully, but the output contains warnings about various deprecated langchain components. From the findings so far, a curated requirements.txt would be helpful.
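Based on the versions reported above (and the `chromadb==0.5.0` pin mentioned in a follow-up), such a curated requirements.txt might look like the following; treat it as a starting point from one working environment, not a verified set:

```
langchain==0.1.20
langchain-community==0.0.38
langchain-core==0.1.52
langchain-text-splitters==0.0.2
chromadb==0.5.0
```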

`python ingest.py` always starts with `Creating new vectorstore`. It does not preserve already loaded documents. Why?
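For context on that question: the example decides between "Creating new vectorstore" and appending to an existing one by checking the persist directory. A simplified sketch of such a check (an assumption about the shape of the logic, not the example's actual code, which may look for specific Chroma index files):

```python
import os

def vectorstore_exists(persist_directory: str) -> bool:
    """Treat the directory as an existing Chroma store if it exists and
    contains any files. A heuristic sketch: if this check returns False
    (e.g. the directory is empty or was never persisted), ingest.py
    would rebuild the store from scratch."""
    return os.path.isdir(persist_directory) and bool(os.listdir(persist_directory))
```

If the check fails on every run (wrong `PERSIST_DIRECTORY`, or the store never actually persisted to disk), the script would recreate the vectorstore each time, matching the behavior described above.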

Author
Owner

@dcasota commented on GitHub (May 26, 2024):

Same issue with v0.1.39. Luckily the workaround works, with Nvidia drivers 552 (see https://github.com/ollama/ollama/issues/4563).

edited June 5th: Same with v0.1.41.
edited June 19th: Same with v0.1.44. Additionally, run `pip install chromadb==0.5.0`.

Author
Owner

@jmorganca commented on GitHub (Sep 12, 2024):

This should be fixed in https://github.com/ollama/ollama/pull/5139

Reference: github-starred/ollama#64835