[GH-ISSUE #1329] Out of nowhere, when I run my script, I randomly get this error: raise ValueError("No data received from Ollama stream.") #691

Closed
opened 2026-04-12 10:22:09 -05:00 by GiteaMirror · 12 comments

Originally created by @alelagamba on GitHub (Nov 30, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1329

```
Traceback (most recent call last):
  File "/Users//Desktop/python-test/display_attribute.py", line 34, in <module>
    answer = llm("Given this text:" + str(first_column_value) + "Does it talk about a display or screen of the product? Answer only 'Yes' or 'No'.")
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users//Desktop/python-test/.venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 879, in __call__
    self.generate(
  File "/Users//Desktop/python-test/.venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 656, in generate
    output = self._generate_helper(
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/Users//Desktop/python-test/.venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 543, in _generate_helper
    raise e
  File "/Users//Desktop/python-test/.venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 530, in _generate_helper
    self._generate(
  File "/Users//Desktop/python-test/.venv/lib/python3.11/site-packages/langchain/llms/ollama.py", line 241, in _generate
    final_chunk = super()._stream_with_aggregation(
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users//Desktop/python-test/.venv/lib/python3.11/site-packages/langchain/llms/ollama.py", line 190, in _stream_with_aggregation
    raise ValueError("No data received from Ollama stream.")
ValueError: No data received from Ollama stream.
(.venv) sh-3.2$
```

I really have no clue, because it all worked fine previously. It's just a simple script that takes a string from a CSV and inserts it into the question for the LLM, like so:

```python
answer = llm("Given this text:" + str(first_column_value) + "Does it talk about a display or screen of the product? Answer only 'Yes' or 'No'.")
```
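For context, a minimal version of such a script looks roughly like this (a sketch under assumptions: pandas for the CSV, a local Ollama server, and a hypothetical `products.csv`; none of these names come from the original post):

```python
# Minimal sketch of the reported setup (assumptions: pandas for CSV parsing,
# a hypothetical products.csv, and an unspecified model name).
import pandas as pd
from langchain.llms import Ollama

llm = Ollama(model="llama2")  # model name is an assumption; the OP doesn't say
df = pd.read_csv("products.csv")

for first_column_value in df.iloc[:, 0]:
    answer = llm(
        "Given this text:" + str(first_column_value) +
        "Does it talk about a display or screen of the product? "
        "Answer only 'Yes' or 'No'."
    )
    print(answer)
```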


@Govind-S-B commented on GitHub (Dec 2, 2023):

Same issue here. My script was working fine last week, but now it just breaks, and I'm not sure why. Related issue: https://github.com/whitead/paper-qa/issues/213

Refer to: https://github.com/langchain-ai/langchain/blob/41ee3be95f51d18b51f5f05874e8bcef0f673e47/libs/langchain/langchain/llms/ollama.py#L176-L200


@BruceMacD commented on GitHub (Dec 4, 2023):

This seems like it could be an error in LangChain, maybe from an older version. Is anyone able to reproduce it using one of our examples?
https://github.com/jmorganca/ollama/tree/main/examples/langchain-python-simple
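For reference, that example boils down to a minimal call along these lines (a sketch, not the exact contents of the linked file; it assumes a running `ollama serve` and a pulled model):

```python
# Minimal hello-world check against a local Ollama server (sketch only;
# model name and prompt are illustrative, not taken from the linked example).
from langchain.llms import Ollama

llm = Ollama(model="llama2")
print(llm("Why is the sky blue?"))
```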


@Govind-S-B commented on GitHub (Dec 5, 2023):

Yeah, it seems to be working fine with the hello-world application. I wonder what went wrong with my code; I'll look into it later.

But here is the result for the example:
![image](https://github.com/jmorganca/ollama/assets/62943847/68cb0e51-7a7d-4238-9bd1-9835f2585f50)


@alelagamba commented on GitHub (Dec 5, 2023):

> Yeah, it seems to be working fine with the hello-world application. I wonder what went wrong with my code; I'll look into it later.
>
> But here is the result for the example: ![image](https://github.com/jmorganca/ollama/assets/62943847/68cb0e51-7a7d-4238-9bd1-9835f2585f50)

The thing is that my code also usually runs fine, but sometimes, out of nowhere, it shows the error I wrote in the original post. I don't really know what to do. The one thing that partially helps is Run & Debug mode: there the error tends to appear much later than when I run the script normally.


@BruceMacD commented on GitHub (Dec 5, 2023):

Any idea which version of LangChain's Python library you're seeing this on?


@dennis-thevara commented on GitHub (Dec 7, 2023):

So I've been having this issue as well, and it's quite strange, because like OP I was able to run things fine last week. I was trying to run the Private Multi-Modal RAG cookbook in the LangChain repo.

For what it's worth, when I ran it, Ollama was indeed streaming a sequence of tokens; I verified this with the `StreamingStdOutCallbackHandler`. What I noticed is that the aforementioned error only hits at the very end of the streaming response, once the model has concluded its output.

By the traceback, in `ollama.py`, `_stream_with_aggregation` raises the error when the final chunk it gets from Ollama is `None`. Out of curiosity, I changed this to set the final chunk to a `GenerationChunk` object with the `text` attribute set to an empty string. My assumption was that the final token from Ollama would be `None` when it's done, so instead of raising an error, I had it return an analogue of an end-of-text object. This appears to fix the issue for my specific use case, but I highly doubt it is sensible for general use, and I haven't looked into whether it breaks anything else. I just figured I'd chime in, in case it provides a clue about what's causing the issue.

For reference, I am using `langchain==0.0.344`, and this is the change I made in `_stream_with_aggregation`:

```python
if final_chunk is None:
    final_chunk = GenerationChunk(text="")
    # raise ValueError("No data received from Ollama stream.")
```

EDIT (Dec 11): Reversing this edit and updating Ollama from 0.1.12 to 0.1.14 fixed the issue for me.
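For context on the workaround above, the aggregation pattern being patched looks roughly like this (an illustrative sketch, not the actual LangChain source; the real `_stream_with_aggregation` also handles callbacks, run managers, and verbosity):

```python
# Illustrative sketch of the pattern described above (assumption: a stand-in
# GenerationChunk is defined here; LangChain ships its own class).
from typing import Iterator, Optional


class GenerationChunk:
    def __init__(self, text: str) -> None:
        self.text = text

    def __add__(self, other: "GenerationChunk") -> "GenerationChunk":
        return GenerationChunk(self.text + other.text)


def stream_with_aggregation(chunks: Iterator[GenerationChunk]) -> GenerationChunk:
    final_chunk: Optional[GenerationChunk] = None
    for chunk in chunks:
        # Concatenate each streamed chunk onto the running aggregate.
        final_chunk = chunk if final_chunk is None else final_chunk + chunk
    if final_chunk is None:
        # If the server closes the stream without sending any chunk,
        # this is the error reported in this issue.
        raise ValueError("No data received from Ollama stream.")
    return final_chunk
```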


@kwong99 commented on GitHub (Dec 7, 2023):

I ran into the same issue when I have two calls to `chain.invoke()`. The first `chain.invoke()` always works; only the second call fails. What I found out is that using `LLMChain` works, but the placement of the import statement matters: if the import is at the beginning of the script, it fails in the same way, but with the `LLMChain` import placed right before the second `chain.invoke()` (or `chain()`), it works fine.

langchain version: 0.0.340
ollama version: 0.1.13
Note: it all used to work fine with ollama version 0.1.8.

Here the script fails, with the `LLMChain` import at the beginning of the script:

````
> Entering new LLMChain chain...
Prompt after formatting:
Translate the text that is delimited by triple backticks
into a style that is a polite tone that speaks in Spanish.
text: ```Hey there customer, the warranty does not cover cleaning expenses for your kitchen because it's your fault that you misused your blender by forgetting to put the lid on before starting the blender. Tough luck! See ya!
```

Traceback (most recent call last):
  File "/home/kenneth/learning/deeplearning/deeplearning_langchain_llm_course/1_chain.py", line 53, in <module>
    response = chain2.invoke({"style": service_style_pirate, "service_reply": service_reply})
  File "/home/kenneth/learning/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 87, in invoke
    return self(
  File "/home/kenneth/learning/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 310, in __call__
    raise e
  File "/home/kenneth/learning/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 304, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/home/kenneth/learning/venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 108, in _call
    response = self.generate([inputs], run_manager=run_manager)
  File "/home/kenneth/learning/venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 120, in generate
    return self.llm.generate_prompt(
  File "/home/kenneth/learning/venv/lib/python3.10/site-packages/langchain/llms/base.py", line 507, in generate_prompt
    return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
  File "/home/kenneth/learning/venv/lib/python3.10/site-packages/langchain/llms/base.py", line 656, in generate
    output = self._generate_helper(
  File "/home/kenneth/learning/venv/lib/python3.10/site-packages/langchain/llms/base.py", line 544, in _generate_helper
    raise e
  File "/home/kenneth/learning/venv/lib/python3.10/site-packages/langchain/llms/base.py", line 531, in _generate_helper
    self._generate(
  File "/home/kenneth/learning/venv/lib/python3.10/site-packages/langchain/llms/ollama.py", line 242, in _generate
    final_chunk = super()._stream_with_aggregation(
  File "/home/kenneth/learning/venv/lib/python3.10/site-packages/langchain/llms/ollama.py", line 191, in _stream_with_aggregation
    raise ValueError("No data received from Ollama stream.")
ValueError: No data received from Ollama stream.
(venv) kenneth@kpc:~/learning/deeplearning/deeplearning_langchain_llm_course$ python 1_chain.py
prompt: input_variables=['customer_email', 'style'] template='Translate the text that is delimited by triple backticks \ninto a style that is {style}.\ntext: ```{customer_email}```\n'
222 chain2: verbose=True prompt=PromptTemplate(input_variables=['service_reply', 'style'], template='Translate the text that is delimited by triple backticks \ninto a style that is {style}.\ntext: ```{service_reply}```\n') llm=Ollama(model='mistral:instruct')
````

===========
Here the script works fine, with the `LLMChain` import at the beginning of the script commented out:

````
> Entering new LLMChain chain...
Prompt after formatting:
Translate the text that is delimited by triple backticks
into a style that is a polite tone that speaks in Spanish.
text: ```Hey there customer, the warranty does not cover cleaning expenses for your kitchen because it's your fault that you misused your blender by forgetting to put the lid on before starting the blender. Tough luck! See ya!
```

> Finished chain.
Service response: >>>{'style': 'a polite tone that speaks in Spanish', 'service_reply': "Hey there customer, the warranty does not cover cleaning expenses for your kitchen because it's your fault that you misused your blender by forgetting to put the lid on before starting the blender. Tough luck! See ya!\n", 'text': '```Spanish: ¡Hola amigo/amiga, el garantía no cubre los gastos de limpieza para tu cocina, ya que es culpa tuya por mal uso de tu licuadora olvidando colocar la tapa antes de empezar a licuar. Lo siento mucho! Adiós!```'}<<<
(venv) kenneth@kpc:~/learning/deeplearning/deeplearning_langchain_llm_course$
````

======
This script is based on the course from deeplearning.ai:

````python
from langchain.llms import Ollama
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.prompts import PromptTemplate
#from langchain.chains import LLMChain

customer_email = """
Arrr, I be fuming that me blender lid \
flew off and splattered me kitchen walls \
with smoothie! And to make matters worse, \
the warranty don't cover the cost of \
cleaning up me kitchen. I need yer help \
right now, matey!
"""
style = """American English \
in a calm and respectful tone
"""
customer_template = """Translate the text \
that is delimited by triple backticks
into a style that is {style}.
text: ```{customer_email}```
"""
customer_prompt = PromptTemplate(input_variables=["style", "customer_email"], template=customer_template)
print(f"prompt: {customer_prompt}")
llm = Ollama(model="mistral:instruct")
chain1 = customer_prompt | llm
response = chain1.invoke({"style": style, "customer_email": customer_email})

# Service reply
service_reply = """Hey there customer, \
the warranty does not cover \
cleaning expenses for your kitchen \
because it's your fault that \
you misused your blender \
by forgetting to put the lid on before \
starting the blender. \
Tough luck! See ya!
"""
service_style_pirate = """\
a polite tone \
that speaks in Spanish\
"""
service_template = """Translate the text \
that is delimited by triple backticks
into a style that is {style}.
text: ```{service_reply}```
"""
service_prompt = PromptTemplate(input_variables=["style", "service_reply"], template=service_template)
#chain2 = service_prompt | llm
from langchain.chains import LLMChain
chain2 = LLMChain(llm=llm, prompt=service_prompt, verbose=True)
print(f"222 chain2: {chain2}")
response = chain2.invoke({"style": service_style_pirate, "service_reply": service_reply})
print(f"Service response: >>>{response}<<<")
````


@kennethwork101 commented on GitHub (Dec 10, 2023):

The installed ollama version is 0.1.14, and it is working now.
ollama version 0.1.13 was failing.
No change to langchain version 0.0.340.
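For anyone following along, checking and updating the installed version on Linux looks roughly like this (assuming the standard install script from that period; on macOS, updating happens through the Ollama app):

```sh
# Check the currently installed version.
ollama --version

# Re-run the install script to pull the latest release (Linux).
curl https://ollama.ai/install.sh | sh

# Confirm the update took effect (should report 0.1.14 or later here).
ollama --version
```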


@dennis-thevara commented on GitHub (Dec 11, 2023):

Can confirm that updating Ollama fixed it for me too. Mine was 0.1.12 before the update.


@BruceMacD commented on GitHub (Dec 11, 2023):

Thanks for letting us know this is working now. I suspect this was a concurrent request bug that I fixed in the last release. Resolving this now.


@kwong99 commented on GitHub (Dec 12, 2023):

Hi,
Thanks for fixing the issue.
Is there a way to install a previous version of Ollama? I am hoping to run a previous version if the current version is having issues.
Thanks,
Kenneth



@BruceMacD commented on GitHub (Dec 12, 2023):

@kwong99 if you're seeing the error in this issue, using the most recent version will most likely fix it; otherwise, you can download old versions from here:
https://github.com/jmorganca/ollama/releases
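One way to pin an older build is to fetch a release binary directly from that page (the asset name below is an assumption; check the assets list for your platform and architecture):

```sh
# Download a specific release binary (asset name is an assumption; verify
# it on the releases page for your platform before running this).
VERSION=v0.1.14
curl -L "https://github.com/jmorganca/ollama/releases/download/${VERSION}/ollama-linux-amd64" -o ollama
chmod +x ollama
./ollama --version
```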

Reference: github-starred/ollama#691