[GH-ISSUE #2424] Always getting a timeout error while querying using mistral using Ollama #63452

Closed
opened 2026-05-03 13:34:43 -05:00 by GiteaMirror · 19 comments

Originally created by @Chakit22 on GitHub (Feb 9, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2424

Originally assigned to: @bmizerany on GitHub.

```
Traceback (most recent call last):
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/httpcore/_exceptions.py", line 10, in map_exceptions
    yield
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/httpcore/_backends/sync.py", line 126, in read
    return self._sock.recv(max_bytes)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
TimeoutError: timed out

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/httpx/_transports/default.py", line 67, in map_httpcore_exceptions
    yield
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/httpx/_transports/default.py", line 231, in handle_request
    resp = self._pool.handle_request(req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/httpcore/_sync/connection_pool.py", line 268, in handle_request
    raise exc
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/httpcore/_sync/connection_pool.py", line 251, in handle_request
    response = connection.handle_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/httpcore/_sync/connection.py", line 103, in handle_request
    return self._connection.handle_request(request)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/httpcore/_sync/http11.py", line 133, in handle_request
    raise exc
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/httpcore/_sync/http11.py", line 111, in handle_request
    ) = self._receive_response_headers(**kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/httpcore/_sync/http11.py", line 176, in _receive_response_headers
    event = self._receive_event(timeout=timeout)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/httpcore/_sync/http11.py", line 212, in _receive_event
    data = self._network_stream.read(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/httpcore/_backends/sync.py", line 124, in read
    with map_exceptions(exc_map):
  File "/opt/homebrew/Cellar/python@3.11/3.11.7_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/contextlib.py", line 158, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ReadTimeout: timed out

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/chakitrocks/Desktop/llm/index.py", line 57, in <module>
    response = query_engine.query("What does the author think about Star Trek? Give details.")
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/llama_index/core/base_query_engine.py", line 40, in query
    return self._query(str_or_query_bundle)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/llama_index/query_engine/retriever_query_engine.py", line 172, in _query
    response = self._response_synthesizer.synthesize(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/llama_index/response_synthesizers/base.py", line 168, in synthesize
    response_str = self.get_response(
                   ^^^^^^^^^^^^^^^^^^
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/llama_index/response_synthesizers/compact_and_refine.py", line 38, in get_response
    return super().get_response(
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/llama_index/response_synthesizers/refine.py", line 146, in get_response
    response = self._give_response_single(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/llama_index/response_synthesizers/refine.py", line 202, in _give_response_single
    program(
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/llama_index/response_synthesizers/refine.py", line 64, in __call__
    answer = self._llm.predict(
             ^^^^^^^^^^^^^^^^^^
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/llama_index/llms/llm.py", line 239, in predict
    chat_response = self.chat(messages)
                    ^^^^^^^^^^^^^^^^^^^
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/llama_index/llms/base.py", line 100, in wrapped_llm_chat
    f_return_val = f(_self, messages, **kwargs)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/llama_index/llms/ollama.py", line 102, in chat
    response = client.post(
               ^^^^^^^^^^^^
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/httpx/_client.py", line 1146, in post
    return self.request(
           ^^^^^^^^^^^^^
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/httpx/_client.py", line 828, in request
    return self.send(request, auth=auth, follow_redirects=follow_redirects)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/httpx/_client.py", line 915, in send
    response = self._send_handling_auth(
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/httpx/_client.py", line 943, in _send_handling_auth
    response = self._send_handling_redirects(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/httpx/_client.py", line 980, in _send_handling_redirects
    response = self._send_single_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/httpx/_client.py", line 1016, in _send_single_request
    response = transport.handle_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/httpx/_transports/default.py", line 230, in handle_request
    with map_httpcore_exceptions():
  File "/opt/homebrew/Cellar/python@3.11/3.11.7_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/contextlib.py", line 158, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/Users/chakitrocks/Desktop/llm/env/lib/python3.11/site-packages/httpx/_transports/default.py", line 84, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ReadTimeout: timed out
```

I seem to get this error while trying to fetch the top 20 results using `VectorStoreIndex`.
Here's the link to the blog I was trying to implement: https://blog.llamaindex.ai/running-mixtral-8x7-locally-with-llamaindex-e6cebeabe0ab

I am getting a timeout while I am querying with `similarity_top_k=20`.

What is the workaround for this?
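
For reference, here is a minimal sketch of the setup being described, reconstructed from the linked blog post rather than copied from the actual index.py (the file layout, embed model, and prompt are assumptions). The timeout in the traceback fires inside the synchronous HTTP read while the model is still generating:

```python
# Reconstructed sketch of the failing setup (not the exact index.py).
# The llama-index Ollama wrapper ships with a fairly short default
# request_timeout, so a slow local model raises httpx.ReadTimeout before
# the full reply arrives.
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.core.embeddings import resolve_embed_model
from llama_index.llms.ollama import Ollama

Settings.embed_model = resolve_embed_model("local:BAAI/bge-small-en-v1.5")
Settings.llm = Ollama(model="mistral")  # default timeout left in place

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# similarity_top_k=20 packs many retrieved chunks into each prompt, so
# generation takes longer and the client-side read timeout is more likely
# to fire than with the default top_k.
query_engine = index.as_query_engine(similarity_top_k=20)
response = query_engine.query("What does the author think about Star Trek? Give details.")
print(response)
```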

@Chakit22 commented on GitHub (Feb 9, 2024):

I am running this on a `Mac M2 Pro with 16 GB RAM`.
@jmorganca Any workaround for this?

@stefan1981 commented on GitHub (Feb 21, 2024):

Same problem here ... using a GeForce RTX 4060 8GB, with Ollama via llama-index.

@jmaronas commented on GitHub (Feb 22, 2024):

Same here. Of all the samples I run, I get this error for only three of the prompts I am using.

@AbdullahAlAsad commented on GitHub (Feb 23, 2024):

Same here. Whenever I try to generate a long response, I get the timeout error:

```
Traceback (most recent call last):
File "/home/rgeorge/ws_fine_tune/ftenv/lib/python3.10/site-packages/httpcore/_exceptions.py", line 10, in map_exceptions
yield
File "/home/rgeorge/ws_fine_tune/ftenv/lib/python3.10/site-packages/httpcore/_backends/sync.py", line 126, in read
return self._sock.recv(max_bytes)
TimeoutError: timed out

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/rgeorge/ws_fine_tune/ftenv/lib/python3.10/site-packages/httpx/_transports/default.py", line 67, in map_httpcore_exceptions
yield
File "/home/rgeorge/ws_fine_tune/ftenv/lib/python3.10/site-packages/httpx/_transports/default.py", line 231, in handle_request
resp = self._pool.handle_request(req)
File "/home/rgeorge/ws_fine_tune/ftenv/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py", line 268, in handle_request
raise exc
File "/home/rgeorge/ws_fine_tune/ftenv/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py", line 251, in handle_request
response = connection.handle_request(request)
File "/home/rgeorge/ws_fine_tune/ftenv/lib/python3.10/site-packages/httpcore/_sync/connection.py", line 103, in handle_request
return self._connection.handle_request(request)
File "/home/rgeorge/ws_fine_tune/ftenv/lib/python3.10/site-packages/httpcore/_sync/http11.py", line 133, in handle_request
raise exc
File "/home/rgeorge/ws_fine_tune/ftenv/lib/python3.10/site-packages/httpcore/_sync/http11.py", line 111, in handle_request
) = self._receive_response_headers(**kwargs)
File "/home/rgeorge/ws_fine_tune/ftenv/lib/python3.10/site-packages/httpcore/_sync/http11.py", line 176, in _receive_response_headers
event = self._receive_event(timeout=timeout)
File "/home/rgeorge/ws_fine_tune/ftenv/lib/python3.10/site-packages/httpcore/_sync/http11.py", line 212, in _receive_event
data = self._network_stream.read(
File "/home/rgeorge/ws_fine_tune/ftenv/lib/python3.10/site-packages/httpcore/_backends/sync.py", line 124, in read
with map_exceptions(exc_map):
File "/usr/lib/python3.10/contextlib.py", line 153, in exit
self.gen.throw(typ, value, traceback)
File "/home/rgeorge/ws_fine_tune/ftenv/lib/python3.10/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
raise to_exc(exc) from exc
httpcore.ReadTimeout: timed out
```

@stefan1981 commented on GitHub (Feb 23, 2024):

I was able to solve the error by using the ollama Python library directly, not the llamaindex-ollama one. You can also experiment with the ollama installation in the terminal.
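
For reference, a minimal sketch of that approach with the official ollama Python library (https://github.com/ollama/ollama-python); the model name and prompt are just illustrative:

```python
# Sketch: call the local Ollama server through the official Python client
# instead of the llama-index wrapper.
import ollama

response = ollama.chat(
    model="mistral",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])

# Streaming also sidesteps long silent waits, since bytes keep arriving
# while the model generates instead of after it finishes:
for chunk in ollama.chat(
    model="mistral",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    stream=True,
):
    print(chunk["message"]["content"], end="", flush=True)
```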

@bmizerany commented on GitHub (Mar 11, 2024):

@Chakit22 Does using the [official ollama python](https://github.com/ollama/ollama-python) library solve your issue?

@Felix-hans commented on GitHub (Apr 11, 2024):

For people who might be forced to use the llama_index built-in Ollama integration, I suggest increasing the `request_timeout`: `Ollama(model="mistral", request_timeout=60.0)`
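
Expanded into a runnable sketch (the 300-second value is arbitrary; later comments report needing 500 to 600 on slower machines):

```python
# Sketch of the workaround: raise the client-side read timeout on the
# llama-index Ollama wrapper. request_timeout is in seconds and needs to
# cover model load plus full generation time on your hardware.
from llama_index.core import Settings
from llama_index.llms.ollama import Ollama

Settings.llm = Ollama(model="mistral", request_timeout=300.0)
```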

@jjzhoujun commented on GitHub (Apr 16, 2024):

> For people who might be forced to use the llama_index internal Ollama deploy, I suggest trying to increase the request_timeout: Ollama(model="mistral",request_timeout=60.0)

How do I increase the `request_timeout` for a local ollama?

@findnix commented on GitHub (Apr 16, 2024):

I run ollama on very lean hardware, which means the response times are very poor. The client side very often terminates with a timeout, both in python/langchain and on the command line. The remedy is to increase the `request_timeout` parameter, e.g. `Ollama(model="mistral", request_timeout=...)`.
It would be nice to be able to configure this parameter.

Thanks for your help
Ronald

@caio-vinicius commented on GitHub (Apr 20, 2024):

I'm having the same problem. I just found out that a 60-second timeout was not enough; using 500 worked. Here's my code:

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.core.embeddings import resolve_embed_model
from llama_index.llms.ollama import Ollama

documents = SimpleDirectoryReader("data").load_data()
Settings.embed_model = resolve_embed_model("local:BAAI/bge-small-en-v1.5")
Settings.llm = Ollama(model="mistral", request_timeout=60)  # with 500 it works
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(llm=Settings.llm)
response = query_engine.query("The essay is cool?")
```

*the print statements I had were removed

Result

```
Loaded 1 documents
EMBED MODEL: model_name='BAAI/bge-small-en-v1.5' embed_batch_size=10 callback_manager=<llama_index.core.callbacks.base.CallbackManager object at 0x7fb6b9c4b2c0> max_length=512 normalize=True query_instruction=None text_instruction=None cache_folder=None
LLM: callback_manager=<llama_index.core.callbacks.base.CallbackManager object at 0x7fb6b9c4b2c0> system_prompt=None messages_to_prompt=<function messages_to_prompt at 0x7fb76ac38900> completion_to_prompt=<function default_completion_to_prompt at 0x7fb76ac6bf60> output_parser=None pydantic_program_mode=<PydanticProgramMode.DEFAULT: 'default'> query_wrapper_prompt=None base_url='http://localhost:11434' model='mistral' temperature=0.75 context_window=3900 request_timeout=500.0 prompt_key='prompt' additional_kwargs={}
INDEX: <llama_index.core.indices.vector_store.base.VectorStoreIndex object at 0x7fb6b93cc8f0>
QUERY ENGINE: <llama_index.core.query_engine.retriever_query_engine.RetrieverQueryEngine object at 0x7fb6b036d100>
RESPONSE:  The essay appears to be about the author's experiences, reflections, and insights related to various topics such as essay writing, Florence, abstract concepts, old computers, art, startups, and Y Combinator. It seems to touch upon themes of adaptation, independence, rapid change, and the influence of customs on certain fields. However, without directly referencing the provided context, it is essential to keep in mind that this answer is a general interpretation based on the given excerpts.
```

@chethanmh commented on GitHub (May 1, 2024):

I faced the same timeout issue on a Mac even with a 120-second timeout. I changed it to 600 after seeing the above post. It works now!

@Aryan-Deshpande commented on GitHub (May 4, 2024):

Has this problem been solved?

@firasarfa commented on GitHub (May 7, 2024):

Hey everyone ... I basically had the same problem because of my poor connection. After some research, it turns out you have to keep increasing the **request_timeout** until it works (it worked for me). Have fun troubleshooting :)

@jmorganca commented on GitHub (May 7, 2024):

Hi folks, yes, sometimes models take well over 60s to load on machines with slower memory I/O. I would configure `request_timeout` to be at least 10 minutes for this case. Hope this helps!

@chethanmh commented on GitHub (May 8, 2024):

I tried storing the vectors in a db, then in a separate Python script loading the vectors and querying a question. Even with this, it takes 3 minutes. I was under the impression that only generating the vectors takes a long time. Has anyone else seen this, where a script that loads vectors from a db and queries is also slow?

@stefan1981 commented on GitHub (May 16, 2024):

Just use llama-cpp instead of ollama. I switched and it works.

@monkrus commented on GitHub (May 16, 2024):

Worked well with 600. Just ensure you add a timeout to every model used in your code.

@Arslan-Mehmood1 commented on GitHub (Dec 27, 2024):

To keep the ollama model loaded in memory when using langchain ollama, pass `keep_alive=-1`.
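
As a sketch, assuming the langchain-ollama integration (model name and prompt are illustrative):

```python
# Sketch: keep_alive=-1 asks the server never to unload the model between
# requests, so later calls skip the slow model-load step that often trips
# the client-side timeout.
from langchain_ollama import ChatOllama

llm = ChatOllama(model="mistral", keep_alive=-1)
print(llm.invoke("Why is the sky blue?").content)
```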

@felixmarch commented on GitHub (Jan 16, 2025):

How can we pass the `request_timeout` to the API?

I tried like this but it did not seem to work 😕

```
$ while true ; do curl http://localhost:11434/api/chat -d '{ "model": "llama3.1:70b", "keep_alive": "0", "options": {"request_timeout": "900.0", "num_thread": 16},  "messages": [{"role": "user", "content": "Why is the sky blue?"}]}' ; sleep 1 ; done
...
...
<html><body><h1>504 Gateway Time-out</h1>
The server didn't respond in time.
</body></html>
...
...
```
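
One note that may explain this: `request_timeout` is a parameter of the llama-index client, not an Ollama API option, so placing it in `options` has no effect on the server, and the `504 Gateway Time-out` HTML above appears to come from a reverse proxy in front of the server rather than from Ollama itself. A sketch of raising the timeout on the client side instead, using the official Python client (my assumption: its extra keyword arguments, including `timeout`, are forwarded to the underlying `httpx.Client`):

```python
# Sketch: set a long client-side timeout instead of trying to pass
# request_timeout through the /api/chat "options" field. The 900 seconds
# mirrors the value tried in the curl loop above.
from ollama import Client

client = Client(host="http://localhost:11434", timeout=900)
response = client.chat(
    model="llama3.1:70b",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])
```

If a proxy sits between the client and the server, its own read timeout would have to be raised separately; no client-side setting can fix that.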