[GH-ISSUE #1411] Issue connecting to 11434 for local model query following sample #62788

Closed
opened 2026-05-03 10:19:08 -05:00 by GiteaMirror · 6 comments

Originally created by @OpenSpacesAndPlaces on GitHub (Dec 7, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1411

I'm following this example as a basis for getting started:
https://www.youtube.com/watch?v=tvs350imHLY
https://gist.github.com/mneedham/eec9246a5ce95dc792f2e73b16dfe78e

Everything is working well except for actually running the query:
response = query_engine.query("What is my question?")

Which throws an error connecting to the Ollama service that was started:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/generate/ (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fa201bb77f0>: Failed to establish a new connection: [Errno 111] Connection refused'))

I also tried this instead of query_engine.query, but got the same error:

url = "http://localhost:11434/api/generate"
data = {
    "model": "llama2",
    "prompt": "What is my question?"
}
response = requests.post(url, json=data)

Running:
WSL - Ubuntu 22.04.3 LTS
Python 3.10

Any help appreciated!!!!
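(Editor's note: a minimal sketch for checking whether anything is listening on port 11434 before debugging further; it assumes a local Ollama server, whose root endpoint normally answers with a short status message.)

```python
import requests

# Quick connectivity check against the default Ollama port.
try:
    r = requests.get("http://localhost:11434", timeout=5)
    print(r.status_code, r.text)  # a running server typically replies "Ollama is running"
except requests.exceptions.ConnectionError:
    print("Nothing is listening on localhost:11434 -- the Ollama server is not running.")
```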


@OpenSpacesAndPlaces commented on GitHub (Dec 7, 2023):

@mneedham in case you have any ideas from when you made this demo.
Thanks for any thoughts!


@OpenSpacesAndPlaces commented on GitHub (Dec 7, 2023):

I think I'm past this error.

I needed to specifically install ollama:
curl https://ollama.ai/install.sh | sh

Then open a separate prompt to run:
ollama serve

That appears to make the connection, but it then fails with:
ValueError: Ollama call failed with status code 404. Details: model 'zephyr' not found, try pulling it first

Which should be fixed with:
ollama pull zephyr

Best I can tell, the original example missed some startup steps that had likely already been done for another demo.
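(Editor's note: a minimal end-to-end sketch of the flow described above, assuming the install script has been run, ollama serve or the background service is up, and ollama pull zephyr has completed.)

```python
import requests

OLLAMA_URL = "http://localhost:11434"

# One-shot generation against /api/generate; "stream": False asks the server
# to return a single JSON object instead of a stream of chunks.
payload = {
    "model": "zephyr",
    "prompt": "What is my question?",
    "stream": False,
}
resp = requests.post(f"{OLLAMA_URL}/api/generate", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["response"])
```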


@BruceMacD commented on GitHub (Dec 7, 2023):

Thanks for providing the resolution, @OpenSpacesAndPlaces. Normally, when Ollama is installed via the install.sh script, it starts a service running in the background; if that service isn't available, it is necessary to run ollama serve manually.

Resolving this for now since there is no more to do here. Feel free to let us know if you hit any other issues.


@ayttop commented on GitHub (Sep 8, 2024):

curl http://localhost:11434/api/chat -d "{\"model\": \"llama3.1:8b\", \"stream\": false, \"messages\": [{\"role\": \"user\", \"content\": \"Why is the sky green?\"}]}"
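(Editor's note: a rough Python equivalent of that curl command; a sketch assuming the same local server and that the llama3.1:8b model has already been pulled.)

```python
import requests

# Same request as the curl command above, sent to the /api/chat endpoint.
payload = {
    "model": "llama3.1:8b",
    "stream": False,
    "messages": [{"role": "user", "content": "Why is the sky green?"}],
}
resp = requests.post("http://localhost:11434/api/chat", json=payload, timeout=120)
resp.raise_for_status()
# With "stream": false the reply is a single JSON object; the text is under message.content.
print(resp.json()["message"]["content"])
```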


@Mugane commented on GitHub (Feb 23, 2025):

> curl http://localhost:11434/api/chat -d "{\"model\": \"llama3.1:8b\", \"stream\": false, \"messages\": [{\"role\": \"user\", \"content\": \"Why is the sky green?\"}]}"

How can I do this using the URL only? Is it possible (GET, not POST)?


@BruceMacD commented on GitHub (Feb 24, 2025):

@Mugane No, that isn't possible.

Reference: github-starred/ollama#62788