[GH-ISSUE #1579] Error while running ollama locally. #868

Closed

Originally created by @nehalmathew1996 on GitHub (Dec 18, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1579

Originally assigned to: @dhiltgen on GitHub.

ConnectionError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/generate/ (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000002298AE1EF50>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
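
Before calling the API, it is worth confirming that the server is actually listening: "[WinError 10061] ... actively refused" means nothing is accepting connections on port 11434. A minimal sketch using the same requests/urllib3 stack as the traceback above (added for illustration, not part of the original report):

```python
import requests

# Ollama answers a plain GET on its root URL with "Ollama is running".
# A ConnectionError here means the server is not running, or is bound
# to a different address/port than the client is using.
try:
    r = requests.get("http://localhost:11434", timeout=5)
    print(r.text)  # expect "Ollama is running"
except requests.exceptions.ConnectionError as err:
    print("Ollama server is not reachable:", err)
```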

@duhow commented on GitHub (Dec 18, 2023):

Maybe this can help?

https://github.com/jmorganca/ollama/blob/86b0dd4b165497e08ec331e3c2c2aa229beb09db/docs/faq.md#how-can-i-expose-ollama-on-my-network

@technovangelist commented on GitHub (Dec 19, 2023):

@nehalmathew1996 can you tell us more about what you are trying to do?

@Hidayathamir commented on GitHub (Dec 20, 2023):

## Get Started

1. Run the Ollama Docker container:

   ```shell
   sudo docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
   ```

   For more detailed information, refer to the [Ollama Quickstart Docker](https://hub.docker.com/r/ollama/ollama). Please note this runs on CPU only, so the AI will respond slowly; if you have a GPU, you can follow the instructions to run the container with your GPU to improve performance.

2. Pull the llama2 model:

   ```shell
   curl --location 'http://localhost:11434/api/pull' \
   --header 'Content-Type: application/json' \
   --data '{
       "name": "llama2:7b"
   }'
   ```

3. Chat with llama2:

   ```shell
   curl --location 'http://localhost:11434/api/chat' \
   --header 'Content-Type: application/json' \
   --data '{
       "model": "llama2:7b",
       "messages": [
           {
               "role": "user",
               "content": "why sky blue"
           }
       ]
   }'
   ```

I created a PR about this:

https://github.com/jmorganca/ollama/pull/1622
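
For readers hitting this from Python, as in the original traceback, here is a rough requests-based equivalent of the curl commands above; a sketch added for illustration, not part of the original comment:

```python
import requests

BASE_URL = "http://localhost:11434"

# Pull the model first; with "stream": False the call blocks until the
# download finishes, so allow a generous timeout.
requests.post(f"{BASE_URL}/api/pull",
              json={"name": "llama2:7b", "stream": False},
              timeout=600).raise_for_status()

# Then chat with it.
resp = requests.post(f"{BASE_URL}/api/chat",
                     json={"model": "llama2:7b",
                           "messages": [{"role": "user", "content": "why sky blue"}],
                           "stream": False},
                     timeout=300)
print(resp.json()["message"]["content"])
```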

@shreyasd2301 commented on GitHub (Jan 20, 2024):

> ConnectionError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/generate/ (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000002298AE1EF50>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))

What was the issue? I'm facing a similar problem.

@dhiltgen commented on GitHub (Jan 27, 2024):

@nehalmathew1996 are you still having problems? Can you upgrade to 0.1.22 and see if that resolves your problem?

@dhiltgen commented on GitHub (Feb 1, 2024):

If you're still having problems with 0.1.22 or newer, please re-open.

@eons2long commented on GitHub (Mar 17, 2024):

I have the same problem using Docker; sometimes it fails with this:

[ollama] Connection Error, HTTPConnectionPool(host='172.17.0.1', port=11434): Read timed out. (read timeout=60)

My ollama version is 0.1.28.

@imen-ben-atig commented on GitHub (Mar 25, 2024):

same problem here

@dhiltgen commented on GitHub (Mar 25, 2024):

@yixian3500 @imen-ben-atig Ollama is a client-server architecture, and this error is the client failing to connect to the server. The underlying problem is likely a crash or hang in the server, and your problems are most likely unrelated to whatever was going wrong on Dec 18th when this issue was opened. Please go ahead and file new issues for your individual connection problems, and include the server.log so we can investigate.

See https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md for guidance on how to find your server log.

@mlatti commented on GitHub (Apr 5, 2024):

Install ollama on your system.
This may not cover all of the occurrences, but if you're like me, just getting started with LLMs, and your 101 tutorial doesn't mention this: you need to download and install ollama first.

@thepyper commented on GitHub (Apr 5, 2024):

Ok, I think I've solved that in my setup; let's see if it's useful to anybody else.

My setup is:

1. Windows 10, where I installed ollama (with OllamaSetup.exe)
2. WSL + Ubuntu, where I installed OpenDevin

Actually the issue is made of the following issues:

1. Check that ollama is actually running: in Windows 10 (Command Prompt or PowerShell), try

   ```
   curl 127.0.0.1:11434
   ```

   You should get an "Ollama is running" message.

2. Understand that WSL is like a virtual machine, so "127.0.0.1" inside WSL does NOT mean connecting to Windows 10, but connecting to the virtual environment inside WSL.

3. Figure out the actual IP of the Windows 10 machine as seen from WSL. I did it with a `traceroute www.google.com` command, which gave me the following:

   ```
   traceroute to www.google.com (142.250.180.132), 30 hops max, 60 byte packets
    1  DESKTOP-K5HF2NK.mshome.net (172.19.80.1)  0.315 ms  0.230 ms  0.207 ms
   ... more stuff...
   ```

   So my Windows 10 machine is seen from WSL + Ubuntu as 172.19.80.1, and my config.toml file in OpenDevin looks like:

   ```
   LLM_MODEL="ollama/llama2"
   LLM_API_KEY="na"
   LLM_BASE_URL="http://172.19.80.1:11434"
   LLM_EMBEDDING_MODEL="llama2"
   WORKSPACE_DIR="./workspace"
   ```

4. Still, things do not work, because by default ollama only accepts connections from localhost. So you need to set an environment variable, OLLAMA_HOST="0.0.0.0", on your Windows 10 machine. You can test this quickly in PowerShell: quit ollama, then open PowerShell and run:

   ```
   $env:OLLAMA_HOST="0.0.0.0"
   ollama serve
   ```

5. Now opening "localhost:3001" in a browser (in Windows 10) should give you a working OpenDevin. At least mine is doing something with ollama, as I can see in the console, but I'm still waiting for my response to "hello" :)

... yep, I got a silly reply, it's working!!!

Hope this helps.
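
As a programmatic complement to the traceroute step above, a small sketch for locating and testing the Windows-host endpoint from inside WSL2. It assumes the Windows host is the default-route gateway, which holds for WSL2's default NAT networking; the port and expected response follow the comment above:

```python
import subprocess
import requests

def windows_host_ip() -> str:
    # "ip route show default" prints e.g.
    # "default via 172.19.80.1 dev eth0 proto kernel"
    out = subprocess.check_output(["ip", "route", "show", "default"], text=True)
    return out.split()[2]

base_url = f"http://{windows_host_ip()}:11434"
r = requests.get(base_url, timeout=5)
print(base_url, "->", r.text)  # expect "Ollama is running"
```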

@mzeeshanarshad commented on GitHub (Feb 27, 2025):

Anyone still hitting this error should try increasing the timeout.
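
For example, with the requests client from the original traceback, a minimal sketch (the timeout values are arbitrary illustrations, not recommendations from the comment):

```python
import requests

# Slow (e.g. CPU-only) generations can exceed the default read timeout,
# producing "Read timed out" errors; give the server more time to answer.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2:7b", "prompt": "why is the sky blue", "stream": False},
    timeout=(5, 300),  # (connect timeout, read timeout) in seconds
)
print(resp.json()["response"])
```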

@wybaby commented on GitHub (Mar 3, 2025):

  1. Set the URL to "http://host.docker.internal:11434" (or your machine's IP) instead of 127.0.0.1 or localhost.
  2. Add an environment variable OLLAMA_HOST="0.0.0.0" (for Windows).
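
A minimal sketch of the first point, assuming the client code runs inside a Docker container on Docker Desktop (where host.docker.internal resolves to the host machine); added here for illustration, not part of the original comment:

```python
import requests

# Inside a container, "localhost" is the container itself, not the machine
# running Ollama; on Docker Desktop, host.docker.internal reaches the host.
base_url = "http://host.docker.internal:11434"
print(requests.get(base_url, timeout=5).text)  # expect "Ollama is running"
```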

@aleksadvaisly commented on GitHub (Mar 5, 2025):

Don't do that.

Instead of OLLAMA_HOST="0.0.0.0", set OLLAMA_HOST="127.0.0.1".

I found 0.0.0.0 not working when using the Python ollama library: 0.0.0.0 is a wildcard bind address for the server, not an address a client can reliably connect to.
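
For instance, with the ollama Python library, passing an explicit loopback address avoids the ambiguity; a sketch, assuming a default local install:

```python
import ollama

# 127.0.0.1 is a concrete, connectable loopback address, unlike 0.0.0.0.
client = ollama.Client(host="http://127.0.0.1:11434")
print(client.list())  # lists locally available models if the server is up
```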

@cadubarbosabr commented on GitHub (Jun 18, 2025):

Had the same issue.

Instead of using

base_url: str = "http://localhost:11434"

try

base_url: str = "http://127.0.0.1:11434"

(On some systems localhost resolves to the IPv6 address ::1 while the server listens only on IPv4, so the numeric address avoids the mismatch.)
