[GH-ISSUE #2132] How to solve ConnectionError ([Errno 111] Connection refused) #26978

Closed
opened 2026-04-22 03:47:38 -05:00 by GiteaMirror · 28 comments

Originally created by @yliu2702 on GitHub (Jan 22, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2132

Hello, I tried to access the 'llama 2' and 'mistral' models to build a local open-source LLM chatbot. However, maybe because I accessed your website too often during debugging, I ran into this error:

```
ConnectionError: HTTPConnectionPool(host='0.0.0.0', port=11434): Max retries exceeded with url: /api/chat (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe32765ca30>: Failed to establish a new connection: [Errno 111] Connection refused'))
```

I tried my code through:

```python
r = requests.post(
    "http://0.0.0.0:11434/api/chat",
    json={"model": model, "messages": messages, "stream": True, "options": {
        "temperature": temp
    }},
)
```

and also through LangChain, but both failed. So, how can I solve this problem so I can use Ollama again? Thanks!
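For reference, a minimal self-contained version of that request looks roughly like the sketch below. It assumes the Ollama server is actually running and reachable on 127.0.0.1:11434; the model name, message, and temperature are placeholder values:

```python
import json
import requests

model = "mistral"                                   # placeholder model name
messages = [{"role": "user", "content": "Hello!"}]  # placeholder conversation
temp = 0.7

# 0.0.0.0 is a bind address, not a destination; target 127.0.0.1 (or the
# server's real IP) instead.
r = requests.post(
    "http://127.0.0.1:11434/api/chat",
    json={"model": model, "messages": messages, "stream": True,
          "options": {"temperature": temp}},
    stream=True,
)
r.raise_for_status()

# With "stream": True the API returns one JSON object per line.
for line in r.iter_lines():
    if line:
        chunk = json.loads(line)
        print(chunk.get("message", {}).get("content", ""), end="", flush=True)
```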


@jmorganca commented on GitHub (Jan 22, 2024):

@yliu2702 sorry you're hitting this error! May I ask if this is on macOS or Linux?


@juancalderonbustillo commented on GitHub (Jan 22, 2024):

In case this helps, I am experiencing the same issue on a Mac, I believe since Thursday. For more reference, when I run the following commands in bash, I get the following errors:

--> ollama run mistral
Error: could not connect to ollama app, is it running?

--> ollama serve
2024/01/22 11:04:11 images.go:737: total blobs: 84
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x2 addr=0x10 pc=0x10518cd0c]

goroutine 1 [running]:
github.com/jmorganca/ollama/server.deleteUnusedLayers.func1({0x1400007ad90, 0x6c}, {0x10577ae68?, 0x14000466b60?}, {0x1400007ad90?, 0x6c?})
/Users/jmorgan/workspace/ollama/server/images.go:686 +0x41c
path/filepath.walk({0x1400007ad90, 0x6c}, {0x10577ae68, 0x14000466b60}, 0x1400031f930)
/opt/homebrew/Cellar/go/1.21.3/libexec/src/path/filepath/path.go:492 +0xc8
path/filepath.walk({0x14000108d20, 0x52}, {0x10577ae68, 0x14000466a90}, 0x1400031f930)
/opt/homebrew/Cellar/go/1.21.3/libexec/src/path/filepath/path.go:516 +0x1d4
path/filepath.walk({0x14000406a00, 0x3f}, {0x10577ae68, 0x140004669c0}, 0x1400031f930)
/opt/homebrew/Cellar/go/1.21.3/libexec/src/path/filepath/path.go:516 +0x1d4
path/filepath.walk({0x14000406940, 0x37}, {0x10577ae68, 0x140004668f0}, 0x1400031f930)
/opt/homebrew/Cellar/go/1.21.3/libexec/src/path/filepath/path.go:516 +0x1d4
path/filepath.walk({0x14000417dd0, 0x24}, {0x10577ae68, 0x14000466820}, 0x1400031f930)
/opt/homebrew/Cellar/go/1.21.3/libexec/src/path/filepath/path.go:516 +0x1d4
path/filepath.Walk({0x14000417dd0, 0x24}, 0x1400031f930)
/opt/homebrew/Cellar/go/1.21.3/libexec/src/path/filepath/path.go:587 +0x6c
github.com/jmorganca/ollama/server.deleteUnusedLayers(0x0, 0x1400031faa8, 0x0)
/Users/jmorgan/workspace/ollama/server/images.go:690 +0x6c
github.com/jmorganca/ollama/server.PruneLayers()
/Users/jmorgan/workspace/ollama/server/images.go:739 +0x248
github.com/jmorganca/ollama/server.Serve({0x1057785b8, 0x140004455a0})
/Users/jmorgan/workspace/ollama/server/routes.go:875 +0x3c
github.com/jmorganca/ollama/cmd.RunServer(0x1400046c300?, {0x105b984a0?, 0x4?, 0x1051ac07a?})
/Users/jmorgan/workspace/ollama/cmd/cmd.go:1038 +0x178
github.com/spf13/cobra.(*Command).execute(0x1400044d500, {0x105b984a0, 0x0, 0x0})
/Users/jmorgan/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940 +0x658
github.com/spf13/cobra.(*Command).ExecuteC(0x1400044c900)
/Users/jmorgan/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068 +0x320
github.com/spf13/cobra.(*Command).Execute(...)
/Users/jmorgan/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(...)
/Users/jmorgan/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:985
main.main()
/Users/jmorgan/workspace/ollama/main.go:11 +0x54


@juancalderonbustillo commented on GitHub (Jan 22, 2024):

My error was solved by just uninstalling and reinstalling... maybe some file got corrupted.


@smanmay commented on GitHub (Jan 22, 2024):

Hi @jmorganca,
I have installed Ollama using install.sh on my EC2 machine (Linux).
I am able to access the service inside the EC2 instance using localhost/127.0.0.1/0.0.0.0:11434.
But when I try to access it using the private/public IP of the system, it fails with "Failed to connect to IP port 11434: Connection refused".
I tried setting OLLAMA_ORIGINS using both the private and public IP, and the same error still shows.
Basically, I want to access the ollama service from outside the EC2 machine. I have also opened all the relevant ports in AWS.
Not sure how to solve the problem. Could you help?


@mxyng commented on GitHub (Jan 22, 2024):

`Connection refused` indicates the service is not exposed/listening on this address/port.

Is ollama configured to listen on 0.0.0.0? It only listens on localhost by default, so if you want to use it remotely, [configuring](https://github.com/jmorganca/ollama/blob/main/docs/faq.md#how-can-i-expose-ollama-on-my-network) `OLLAMA_HOST` is a requirement.
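A quick way to test that advice (a sketch; run it on the machine hosting Ollama, and substitute the server's real IP in the curl check):

```bash
# Run the server in the foreground, bound to all interfaces.
OLLAMA_HOST=0.0.0.0:11434 ollama serve

# From another machine, confirm it is reachable (replace the placeholder IP):
curl http://<server-ip>:11434/
# Expected response: "Ollama is running"
```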


@yliu2702 commented on GitHub (Jan 22, 2024):

> @yliu2702 sorry you're hitting this error! May I ask if this is on macOS or Linux?

On macOS; but I also ran it in a Linux environment with the same issue. I'll try to reinstall Ollama in the environment. Looking forward to your guidance or solutions. Thanks!


@smanmay commented on GitHub (Jan 23, 2024):

> `Connection refused` indicates the service is not exposed/listening on this address/port.
>
> Is ollama configured to listen on 0.0.0.0? It only listens on localhost by default, so if you want to use it remotely, configuring `OLLAMA_HOST` is a requirement.

Thank you for your help. The updated documentation worked. The following is the working configuration for AWS:

```
[Service]
Environment="OLLAMA_HOST=private_ip"
Environment="OLLAMA_ORIGINS=http://public_ip:11434"
```


@ganakee commented on GitHub (Jan 24, 2024):

I am having this same issue. After compiling ollama for AMD GPUs, I used the manual install method.
I put the ollama.service file in /etc/systemd/system:

```
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/home/s/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
Environment="OLLAMA_HOST=192.168.200.71:11434"
Environment="OLLAMA_ORIGINS=http://192.168.200.71:11434"

[Install]
WantedBy=default.target
```

I run `sudo systemctl daemon-reload` and `sudo systemctl restart ollama`. I have also rebooted several times.

I go to http://192.168.200.71:11434/ in the browser and see "Ollama is running".

However, I cannot connect to this server.

Using litellm, I use a simple:

```python
response = completion(
    model="ollama/llama2",
    messages=[{"content": user_prompt, "role": "user"}],
    api_base="http://192.168.200.71:11434",
)
```

This fails with `litellm.exceptions.APIConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))`.

I added to ~/.bashrc:

```
export OLLAMA_HOST=192.168.200.71
export OLLAMA_ORIGINS=http://192.168.200.71:11434
```

If I try to run `ollama run llama2` I get:
`Error: Post "http://192.168.200.71:11434/api/chat": EOF`

I was able, once, to get `ollama run llama2` to download the llama2 model, but nothing since then.


@ganakee commented on GitHub (Jan 25, 2024):

I did several more hours of work on this.

The issue seems to be somehow related to copying the custom-compiled file to /usr/bin/local/ollama.gpu. No matter what I do, if I try to use systemd to load the ollama service with the GPU version, it does NOT work. If I do a fresh install of ollama, that does work. I checked the permissions and ownership, and they are identical for ollama and ollama.gpu (my version). I can run my custom-compiled version from the command line and get it to bind to 192.168.200.71, but I cannot get it to run via systemd.


@ganakee commented on GitHub (Jan 25, 2024):

OK. If anyone else gets this issue: the problem for me was with the custom-compiled version of ollama and a missing override environment variable in the systemd config file.

I compiled ollama for AMD systems using the AMD RX 6650M card. That card has GPU capacity but is not officially supported by AMD for GPU use. With some tweaking, I can get it to compile anyway.

The issue for me with failed connections was that the /etc/systemd/system/ollama.service file needed:
`Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"`
This is necessary for the technically unsupported AMD GPU to downgrade the gfx instruction set to 1030.
Since this was missing, the ollama service started, but `journalctl -n 50 -u ollama` showed that ollama subtly complained that it could not find the gfx1032 instruction file for Tensor files. This is exactly what `Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"` fixes.

(I have `export HSA_OVERRIDE_GFX_VERSION=10.3.0` in my ~/.bashrc file but, obviously, the systemd service does not "see" this user environment variable.)

Only after careful review of the journalctl output did I see the likely source of the error. Note, ollama still reports as running; it just cannot "do" anything, apparently due to the reliance on GPU drivers, which were wrong without the HSA override.
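For anyone in the same situation, the relevant addition to the unit file looks roughly like this (a sketch; the GFX version value is specific to the RX 6650M workaround described above):

```
# /etc/systemd/system/ollama.service (excerpt)
[Service]
ExecStart=/usr/local/bin/ollama serve
Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
```

After `sudo systemctl daemon-reload && sudo systemctl restart ollama`, `journalctl -u ollama -n 50` should no longer complain about the missing gfx1032 files.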


@yliu2702 commented on GitHub (Jan 27, 2024):

Has anyone solved this issue by resetting the environment? I still don't know what to do after reinstalling Ollama. I need help from the developers. Or does anyone know how to load a model from Hugging Face?


@Z33DD commented on GitHub (Feb 21, 2024):

How I resolved this issue

It looks like the default CORS policy is to allow only localhost, so you need to change it with environment variables.

As root, edit this file: /etc/systemd/system/ollama.service

```
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_ORIGINS=http://0.0.0.0:11434"

[Install]
WantedBy=default.target
```

The only changes were the lines:

```
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_ORIGINS=http://0.0.0.0:11434"
```

After that, reload the daemon and restart the service:

```bash
sudo systemctl daemon-reload
sudo systemctl restart ollama.service
```

Also, ensure your firewall is not blocking port 11434:

```bash
sudo ufw allow 11434
sudo ufw reload
```
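Two quick checks after applying this (a sketch; substitute the server's actual LAN IP):

```bash
# On the server: confirm ollama is now listening on all interfaces, not just 127.0.0.1.
sudo ss -ltnp | grep 11434

# From another machine: list the installed models over the network.
curl http://<server-ip>:11434/api/tags
```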

@NetRxn commented on GitHub (Mar 1, 2024):

> In case this helps, I am experiencing the same issue on a Mac, I believe since Thursday. For more reference, when I run the following commands in bash, I get the following errors:
>
> --> ollama run mistral
> Error: could not connect to ollama app, is it running?
>
> --> ollama serve
> [panic: runtime error: invalid memory address or nil pointer dereference, followed by the stack trace quoted in full above]

In my case, things were working well originally. I had uninstalled ollama, but had not deleted the models I had used from the previous install.

When I tried to re-install ollama, the above error occurred and persisted through numerous uninstalls/reinstalls.

Once the models from the old install were removed, everything returned to normal. 😂


@mxyng commented on GitHub (Mar 11, 2024):

> ConnectionError: HTTPConnectionPool(host='0.0.0.0', port=11434):

@yliu2702 you're making a request to 0.0.0.0 port 11434. Did you configure ollama to bind to 0.0.0.0:11434? If it's not configured to bind to 0.0.0.0, this failure is expected. See my comment here: https://github.com/ollama/ollama/issues/2132#issuecomment-1904515289

Try making the same request to 127.0.0.1:11434. You'll also likely run into a 404 error if the model doesn't exist on the system. Consider calling `ollama pull model` or `POST /api/pull {'model': model}` before the call to `/api/chat`.
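In code, that advice translates to roughly the following (a sketch; the model name is a placeholder, and the server is assumed to be running locally on the default port):

```python
import requests

BASE = "http://127.0.0.1:11434"
model = "llama2"  # placeholder model name

# Pull the model first so /api/chat does not return 404 for a missing model.
requests.post(f"{BASE}/api/pull",
              json={"model": model, "stream": False},
              timeout=600).raise_for_status()

r = requests.post(
    f"{BASE}/api/chat",
    json={"model": model,
          "messages": [{"role": "user", "content": "Hello"}],
          "stream": False},
)
r.raise_for_status()
print(r.json()["message"]["content"])
```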


@applebiter commented on GitHub (Mar 28, 2024):

Same situation here: Linux Mint 21.3 Cinnamon, Linux kernel 5.15.0-101-lowlatency. Older machine, AMD A10-5800B APU, 31.3 GB RAM, Nvidia GTX 1050 Ti (lol). Everything seemed to be working fine. My development machine and the machine hosting the ollama system are identical, and I've been hacking away at a Python app that is supposed to integrate with the ollama system. Then I started getting this error and I can't isolate anything on my side causing it. It looks like I've violated some policy by sending malformed requests (I'm developing, so what) and now my dev machine is blacklisted. Maybe that's totally BS and the problem is in my code, but if there is some kind of mechanism in ollama for this, we ought to know about it and how to override it.


@RELNO commented on GitHub (Apr 3, 2024):

This appears to be related to issue https://github.com/ollama/ollama/issues/3476. However, I see `Error: Post "http://0.0.0.0:11434/api/chat": EOF` only when running the latest version (0.1.3), and only on vision models (e.g. llava). When running Docker images of older versions, it works well.


@berchan commented on GitHub (Apr 19, 2024):

launchctl setenv did not work on my macOS. I added `export OLLAMA_HOST="0.0.0.0"` to my .bash_profile, then sourced .bash_profile, and it works.
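For comparison, the two approaches look roughly like this (a sketch; which one takes effect depends on whether Ollama runs as the menu-bar app or via `ollama serve` in a terminal):

```bash
# Approach 1: set the variable for GUI apps, then quit and reopen the Ollama menu-bar app.
launchctl setenv OLLAMA_HOST "0.0.0.0"

# Approach 2: set it for shell sessions (what worked in the comment above),
# then start the server from that shell.
echo 'export OLLAMA_HOST="0.0.0.0"' >> ~/.bash_profile
source ~/.bash_profile
ollama serve
```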


@ArslanKAS commented on GitHub (Apr 24, 2024):

> How I resolved this issue
>
> [quotes @Z33DD's full systemd / OLLAMA_HOST / OLLAMA_ORIGINS / ufw solution above]

Genius. This worked 100%


@narensrini-ds commented on GitHub (Apr 25, 2024):

> > How I resolved this issue
> >
> > [quotes @Z33DD's full systemd / OLLAMA_HOST / OLLAMA_ORIGINS / ufw solution above]
>
> Genius. This worked 100%

Thanks for sharing this solution @ArslanKAS !


@deepakyadavdx commented on GitHub (May 12, 2024):

Did you find a solution for this?


@pdevine commented on GitHub (May 14, 2024):

Going to close this out since this is all covered in the [FAQ](https://github.com/ollama/ollama/blob/main/docs/faq.md).


@jmio23 commented on GitHub (Jun 17, 2024):

The solution doesn't work on Ubuntu 24: the file /etc/systemd/system/ollama.service gets overwritten and the edits removed, unless you put them at the top, where they seem to do nothing at all... but there is no information about how to format this and I got fed up trying... thanks, Canonical.
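One workaround worth trying (a sketch; it assumes the service unit is named ollama and that something keeps rewriting the main unit file) is a systemd drop-in override, which lives in a separate file and survives changes to /etc/systemd/system/ollama.service:

```bash
sudo systemctl edit ollama
# In the editor that opens, add:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0:11434"
# systemd writes this to /etc/systemd/system/ollama.service.d/override.conf
sudo systemctl daemon-reload
sudo systemctl restart ollama
systemctl show ollama | grep OLLAMA_HOST   # confirm the override is active
```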


@ntelo007 commented on GitHub (Aug 27, 2024):

This solution doesn't work if you run ollama from a container. Maybe it works if you modify the desktop application. Any solution for the docker version?
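For the Docker image, the bind address is controlled with a container environment variable rather than systemd (a sketch; the container and volume names are placeholders, and depending on the image version OLLAMA_HOST may already default to 0.0.0.0 inside the container, in which case publishing the port is the important part):

```bash
docker run -d --name ollama \
  -p 11434:11434 \
  -e OLLAMA_HOST=0.0.0.0:11434 \
  -v ollama:/root/.ollama \
  ollama/ollama
```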


@loglux commented on GitHub (Sep 5, 2024):

I'm joining the last question. If you try to connect from a Docker container using the ollama Python library, you get `An error occurred: [Errno 111] Connection refused`.
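That usually happens because, inside another container, localhost refers to that container itself rather than to the Ollama container. One way around it is to point the ollama Python client at the Ollama container's network name (a sketch; the hostname `ollama` and the model name are assumptions about the setup):

```python
from ollama import Client

# "ollama" is assumed to be the Docker network alias / compose service name
# of the container running the Ollama server.
client = Client(host="http://ollama:11434")

response = client.chat(
    model="llama3",  # placeholder model name
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])
```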


@ntelo007 commented on GitHub (Sep 6, 2024):

I commented out the base_url parameter in LangChain's ChatOllama object, and I added a Host Ollama variable in my docker-compose file and it worked.
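A docker-compose arrangement along those lines might look like this (a sketch; service names, the image tag, and the exact variable your app reads are assumptions about the setup described above):

```yaml
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama

  app:
    build: .
    environment:
      # The app (or the ollama client library it uses) picks up the host from
      # this variable instead of a hard-coded localhost base_url.
      - OLLAMA_HOST=http://ollama:11434
    depends_on:
      - ollama

volumes:
  ollama:
```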


@new-Matthew commented on GitHub (Sep 6, 2024):

> I commented out the base_url parameter in LangChain's ChatOllama object, and I added a Host Ollama variable in my docker-compose file and it worked.

Hello! Can you share the part of the code you modified to make it work? I couldn't solve the problem.


@danielrsantana-humaitrix commented on GitHub (Oct 2, 2024):

This is how I solved this issue, using an Azure Virtual Machine with a FastAPI Docker image.

1. Use the Docker image instead of `curl -fsSL https://ollama.com/install.sh | sh`:

```bash
docker run -d --name ollama -p 11434:11434 --network mynet -v ollama_storage:/root/.ollama ollama/ollama:latest
```

2. When running your API image, use the same network name and add `OLLAMA_HOST=ollama` before your app start command:

```bash
docker run -it -d \
  -p "$server_port":"$server_port" \
  --name "$docker_image" \
  --network mynet \
  "$docker_image_url" \
  /bin/sh -c "cd src && OLLAMA_HOST=ollama python main.py"
```

How do you know it will work: if you enter your network and run the following commands, they will succeed:

```bash
docker exec -it your_container_id /bin/bash

# if your docker instance can run this, your settings are good
curl http://ollama:11434/api/chat -d '{
  "model": "llama3.2:1b",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'
```

@SaM-0777 commented on GitHub (Feb 24, 2025):

Make sure ollama is running:

```bash
ollama serve
```

Use http://127.0.0.1:11434/api/generate instead of http://localhost:11434/api/generate.

Reference: github-starred/ollama#26978