[GH-ISSUE #2194] Change the default 11434 port? #1253

Closed
opened 2026-04-12 11:02:09 -05:00 by GiteaMirror · 31 comments

Originally created by @CHesketh76 on GitHub (Jan 25, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2194

I am getting this error message `Error: listen tcp 127.0.0.1:11434: bind: address already in use` every time I run `ollama serve`. Would it be possible to have the option to change the port?
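The bind error means something is already listening on 11434, often an Ollama instance started by the desktop app or by a system service. A quick way to see what holds the port, as a rough sketch with standard Linux tools (`ss`, `systemctl`; adjust for your platform):

# Show the process currently listening on 11434
ss -ltnp | grep 11434

# If Ollama was installed as a systemd service, it is probably that
systemctl status ollama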


@CHesketh76 commented on GitHub (Jan 25, 2024):

Yes, I killed the process that was using it, but I am still getting this error message.


@pdevine commented on GitHub (Jan 25, 2024):

Hey @CHesketh76, this is covered in the [FAQ](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-expose-ollama-on-my-network), but the way to do it is with the `OLLAMA_HOST` env variable. You can use something like `OLLAMA_HOST=127.0.0.1:11435 ollama serve` to start ollama serving on port 11435.
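For completeness, a minimal sketch of the full workflow on a non-default port (the port and model name here are just examples):

# Start the server on an alternate port
OLLAMA_HOST=127.0.0.1:11435 ollama serve

# In another terminal, point the CLI client at the same port
OLLAMA_HOST=127.0.0.1:11435 ollama run llama3.1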


@mxyng commented on GitHub (Jan 25, 2024):

What platform are you on? If it's macOS and you're using the Mac app, the app starts an instance of ollama on the default port. This means you don't need to run `ollama serve`. If you need to configure ollama for some reason, the FAQ has a few pointers on how to do that for macOS.
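If you do need to change the port for the Mac app itself, a commonly suggested approach (a sketch; the port is an example) is to set the variable with `launchctl`, since the GUI app does not read shell exports, and then restart the app:

# Set the variable for GUI applications
launchctl setenv OLLAMA_HOST "127.0.0.1:11435"

# Quit and reopen the Ollama app so it picks up the new value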


@JodyWi commented on GitHub (Feb 8, 2024):

> Hey @CHesketh76, this is covered in the [FAQ](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-expose-ollama-on-my-network), but the way to do it is with the `OLLAMA_HOST` env variable. You can use something like `OLLAMA_HOST=127.0.0.1:11435 ollama serve` to start ollama serving on port 11435.

`OLLAMA_HOST=127.0.0.1:11435 ollama serve` works. Thanks @pdevine!


@schnow265 commented on GitHub (Apr 2, 2024):

I found out why. If you are on Linux, are having this issue when installing bare metal (using the command on the website), and you use systemd (systemctl), ollama installs itself as a systemd service. You can run `sudo systemctl status ollama.service` to verify this. It also means that you don't need to run `serve` yourself.
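If you would rather run `ollama serve` yourself (or just need the port back), the standard systemd commands apply; a short sketch:

# Confirm the installer registered Ollama as a service
sudo systemctl status ollama.service

# Stop it now, and optionally keep it from starting at boot
sudo systemctl stop ollama.service
sudo systemctl disable ollama.service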


@YokeYao commented on GitHub (Apr 16, 2024):

> I found out why. If you are on Linux, are having this issue when installing bare metal (using the command on the website), and you use systemd (systemctl), ollama installs itself as a systemd service. You can run `sudo systemctl status ollama.service` to verify this. It also means that you don't need to run `serve` yourself.

It is true. It is running itself?!


@michelle-chou25 commented on GitHub (May 9, 2024):

No, if I don't run `ollama serve` on Linux, I can't run a model with `ollama run <model>`; the error message is "Can't connect to Ollama App, Is it running?"


@michelle-chou25 commented on GitHub (May 9, 2024):

And I changed the config file of Ollama, adding `Environment="OLLAMA_HOST=0.0.0.0:80"`, but it still showed that the listening address is 11434 when I run a model.


@hdnh2006 commented on GitHub (May 27, 2024):

> And I changed the config file of Ollama, adding `Environment="OLLAMA_HOST=0.0.0.0:80"`, but it still showed that the listening address is 11434 when I run a model.

Maybe you forgot to save your config file? On Linux (Ubuntu), if you run `sudo nano /etc/systemd/system/ollama.service` you can modify it like this:

[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
Environment="OLLAMA_HOST=0.0.0.0:5050"
Environment="OLLAMA_DEBUG=1"

[Install]
WantedBy=default.target

@ichux commented on GitHub (Jun 19, 2024):

> > And I changed the config file of Ollama, adding `Environment="OLLAMA_HOST=0.0.0.0:80"`, but it still showed that the listening address is 11434 when I run a model.
>
> Maybe you forgot to save your config file? On Linux (Ubuntu), if you run `sudo nano /etc/systemd/system/ollama.service` you can modify it like this:
>
> [Unit]
> Description=Ollama Service
> After=network-online.target
>
> [Service]
> ExecStart=/usr/local/bin/ollama serve
> User=ollama
> Group=ollama
> Restart=always
> RestartSec=3
> Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
> Environment="OLLAMA_HOST=0.0.0.0:5050"
> Environment="OLLAMA_DEBUG=1"
>
> [Install]
> WantedBy=default.target

It's cool to do this after the above:

sudo systemctl daemon-reload; sudo service ollama restart
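After the reload and restart, it is worth confirming that the server is actually bound to the new port; a quick check against the 0.0.0.0:5050 example above:

# The root endpoint replies with "Ollama is running"
curl http://localhost:5050/

# Or inspect the listening socket directly
ss -ltnp | grep 5050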


@jokerssssss9999 commented on GitHub (Jul 2, 2024):

How do I change this on Windows?

C:\Users\Administrator>OLLAMA_HOST=127.0.0.1:11435 ollama serve
'OLLAMA_HOST' is not recognized as an internal or external command, operable program or batch file.


@benjaminchensz commented on GitHub (Jul 6, 2024):

> How do I change this on Windows?
>
> C:\Users\Administrator>OLLAMA_HOST=127.0.0.1:11435 ollama serve
> 'OLLAMA_HOST' is not recognized as an internal or external command, operable program or batch file.

Write this into the Windows environment variable settings.


@hemangjoshi37a commented on GitHub (Jul 15, 2024):

Is it possible to provide the port with the `ollama serve` command?


@pdevine commented on GitHub (Jul 15, 2024):

@hemangjoshi37a yes, use the `OLLAMA_HOST` env variable w/ `ollama serve`. This is covered in the [FAQ](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-expose-ollama-on-my-network).


@dyh2024 commented on GitHub (Jul 16, 2024):

> No, if I don't run `ollama serve` on Linux, I can't run a model with `ollama run <model>`; the error message is "Can't connect to Ollama App, Is it running?"

The same problem!


@pdevine commented on GitHub (Jul 17, 2024):

@dyh2024 You need to also tell `ollama run` the correct port to connect to using `OLLAMA_HOST`. You can use `OLLAMA_HOST=localhost:<port> ollama run <model>`.


@dyh2024 commented on GitHub (Jul 17, 2024):

@pdevine I changed it to OLLAMA_HOST=0.0.0.0:6006. Before `ollama run <model>`, I had done `export OLLAMA_HOST=0.0.0.0:6006`, but I still have the problem. Maybe it must be set to localhost, not 0.0.0.0, before `ollama run <model>`?


@pdevine commented on GitHub (Jul 17, 2024):

@dyh2024 use `OLLAMA_HOST=localhost:6006 ollama run <model>` to run a model. Use `OLLAMA_HOST=0.0.0.0:6006 ollama serve` to start the ollama server.


@dyh2024 commented on GitHub (Jul 17, 2024):

@pdevine Thank you!


@njtan142 commented on GitHub (Aug 18, 2024):

For those who installed this on [WSL](https://ubuntu.com/desktop/wsl):

If you did:

  • install using curl
  • run `ollama run llama3.1`
  • then exited using `Ctrl+d` after the model is downloaded

Then there's a good chance that the service is still running. Check your browser (or terminal) with the default URL: http://127.0.0.1:11434/

It will respond with `Ollama is running`.
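If you want the port back, how to stop that lingering instance depends on how it was started; a rough sketch for a curl-script install (the systemd path assumes systemd is enabled in your WSL distro):

# If it was registered as a systemd service
sudo systemctl stop ollama

# Otherwise, stop the stray server process directly
pkill -f "ollama serve"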


@sanketss84 commented on GitHub (Oct 7, 2024):

> Environment="OLLAMA_HOST=0.0.0.0:5050"

This line right here saved me. Port 11434 was just not working no matter what I did.
Thanks!


@hdnh2006 commented on GitHub (Oct 8, 2024):

> > Environment="OLLAMA_HOST=0.0.0.0:5050"
>
> This line right here saved me. Port 11434 was just not working no matter what I did. Thanks!

More than welcome! I'm happy my suggestion helped you!


@522315428 commented on GitHub (Oct 29, 2024):

How do I modify the corresponding config file in Docker?
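For the Docker image, the usual approach is to change the published host port rather than edit a config file inside the container; a minimal sketch, assuming the official `ollama/ollama` image (which listens on 11434 inside the container):

# Expose the container's 11434 on host port 11435
docker run -d -p 11435:11434 -v ollama:/root/.ollama --name ollama ollama/ollama

# Or change the in-container port too, via the same OLLAMA_HOST variable
docker run -d -e OLLAMA_HOST=0.0.0.0:5050 -p 5050:5050 -v ollama:/root/.ollama --name ollama ollama/ollama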


@Arshil-Akkala commented on GitHub (Jan 24, 2025):

> Environment="OLLAMA_HOST=0.0.0.0:5050"

Where do I put this?


@hdnh2006 commented on GitHub (Jan 25, 2025):

> > Environment="OLLAMA_HOST=0.0.0.0:5050"
>
> Where do I put this?

Please read my comment if you use Linux; I don't know for other OSes: https://github.com/ollama/ollama/issues/2194#issuecomment-2133514433


@jaindinkar commented on GitHub (Feb 3, 2025):

Select an unused port and update the Ollama service config, do daemon-reload, and restart the service.
Config example:

### Anything between here and the comment below will become the new contents of the file
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_DEBUG=1"
### Lines below this comment will be discarded
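That template looks like the one `systemctl edit` presents, so the whole sequence is roughly (a sketch; substitute whatever port you picked):

# Opens an override file for the ollama unit, pre-filled with the template above
sudo systemctl edit ollama.service

# Apply the change
sudo systemctl daemon-reload
sudo systemctl restart ollama.service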

@OhMy-Git commented on GitHub (Feb 8, 2025):

> How do I change this on Windows?
>
> C:\Users\Administrator>OLLAMA_HOST=127.0.0.1:11435 ollama serve
> 'OLLAMA_HOST' is not recognized as an internal or external command, operable program or batch file.

You need to add `set` in front of it.

Image
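Spelled out for cmd.exe (a sketch: `set` only affects the current terminal, while `setx` persists the value for terminals opened later):

REM Current session only
set OLLAMA_HOST=127.0.0.1:11435
ollama serve

REM Persist for future sessions (does not affect already-open windows)
setx OLLAMA_HOST 127.0.0.1:11435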


@mehlkopf commented on GitHub (Mar 12, 2025):

@mxyng Have you found a solution for the Mac app? Thank you!


@jokerssssss9999 commented on GitHub (Apr 10, 2025):

> > How do I change this on Windows?
> >
> > C:\Users\Administrator>OLLAMA_HOST=127.0.0.1:11435 ollama serve
> > 'OLLAMA_HOST' is not recognized as an internal or external command, operable program or batch file.
>
> You need to add `set` in front of it.
>
> Image

I've got it working now, thanks!


@bmeyer99 commented on GitHub (Jul 11, 2025):

I got sick of waiting for this, so I made my own with the help of Claude. Extremely simple concept: just answer the request on ingest, proxy to another port where Ollama is running, and capture the metrics along the way. Your clients have no idea the proxy is there; it runs on 11434 and moves Ollama to 11435. Includes /metrics for Prometheus and /analytics for detailed per-request analytics. Please check it out here: https://github.com/bmeyer99/Ollama_Proxy_Wrapper
Very simple and straightforward: you just clone it, run 3 commands (dependencies, prep_install, run it). I need to make it into a service tomorrow. Please enjoy.


@erturkkadir commented on GitHub (Dec 21, 2025):

In my case I forgot to update my bash config, so each time I was getting "server not started". Here is the fix:

nano ~/.bashrc 
export OLLAMA_HOST=0.0.0.0:1234
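After editing `~/.bashrc`, the change only applies to new shells unless you re-source it; a quick sketch:

# Reload the file in the current shell and confirm the value
source ~/.bashrc
echo $OLLAMA_HOST

# Then start the server on the configured address
ollama serve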