Ollama on Windows occupied all available ports when downloading #4416

Closed
opened 2025-11-12 12:18:59 -06:00 by GiteaMirror · 7 comments

Originally created by @TheStarAlight on GitHub (Sep 26, 2024).

What is the issue?

When I try to download a model with Ollama for Windows, after a while my browsers cannot reach any other website and show "connection refused". The download also fails: after the first part of the model finishes, the next part cannot start and reports an error.
The log `~/AppData/Local/Ollama/server.log` shows `Error: listen tcp 127.0.0.1:11434: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.`
I also ran `netstat`, which shows that ports all the way up to 65535 are occupied by ollama.
Is there any way to solve it?
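
For reference, here is one way to confirm from PowerShell that it is the ollama processes holding the ports (a minimal diagnostic sketch; it assumes the default `ollama*` process name on Windows):

```powershell
# Find ollama's process IDs and list a sample of the TCP entries they own.
$ollamaPids = (Get-Process -Name "ollama*" -ErrorAction SilentlyContinue).Id
Get-NetTCPConnection -ErrorAction SilentlyContinue |
    Where-Object { $ollamaPids -contains $_.OwningProcess } |
    Select-Object -First 10 -Property LocalAddress, LocalPort, RemoteAddress, RemotePort, State, OwningProcess
```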

OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

Win 0.3.12

GiteaMirror added the bug, windows, networking labels 2025-11-12 12:18:59 -06:00

@dhiltgen commented on GitHub (Sep 26, 2024):

@TheStarAlight are you by any chance on a slow or flaky network where a lot of retries are happening? Perhaps we have an FD leak or delayed close in the retry logic and aren't cleaning up as we go.
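
If the retry path is leaking sockets, the number of TCP entries owned by ollama should climb steadily while a pull runs on a poor connection. A rough watch loop to check that (a sketch for verification, not an official diagnostic):

```powershell
# Every 5 seconds, print how many TCP entries the ollama processes own.
# A count that keeps growing toward the ~65535 port limit during a download
# matches the leak hypothesised above.
while ($true) {
    $pids = (Get-Process -Name "ollama*" -ErrorAction SilentlyContinue).Id
    $count = (Get-NetTCPConnection -ErrorAction SilentlyContinue |
              Where-Object { $pids -contains $_.OwningProcess }).Count
    "{0:HH:mm:ss}  {1} sockets held by ollama" -f (Get-Date), $count
    Start-Sleep -Seconds 5
}
```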


@TheStarAlight commented on GitHub (Sep 27, 2024):

> @TheStarAlight are you by any chance on a slow or flaky network where a lot of retries are happening? Perhaps we have an FD leak or delayed close in the retry logic and aren't cleaning up as we go.

Yes, you are right; the network is not stable in China. Thank you for your attention! :D


@dhiltgen commented on GitHub (Sep 30, 2024):

@TheStarAlight the fix will be in the next version (0.3.13) when that ships.


@SantaLaMuerte commented on GitHub (Oct 23, 2024):

> @TheStarAlight the fix will be in the next version (0.3.13) when that ships.

Maybe add a `--port` flag to set a custom port for the web UI, since port 11434 is used by the ollama process itself?

Also, when you are under a VPN it might cause some misconfigurations with the localhost address, but that's another story.
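
For what it's worth, the server's own port can already be moved off 11434 with the `OLLAMA_HOST` environment variable (this changes where ollama listens; it is not a web-UI flag). A sketch for PowerShell, with an arbitrary example port:

```powershell
# Run the ollama server on a non-default address/port (11500 is just an example).
$env:OLLAMA_HOST = "127.0.0.1:11500"
ollama serve

# In another shell, set the same OLLAMA_HOST so the CLI talks to that port.
$env:OLLAMA_HOST = "127.0.0.1:11500"
ollama list
```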


@SantaLaMuerte commented on GitHub (Oct 23, 2024):

https://github.com/ollama/ollama/issues/5816
https://github.com/ollama/ollama/issues/3774

`Get-Process ollama* | Stop-Process -Force; ollama serve`
is the solution found in one of those issues.

Is it possible to build in an open UI (with WSL support or even just Python packages)? Kind of plug and play: it would be very helpful to just install it, open the UI, and start chatting with files, the internet, and much more. Thanks for your product and work!

(The easiest setup for me was to install ollama, install the web UI via Python, and just use it. Thanks again.)


@TheStarAlight commented on GitHub (Oct 24, 2024):

> #5816 #3774
>
> `Get-Process ollama* | Stop-Process -Force; ollama serve` is the solution found in one of those issues.
>
> Is it possible to build in an open UI (with WSL support or even just Python packages)? Kind of plug and play: it would be very helpful to just install it, open the UI, and start chatting with files, the internet, and much more. Thanks for your product and work!
>
> (The easiest setup for me was to install ollama, install the web UI via Python, and just use it. Thanks again.)

I think you should raise this question in [open-webui](https://github.com/open-webui/open-webui). I'm also looking forward to an installable version. Using Docker is difficult for a newcomer like me, and I often hit problems where the web UI cannot start.


@SantaLaMuerte commented on GitHub (Oct 24, 2024):

> > #5816 #3774
> >
> > `Get-Process ollama* | Stop-Process -Force; ollama serve` is the solution found in one of those issues.
> >
> > Is it possible to build in an open UI (with WSL support or even just Python packages)? Kind of plug and play: it would be very helpful to just install it, open the UI, and start chatting with files, the internet, and much more. Thanks for your product and work!
> >
> > (The easiest setup for me was to install ollama, install the web UI via Python, and just use it. Thanks again.)
>
> I think you should raise this question in [open-webui](https://github.com/open-webui/open-webui). I'm also looking forward to an installable version. Using Docker is difficult for a newcomer like me, and I often hit problems where the web UI cannot start.

Anyway, I face the problem that my AMD 6700 XT GPU is not found, even with the latest AMD HIP drivers (kind of ROCm for Windows); it shows NOT DETECTED, so a plain Windows installation is useless for me. I do see some GPU usage, of course, but only a little.

So the working way, I think, is still to install it on WSL with real ROCm support, or maybe Docker (I don't like containers either).
I am a one-click guy, a script kiddie, and can't even imagine what hell ordinary people will go through to make it all work.
So Claude or ChatGPT is still the best way ($20 is $20, of course).

Update: I found https://medium.com/@danies.pahlevi/how-to-use-ollama-with-open-webui-with-rx-6700-xt-12gb-backend-in-windows-5aca4546e23f and the ollama-for-amd repo, but that's crutches (workarounds) again.

Reference: github-starred/ollama-ollama#4416