[GH-ISSUE #8385] Cannot list or install models: Connection refused error on Windows 10 #5382

Closed
opened 2026-04-12 16:35:52 -05:00 by GiteaMirror · 15 comments

Originally created by @inspector3535 on GitHub (Jan 11, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8385

What is the issue?

Greetings,
I'm unable to interact with Ollama on Windows 10. I cannot list models, nor can I install any of them.
Whatever command I try, I receive the following error; even when I type ollama list, the same error appears.
I initially suspected that my server configuration might be causing the issue, but I tried it on different Windows computers, and the problem persists.
I've even tried different installation methods (direct installation, installing via Scoop, Winget, etc.), but the result is always the same.

Head "http://127.0.0.1:11434/": dial tcp 127.0.0.1:11434: connectex: The connection could not be established because the target machine actively rejected it.```
Note: Windows Defender and the firewall are disabled, and the only error I can find in the log is:
"no compatible GPUs were discovered".
The hardware of my server should be sufficient to run at least small models like LLaMA 2-7B or GPT Small.
Additionally, I can successfully run the Ollama server using ollama serve; I can even see the server running in the browser, but I am unable to display, download, or install models.
Thank you in advance for your time and assistance. I would appreciate any guidance on how to resolve this issue.
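For reference, a quick way to tell whether anything is actually listening on Ollama's default port at the moment this error appears is to check from a Command Prompt (11434 and the /api/version path below are Ollama's defaults; adjust them if OLLAMA_HOST is set):

```
REM Show whether any process is bound to Ollama's default port
netstat -ano | findstr :11434

REM Ask the server for its version; a JSON reply means it is reachable
curl http://127.0.0.1:11434/api/version
```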

### OS

Windows

### GPU

Other

### CPU

Intel

### Ollama version

0.5.4
GiteaMirror added the bug label 2026-04-12 16:35:52 -05:00

@rick-github commented on GitHub (Jan 11, 2025):

What does the following display:

```
curl http://localhost:11434/api/version
```

@inspector3535 commented on GitHub (Jan 11, 2025):

{"version":"0.5.4"}


@rick-github commented on GitHub (Jan 11, 2025):

OK, so there are no problems connecting to the server via localhost. What's the result of

```
curl http://127.0.0.1:11434/api/version
```

@inspector3535 commented on GitHub (Jan 11, 2025):

This command:
curl http://127.0.0.1:11434/api/version
displays the Ollama version in the terminal while the server is running:
{"version":"0.5.4"}


@rick-github commented on GitHub (Jan 11, 2025):

Ok, wasn't expecting that. What's the output of

```
ollama -v
```

@inspector3535 commented on GitHub (Jan 11, 2025):

ollama version is 0.5.4


@rick-github commented on GitHub (Jan 11, 2025):

Now I'm confused. The client apparently has no problems connecting to the server. What's the output of

```
curl locahost:11434 http://localhost:11434/api/tags
ollama ps
```

@inspector3535 commented on GitHub (Jan 11, 2025):

Microsoft Windows [Version 10.0.19045.5247]
(c) Microsoft Corporation. All rights reserved.

C:\Users\Administrator>curl locahost:11434 http://localhost:11434/api/tags
curl: (6) Could not resolve host: locahost
{"models":[]}
C:\Users\Administrator>ollama ps
NAME ID SIZE PROCESSOR UNTIL

C:\Users\Administrator>


@rick-github commented on GitHub (Jan 11, 2025):

So this looks like it's working: the "Could not resolve host: locahost" line is just curl complaining about the typo in the first argument, and {"models":[]} means the server answered but no models have been pulled yet. What's the output of

```
ollama list
```

@inspector3535 commented on GitHub (Jan 11, 2025):

C:\Users\Administrator>ollama list
NAME ID SIZE MODIFIED

C:\Users\Administrator>


@rick-github commented on GitHub (Jan 11, 2025):

It's working.


@inspector3535 commented on GitHub (Jan 11, 2025):

Well, this is very weird. But before now, the error in the subject was occurring even when I tried to list models.


@rick-github commented on GitHub (Jan 11, 2025):

Your hardware doesn't have a GPU, so you won't get fast inference. Depending on your requirements, I can recommend the [qwen2.5](https://ollama.com/library/qwen2.5) family: it has small but capable models.
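Pulling one of them and trying it out takes a couple of commands in a second terminal (the 3b tag below is just one of the sizes on the library page; pick whichever fits your RAM):

```
REM Download a small qwen2.5 model (a one-time download of a few GB)
ollama pull qwen2.5:3b

REM Run a one-off prompt; omit the quoted text for an interactive chat
ollama run qwen2.5:3b "Say hello in one sentence."
```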


@pdevine commented on GitHub (Jan 13, 2025):

I'd also recommend llama3.2; both the 1B and 3B models should work fine on your system.


@inspector3535 commented on GitHub (Jan 13, 2025):

Hello everyone,
Thanks for your valuable comments; I finally figured out how to use it.
I didn't know that I needed to run the Ollama server in one command line and then open another command line to interact with Ollama (pull models, run, rm).
Thanks again for the model recommendations; since my server has no GPU, I'll try the ones suggested above.
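For reference, that two-terminal workflow looks something like this (the llama3.2:3b tag is just one of the small models suggested above; any of them works the same way):

```
REM Terminal 1: start the server and leave it running
ollama serve

REM Terminal 2: every other command talks to that server
ollama pull llama3.2:3b
ollama list
ollama run llama3.2:3b "Why is the sky blue?"
```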


Reference: github-starred/ollama#5382