[GH-ISSUE #8852] Ollama "No connection could be made because the target machine actively refused it." error only without GPU #52250

Closed
opened 2026-04-28 22:39:23 -05:00 by GiteaMirror · 12 comments
Owner

Originally created by @Ltamann on GitHub (Feb 5, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8852

What is the issue?

Tested on Windows with Ollama versions 0.55, 0.56, and 0.57:

  1. Configure NVIDIA Control Panel:

    • On Windows 11, open the NVIDIA Control Panel.
    • Navigate to "CUDA - GPUs" and set it to None (do not select any GPU).
    • Click Apply to save changes.
  2. Start the Ollama Server:

    • Open Command Prompt and run:
      ollama serve
    • Output:
      time= level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered"
      time= level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="255.9 GiB" available="243.0 GiB"
  3. Run a Model:

    • In Command Prompt, run any Ollama model:
      ollama run [model-name]
    • Error encountered:
      Error: Post "http://0.0.0.0:11435/api/show": dial tcp 0.0.0.0:11435: connectex: No connection could be made because the target machine actively refused it.

Observation:

  • Using a GPU, the inference works correctly without issues:

    time= level=INFO source=types.go:131 msg="inference compute" id=GPU-.. library=cuda variant=v12 compute=8.6 driver=12.7 name="NVIDIA GeForce RTX 3090" total="24.0 GiB" available="22.8 GiB"

  • Without the GPU, the Ollama server shuts down after about a minute, during model downloads, or while chatting, always with the same message: No connection could be made because the target machine actively refused it. (A quick check for whether anything is still listening is sketched below.)
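
A quick way to tell whether the server process is still listening (as opposed to the client pointing at the wrong port) is to query the version endpoint; the address below assumes the default OLLAMA_HOST of 127.0.0.1:11434, so adjust it if you moved the port:

    curl http://127.0.0.1:11434/api/version

If that fails with "connection refused", nothing is listening and the server has exited; app.log and server.log should then show why.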

Question:
What is a reliable way to run Ollama only using CPU and RAM instead of a GPU?

OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.55, 0.56, 0.57

GiteaMirror added the bug label 2026-04-28 22:39:23 -05:00
Author
Owner

@rick-github commented on GitHub (Feb 5, 2025):

Without GPU, the Ollama server shuts down after starting and attempting to chat using downloaded models.

Does it actually shut down? What do the server logs show?
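
(For reference, on a Windows install the logs referenced by that troubleshooting guide typically live under %LOCALAPPDATA%\Ollama; from Command Prompt, something like

    explorer %LOCALAPPDATA%\Ollama
    type "%LOCALAPPDATA%\Ollama\server.log"

opens the folder and prints the current server log, assuming a default per-user install.)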

Author
Owner

@rick-github commented on GitHub (Feb 5, 2025):

And why port 11435?
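
(The ollama CLI reads OLLAMA_HOST to decide where to connect, so a non-default port has to be set in both the window running the server and the window running the client, roughly:

    rem window 1
    set OLLAMA_HOST=127.0.0.1:11435
    ollama serve

    rem window 2
    set OLLAMA_HOST=127.0.0.1:11435
    ollama run [model-name]

If the variable is set only on the client side, the client dials a port nothing is listening on, which is exactly what the "actively refused" error means.)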

Author
Owner

@rick-github commented on GitHub (Feb 5, 2025):

Excuse the follow-on posting.

What is a reliable way to run Ollama only using CPU and RAM instead of a GPU?

https://github.com/ollama/ollama/issues/6950#issuecomment-2373663650
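
(A rough sketch of the environment-variable route, using variables that also appear in the server config dump later in this thread; exact behavior differs between Ollama versions, so treat this as a starting point rather than the definitive method:

    rem hide NVIDIA GPUs from the server before starting it
    set CUDA_VISIBLE_DEVICES=-1
    rem or, on these 0.5.x-era builds, pin a CPU runner explicitly
    set OLLAMA_LLM_LIBRARY=cpu_avx2
    ollama serve

Either variable needs to be set in the environment of the ollama serve process, not the client.)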

Author
Owner

@Ltamann commented on GitHub (Feb 5, 2025):

I switched ports to check if the issue was port-related, but it's not. The Ollama server shuts down after about a minute, even if I don't interact with it.

It's easy to replicate — just disable the GPU for Ollama, and you'll see the issue while downloading a model.

The app.log shows a crash:
time=2025-02-05T16:37:08.749+01:00 level=WARN source=server.go:163 msg="server crash 1 - exit code 3221225477 - respawning"

Meanwhile, the server.log doesn't display any error messages.
2025/02/05 16:37:09 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\YLAB-Partner\.ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-02-05T16:37:09.319+01:00 level=INFO source=images.go:432 msg="total blobs: 0"
time=2025-02-05T16:37:09.319+01:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-05T16:37:09.319+01:00 level=INFO source=routes.go:1238 msg="Listening on [::]:11434 (version 0.5.5)"
time=2025-02-05T16:37:09.320+01:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx rocm_avx]"
time=2025-02-05T16:37:09.320+01:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-02-05T16:37:09.320+01:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-02-05T16:37:09.320+01:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=64 efficiency=0 threads=128
time=2025-02-05T16:37:09.362+01:00 level=INFO source=gpu.go:620 msg="no nvidia devices detected by library C:\Windows\system32\nvcuda.dll"
time=2025-02-05T16:37:09.496+01:00 level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered"
time=2025-02-05T16:37:09.497+01:00 level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="255.9 GiB" available="242.5 GiB"

Author
Owner

@rick-github commented on GitHub (Feb 5, 2025):

Try setting OLLAMA_DEBUG=1 in the server environment.
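
(On Windows this can be done in the same window that launches the server, e.g. in Command Prompt:

    set OLLAMA_DEBUG=1
    ollama serve

or in PowerShell with $env:OLLAMA_DEBUG="1" before ollama serve; the extra detail then shows up in the server output.)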

Author
Owner

@sabbirsam commented on GitHub (Feb 5, 2025):

$ ollama run llama3.2-vision
pulling manifest
pulling manifest
pulling 11f274007f09... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 6.0 GB
pulling ece5e659647a... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 1.9 GB
pulling 715415638c9c... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 269 B
pulling 0b4284c1f870... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 7.7 KB
Error: Post "http://127.0.0.1:11434/api/show": read tcp 127.0.0.1:14014->127.0.0.1:11434: wsarecv: An existing connection was forcibly closed by the remote host.

After the pull completes, it shows "forcibly closed by the remote host".

Author
Owner

@Ltamann commented on GitHub (Feb 5, 2025):

Ollama debug: the last message before the server shuts down is:

panic: runtime error: index out of range [0] with length 0

goroutine 83 [running]:
github.com/ollama/ollama/server.(*blobDownload).Prepare(0xc00033caf0, {0x7ff7b108ba80, 0xc0005a2be0}, 0xc000628480, 0xc0006139c0)
github.com/ollama/ollama/server/download.go:175 +0x539
github.com/ollama/ollama/server.downloadBlob({0x7ff7b108ba80, 0xc0005a2be0}, {{{0x7ff7b0ecb470, 0x5}, {0x7ff7b0ee02a7, 0x12}, {0x7ff7b0ed3ec3, 0x7}, {0xc000588d90, 0x7}, ...}, ...})
github.com/ollama/ollama/server/download.go:489 +0x4da
github.com/ollama/ollama/server.PullModel({0x7ff7b108ba80, 0xc0005a2be0}, {0xc000588d90, 0xe}, 0xc0006139c0, 0xc000048ee0)
github.com/ollama/ollama/server/images.go:564 +0x771
github.com/ollama/ollama/server.(*Server).PullHandler.func1()
github.com/ollama/ollama/server/routes.go:594 +0x197
created by github.com/ollama/ollama/server.(*Server).PullHandler in goroutine 49
github.com/ollama/ollama/server/routes.go:581 +0x691

Author
Owner

@rick-github commented on GitHub (Feb 5, 2025):

https://github.com/ollama/ollama/issues/8784

Author
Owner

@Ltamann commented on GitHub (Feb 5, 2025):

#8784

It’s only an issue when the GPU is disabled—everything works perfectly when the GPU is enabled, so I don’t think it’s the same problem. Also, the server doesn’t just shut down during downloads; it can shut down even when idle after waiting for a while.

Author
Owner

@rick-github commented on GitHub (Feb 5, 2025):

panic: runtime error: index out of range [0] with length 0

This error, at least, is #8784. If you have other logs, that may shed more light.

Author
Owner

@rick-github commented on GitHub (Mar 26, 2025):

The Ollama server shuts down after about a minute, even if I don't interact with it.
it's easy to replicate — just disable the GPU for Ollama, and you'll see the issue downloading a model.

Likely this: #9836

Author
Owner

@dirkbrnd commented on GitHub (May 2, 2025):

This should fix it!
https://github.com/agno-agi/agno/pull/3057

I will release ASAP.


Reference: github-starred/ollama#52250