[GH-ISSUE #7697] ollama is not working, Error: could not connect to ollama app, is it running? #4914

Closed
opened 2026-04-12 15:58:09 -05:00 by GiteaMirror · 25 comments

Originally created by @gokulcoder7 on GitHub (Nov 16, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7697

What is the issue?

C:\Windows\System32>ollama list
Error: could not connect to ollama app, is it running?

C:\Windows\System32>
C:\Windows\System32>ollama --version
Warning: could not connect to a running Ollama instance
Warning: client version is 0.4.2

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

No response
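
A quick preliminary check, as a sketch assuming the default local address 127.0.0.1:11434, is to see whether anything is listening on the port and whether the API answers at all:

# Is anything listening on the default Ollama port?
Get-NetTCPConnection -LocalPort 11434 -State Listen -ErrorAction SilentlyContinue

# Does the API answer? Prints the server version if it does.
Invoke-RestMethod http://127.0.0.1:11434/api/version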

GiteaMirror added the bug label 2026-04-12 15:58:09 -05:00

@gokulcoder7 commented on GitHub (Nov 16, 2024):

2024/11/16 07:51:59 routes.go:1189: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:F:\ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-11-16T07:51:59.100+05:30 level=INFO source=images.go:755 msg="total blobs: 10"
time=2024-11-16T07:51:59.100+05:30 level=INFO source=images.go:762 msg="total unused blobs removed: 0"
time=2024-11-16T07:51:59.100+05:30 level=INFO source=routes.go:1240 msg="Listening on 127.0.0.1:11434 (version 0.4.2)"
time=2024-11-16T07:51:59.101+05:30 level=ERROR source=common.go:279 msg="empty runner dir"
time=2024-11-16T07:51:59.101+05:30 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners=[]
Error: unable to initialize llm runners unable to locate runners in any search path [C:\Users\Sushant\AppData\Local\Programs\Ollama C:\Users\Sushant\AppData\Local\Programs\Ollama\windows-amd64 C:\Users\Sushant\AppData\Local\Programs\Ollama\dist\windows-amd64 C:\Users\Sushant\AppData\Local\Programs\Ollama C:\Users\Sushant\AppData\Local\Programs\Ollama\windows-amd64 C:\Users\Sushant\AppData\Local\Programs\Ollama\dist\windows-amd64 C:\Windows\System32 C:\Windows\System32\windows-amd64 C:\Windows\System32\dist\windows-amd64]

above is server.log
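
The error lists the directories the server searched for runners. A quick sketch (using $env:LOCALAPPDATA in place of the literal C:\Users\Sushant\... prefix) to see which of those directories exist and whether they contain anything:

# Paths taken from the "unable to locate runners" error above
$paths = @(
    "$env:LOCALAPPDATA\Programs\Ollama",
    "$env:LOCALAPPDATA\Programs\Ollama\windows-amd64",
    "$env:LOCALAPPDATA\Programs\Ollama\dist\windows-amd64"
)
foreach ($p in $paths) {
    if (Test-Path $p) {
        # Count the files under the directory to see whether the runners were installed
        "{0}: {1} files" -f $p, (Get-ChildItem $p -Recurse -File | Measure-Object).Count
    } else {
        "$p: missing"
    }
}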


@gokulcoder7 commented on GitHub (Nov 16, 2024):

This error began after an Ollama update; the app stopped suddenly.


@gokulcoder7 commented on GitHub (Nov 16, 2024):

[server.log](https://github.com/user-attachments/files/17783474/server.log)
[server-1.log](https://github.com/user-attachments/files/17783477/server-1.log)


@gokulcoder7 commented on GitHub (Nov 16, 2024):

[server-3.log](https://github.com/user-attachments/files/17783479/server-3.log)
[server-3.log](https://github.com/user-attachments/files/17783480/server-3.log)
[server-5.log](https://github.com/user-attachments/files/17783481/server-5.log)
[server-5.log](https://github.com/user-attachments/files/17783482/server-5.log)


@gokulcoder7 commented on GitHub (Nov 16, 2024):

[server-2.log](https://github.com/user-attachments/files/17783484/server-2.log)
[server-3.log](https://github.com/user-attachments/files/17783485/server-3.log)
[server-4.log](https://github.com/user-attachments/files/17783486/server-4.log)
[server-5.log](https://github.com/user-attachments/files/17783487/server-5.log)
[server.log](https://github.com/user-attachments/files/17783488/server.log)
[server-1.log](https://github.com/user-attachments/files/17783489/server-1.log)


@gokulcoder7 commented on GitHub (Nov 16, 2024):

![image](https://github.com/user-attachments/assets/707e7559-8004-43aa-8d8f-a640614e5010)


@gokulcoder7 commented on GitHub (Nov 16, 2024):

@rick-github please help me


@rick-github commented on GitHub (Nov 16, 2024):

It seems that upgrading from an old version of ollama to 0.4.2 fails - my virtual windows box was 0.3.14, it got upgraded to 0.4.2 and failed in the same way as your logs, missing runners. It was fixed by just installing ollama again from the front page: https://ollama.com/download/windows. I wasn't able to repeat this - I deleted 0.3.14 via Windows uninstall, installed 0.3.14 and upgraded to 0.4.2 via ollama and this time it worked, so I don't know why it failed the first time. I recommend just re-installing.
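
If it helps, a minimal sketch of that re-install from PowerShell (assuming the installer is still published at https://ollama.com/download/OllamaSetup.exe; run interactively, no silent switches):

# Download the current Windows installer and run it
Invoke-WebRequest -Uri "https://ollama.com/download/OllamaSetup.exe" -OutFile "$env:TEMP\OllamaSetup.exe"
Start-Process "$env:TEMP\OllamaSetup.exe" -Wait

# Afterwards, confirm the client can reach the server again
ollama list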


@zhoufy20 commented on GitHub (Nov 17, 2024):

> It seems that upgrading from an old version of ollama to 0.4.2 fails - my virtual windows box was 0.3.14, it got upgraded to 0.4.2 and failed in the same way as your logs, missing runners. It was fixed by just installing ollama again from the front page: https://ollama.com/download/windows. I wasn't able to repeat this - I deleted 0.3.14 via Windows uninstall, installed 0.3.14 and upgraded to 0.4.2 via ollama and this time it worked, so I don't know why it failed the first time. I recommend just re-installing.

How can I solve this? Thanks a lot, @rick-github.

PS C:\Users\Feiyu> ollama -v
Warning: could not connect to a running Ollama instance
Warning: client version is 0.4.2
PS C:\Users\Feiyu> ollama run qwen2.5-coder:7b
Error: could not connect to ollama app, is it running?

@rick-github commented on GitHub (Nov 17, 2024):

Reinstall ollama.


@zhoufy20 commented on GitHub (Nov 17, 2024):

> Reinstall ollama.

PS C:\Users\Feiyu> ollama serve
2024/11/17 22:31:14 routes.go:1189: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY:http://127.0.0.1:1080 HTTP_PROXY:http://127.0.0.1:1080 NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:0.0.0.0 OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-11-17T22:31:14.524+08:00 level=INFO source=images.go:755 msg="total blobs: 0"
time=2024-11-17T22:31:14.525+08:00 level=INFO source=images.go:762 msg="total unused blobs removed: 0"
time=2024-11-17T22:31:14.525+08:00 level=INFO source=routes.go:1240 msg="Listening on [::]:11434 (version 0.4.2)"
time=2024-11-17T22:31:14.526+08:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cuda_v11 cuda_v12 rocm cpu cpu_avx cpu_avx2]"
time=2024-11-17T22:31:14.526+08:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-11-17T22:31:14.526+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2024-11-17T22:31:14.526+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=18 efficiency=0 threads=36
time=2024-11-17T22:31:14.934+08:00 level=INFO source=gpu.go:328 msg="detected OS VRAM overhead" id=GPU-7ad40336-8f4e-040f-a38f-8fd70facaffa library=cuda compute=8.6 driver=12.7 name="NVIDIA GeForce RTX 3090" overhead="1013.8 MiB"
time=2024-11-17T22:31:15.156+08:00 level=INFO source=types.go:123 msg="inference compute" id=GPU-7ad40336-8f4e-040f-a38f-8fd70facaffa library=cuda variant=v12 compute=8.6 driver=12.7 name="NVIDIA GeForce RTX 3090" total="24.0 GiB" available="22.8 GiB"
time=2024-11-17T22:31:15.157+08:00 level=INFO source=types.go:123 msg="inference compute" id=GPU-85678040-95fe-8eaa-e42b-36921fc7f9b5 library=cuda variant=v12 compute=8.6 driver=12.7 name="NVIDIA GeForce RTX 3090" total="24.0 GiB" available="22.8 GiB"
[GIN] 2024/11/17 - 22:31:22 | 200 | 1.7617ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/11/17 - 22:31:53 | 200 | 564µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/11/17 - 22:32:24 | 200 | 577µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/11/17 - 22:32:55 | 200 | 553.1µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/11/17 - 22:33:00 | 200 | 365.4µs | ::1 | GET "/"
[GIN] 2024/11/17 - 22:33:00 | 404 | 0s | ::1 | GET "/favicon.ico"
[GIN] 2024/11/17 - 22:33:26 | 200 | 644.9µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/11/17 - 22:33:50 | 404 | 1.0706ms | ::1 | POST "/api/generate"
[GIN] 2024/11/17 - 22:33:56 | 200 | 1.0949ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/11/17 - 22:34:27 | 200 | 604.3µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/11/17 - 22:34:58 | 200 | 0s | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/11/17 - 22:35:29 | 200 | 561.8µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/11/17 - 22:36:01 | 200 | 569.4µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/11/17 - 22:36:33 | 200 | 535.1µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/11/17 - 22:37:04 | 200 | 804.1µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/11/17 - 22:37:35 | 200 | 524µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/11/17 - 22:38:06 | 200 | 572.3µs | 127.0.0.1 | GET "/api/tags"

This is the log file.


@rick-github commented on GitHub (Nov 17, 2024):

HTTP_PROXY:http://127.0.0.1:1080

Set NO_PROXY=127.0.0.1 in the client environment.
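
A sketch of doing that from the PowerShell session the client runs in (adding localhost to the exemption list is an extra assumption for convenience):

# Exempt the local Ollama server from the proxy for this session
$env:NO_PROXY = "127.0.0.1,localhost"

# Persist it for your user account (takes effect in newly opened shells)
setx NO_PROXY "127.0.0.1,localhost"

ollama list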


@zhoufy20 commented on GitHub (Nov 17, 2024):

Ok, I tried but no success:

![image](https://github.com/user-attachments/assets/2c361fd8-41d7-42df-bada-7dcd49c782f8)
![image](https://github.com/user-attachments/assets/f77963fc-7de1-4084-9f0e-7c42eceddf4b)
![image](https://github.com/user-attachments/assets/bd51a6a4-e8c4-44ee-a930-b6f5bdfdc16b)
![image](https://github.com/user-attachments/assets/a1c69474-1769-4225-94d2-dfa5b6d7d1c4)

Thanks a lot!


@rick-github commented on GitHub (Nov 17, 2024):

Set `NO_PROXY` in the **client** environment.

You want the server to be able to connect to the internet via your proxy on 127.0.0.1:1080, so you need to set `HTTPS_PROXY` but not `NO_PROXY` in the **server** environment.

You want the client to be able to connect to the ollama server at 127.0.0.1:11434, so you need to set `NO_PROXY` or not set `HTTP_PROXY` in the **client** environment.
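
One way to realize that split, as a sketch assuming the server is started from its own shell with "ollama serve" rather than the tray app:

# Server shell: keep the proxy for outbound pulls, no NO_PROXY set
$env:HTTPS_PROXY = "http://127.0.0.1:1080"
Remove-Item Env:NO_PROXY -ErrorAction SilentlyContinue
ollama serve

# Client shell: exempt the local server from the proxy
$env:NO_PROXY = "127.0.0.1"
ollama run qwen2.5-coder:7b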


@zhoufy20 commented on GitHub (Nov 17, 2024):

> 127.0.0.1:1080

Sorry, I am not very familiar with Windows environment variables. Is this right?

PS C:\Users\Feiyu> ollama run qwen
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/qwen/manifests/latest": proxyconnect tcp: dial tcp 127.0.0.1:1080: connectex: No connection could be made because the target machine actively refused it.

![image](https://github.com/user-attachments/assets/6c0c668b-8d58-4fd4-afb2-80e7397cf071)

![image](https://github.com/user-attachments/assets/5fa0ba35-b3e5-409f-b7fc-522bd0f9bfbb)
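
That "connectex ... actively refused" error means nothing is accepting connections on 127.0.0.1:1080. A quick sketch to verify the proxy is actually listening before pointing the proxy variables at it:

# Is the proxy really listening on port 1080? Check TcpTestSucceeded in the output.
Test-NetConnection -ComputerName 127.0.0.1 -Port 1080

# If it is not, either start/configure the proxy, or clear the proxy variables to pull directly
Remove-Item Env:HTTP_PROXY, Env:HTTPS_PROXY -ErrorAction SilentlyContinue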


@rick-github commented on GitHub (Nov 17, 2024):

127.0.0.1:1080: connectex: No connection could be made because the target machine actively refused it.

Your proxy needs to be configured. https://clashfor.org/tutorial.html


@zhoufy20 commented on GitHub (Nov 17, 2024):

> HTTP_PROXY

Thanks a lot for your detailed explanation; it is OK now.


@CanglanXYA commented on GitHub (Dec 1, 2024):

Same problem. I tried setting the host env variable to 127.0.0.1; the client could then connect to the instance but failed to use http_proxy on the same host. If I comment out this variable, the client can't connect to the instance at all.

Under both of these circumstances ollama can still be called by openwebui and run LLMs successfully. I guess the problem comes from the ollama client; the current version is 0.4.6.

I can't get any meaningful log even after enabling logging via env; the client only returns "127.0.0.1:11434 bind already in use" if I run "ollama serve" again.


@rick-github commented on GitHub (Dec 1, 2024):

"127.0.0.1:11434 bind already in use"

You need to stop the old server before running a new server.
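
A sketch of finding and stopping whatever already owns the port on Windows (for a container setup, stop the container instead):

# Which process owns port 11434?
Get-Process -Id (Get-NetTCPConnection -LocalPort 11434 -State Listen).OwningProcess

# Stop the existing Ollama processes (tray app and server) before starting a new one
Get-Process -Name "ollama*" -ErrorAction SilentlyContinue | Stop-Process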


@CanglanXYA commented on GitHub (Dec 1, 2024):

"127.0.0.1:11434 bind already in use"

You need to stop the old server before running a new server.

thx, I would check my container later


@CHEN1998-xinyu commented on GitHub (Dec 4, 2024):

Were you able to solve this problem? I have the same problem; can I chat with you on WeChat?


@rick-github commented on GitHub (Dec 4, 2024):

Open a new ticket. Describe your problem. Add [server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues).


@CHEN1998-xinyu commented on GitHub (Dec 5, 2024):

Thank you, after reinstalling the issue is resolved.


@Eyion commented on GitHub (Jan 12, 2025):

Running "ollama run qwen2.5:7b" gives "Error: could not connect to ollama app, is it running?"
Running "ollama serve" gives "Error: listen tcp 127.0.0.1:11434: bind: An attempt was made to access a socket in a way forbidden by its access permissions."


@rick-github commented on GitHub (Jan 12, 2025):

https://github.com/ollama/ollama/issues/5795#issuecomment-2239525601
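
For reference, that "forbidden by its access permissions" bind error on Windows usually means the port sits inside a reserved/excluded TCP port range (or another service holds it). A quick check, plus a workaround of moving Ollama to a free port, as a sketch:

# List TCP port ranges Windows has reserved; look for a range containing 11434
netsh interface ipv4 show excludedportrange protocol=tcp

# If 11434 is inside an excluded range, point Ollama at a free port instead
# (127.0.0.1:11435 here is just an example; pick any port outside the excluded ranges)
setx OLLAMA_HOST "127.0.0.1:11435"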


Reference: github-starred/ollama#4914