[GH-ISSUE #13770] Cloud model connection timeout when using official Ollama client #55538

Open
opened 2026-04-29 09:22:14 -05:00 by GiteaMirror · 8 comments

Originally created by @chenchongming512 on GitHub (Jan 19, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/13770

What is the issue?

Description

When trying to use cloud models in the official Ollama client, I'm getting a connection timeout error.

Steps to Reproduce

  1. Open Ollama official client
  2. Attempt to use any cloud model
  3. Get connection timeout error

Error Message

500 Internal Server Error: Post "https://ollama.com:443/api/chat?ts=1768793788": dial tcp 34.36.133.15:443: i/o timeout

What I've Tried

  1. Restarted Ollama client
  2. Checked internet connection
  3. Local models work fine

Additional Information

PING ollama.com (34.36.133.15): 56 data bytes
64 bytes from 34.36.133.15: icmp_seq=0 ttl=104 time=223.880 ms
64 bytes from 34.36.133.15: icmp_seq=1 ttl=104 time=231.907 ms
64 bytes from 34.36.133.15: icmp_seq=2 ttl=104 time=251.540 ms
64 bytes from 34.36.133.15: icmp_seq=3 ttl=104 time=298.792 ms
64 bytes from 34.36.133.15: icmp_seq=4 ttl=104 time=228.120 ms
--- ollama.com ping statistics ---
5 packets transmitted, 5 packets received, 0.0% packet loss
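
A successful ping only shows ICMP reachability; the failing dial is TCP on port 443, which can be filtered even when ICMP passes. A quick way to check the TCP and TLS paths separately, assuming the stock macOS nc and curl:

$ nc -vz -w 5 ollama.com 443
$ curl -sv --max-time 10 https://ollama.com -o /dev/null

If nc reports the port open but curl stalls or resets, the block is happening during or after the TLS handshake rather than at connection setup.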

Questions

Are cloud models currently available?

Relevant log output

[GIN] 2026/01/19 - 11:38:28 | 200 |    5.825583ms |       127.0.0.1 | GET      "/api/tags"
time=2026-01-19T11:38:44.419+08:00 level=ERROR source=routes.go:1809 msg="Post \"https://ollama.com/api/me?ts=1768793894\": dial tcp 34.36.133.15:443: i/o timeout"
[GIN] 2026/01/19 - 11:38:44 | 200 |  30.00169625s |       127.0.0.1 | POST     "/api/me"
[GIN] 2026/01/19 - 11:38:58 | 200 |     6.76375ms |       127.0.0.1 | GET      "/api/tags"
time=2026-01-19T11:39:14.928+08:00 level=ERROR source=routes.go:1809 msg="Post \"https://ollama.com/api/me?ts=1768793924\": dial tcp 34.36.133.15:443: i/o timeout"
[GIN] 2026/01/19 - 11:39:14 | 200 | 30.001228208s |       127.0.0.1 | POST     "/api/me"
time=2026-01-19T11:39:16.281+08:00 level=ERROR source=routes.go:1809 msg="Post \"https://ollama.com/api/me?ts=1768793955\": read tcp 10.10.55.16:57012->34.36.133.15:443: read: connection reset by peer"
[GIN] 2026/01/19 - 11:39:16 | 200 |  346.957916ms |       127.0.0.1 | POST     "/api/me"
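
The repeated ~30 s POST durations in the log match a 30-second dial timeout, and 34.36.133.15 appears to be a Google Cloud load-balancer address, so the network path itself is worth a look. A rough sketch with the stock macOS dig and traceroute:

$ dig +short ollama.com
$ traceroute -n -w 2 -m 20 ollama.com

If the trace dies several hops before the destination, the problem is upstream of both the client and ollama.com.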

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

Version 0.14.2

GiteaMirror added the bug label 2026-04-29 09:22:14 -05:00

@venidicii commented on GitHub (Jan 19, 2026):

Maybe only IPs in mainland China are affected.


@chenchongming512 commented on GitHub (Jan 19, 2026):

But it doesn't work even with a VPN.
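
One possibility worth ruling out here (an assumption, not a confirmed cause): the Ollama server runs as a background process and may not inherit a proxy that only the browser or VPN app configures, since Go programs read the HTTPS_PROXY environment variable rather than the macOS system proxy settings. Ollama's FAQ describes setting it for the GUI app via launchctl; the proxy address below is an example:

$ launchctl setenv HTTPS_PROXY http://127.0.0.1:7890

Then quit and relaunch the Ollama app so it picks up the variable.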


@rick-github commented on GitHub (Jan 19, 2026):

Cloud models appear to be working fine.

$ OLLAMA_HOST=ollama.com ollama run rnj-1:8b hello
Hello! How can I assist you today?
$ OLLAMA_HOST=ollama.com ollama run kimi-k2:1t hello
Hello! How can I help you today?
$ OLLAMA_HOST=ollama.com ollama run deepseek-v3.1:671b hello --think=false
Hello! 😊 How can I help you today?
$ OLLAMA_HOST=ollama.com ollama run mistral-large-3:675b hello 
Hello! 😊 How can I assist you today? Whether you have a question, need help with something, or just want to chat, I'm here for you!

@venidicii commented on GitHub (Jan 19, 2026):

> Cloud models appear to be working fine.
>
> $ OLLAMA_HOST=ollama.com ollama run rnj-1:8b hello
> Hello! How can I assist you today?
> $ OLLAMA_HOST=ollama.com ollama run kimi-k2:1t hello
> Hello! How can I help you today?
> $ OLLAMA_HOST=ollama.com ollama run deepseek-v3.1:671b hello --think=false
> Hello! 😊 How can I help you today?
> $ OLLAMA_HOST=ollama.com ollama run mistral-large-3:675b hello
> Hello! 😊 How can I assist you today? Whether you have a question, need help with something, or just want to chat, I'm here for you!

$ OLLAMA_HOST=ollama.com ollama run kimi-k2:1t hello
Error: Head "https://ollama.com:443/?ts=1768841986": read tcp 192.168.2.88:44968->34.36.133.15:443: read: connection reset by peer

My IP: 61.159.205.76.
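
To quantify how intermittent the resets are, a small loop works; this is a sketch, with the model and iteration count chosen arbitrarily:

$ for i in $(seq 1 10); do OLLAMA_HOST=ollama.com ollama run rnj-1:8b hello >/dev/null 2>&1 && echo "$i ok" || echo "$i failed"; done

A mix of ok and failed lines from the same network would point at per-connection filtering rather than a hard block.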


@rick-github commented on GitHub (Jan 19, 2026):

I think you'll have to take it up with your Internet Service Provider or the administration of the Great Firewall.


@ghost commented on GitHub (Jan 19, 2026):

@chenchongming512
Looks like a timeout / network edge issue. Try running `npx ai-patch doctor` to see if it suggests a fix.


@fuhua2019 commented on GitHub (Jan 20, 2026):

When using the Ollama client to access cloud models, it returns the following error:
500 Internal Server Error: Post "https://ollama.com:443/api/chat?ts=1768919723": read tcp 192.168.3.77:52104->34.36.133.15:443: read: connection reset by peer.
However, the Ollama official website (https://ollama.com/) is accessible normally via a browser.
Has anyone started tracking or resolving this issue?
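
A browser succeeding while the client fails is consistent with the browser honoring the macOS system proxy while the Go-based client dials directly (see the HTTPS_PROXY note above). One way to compare the two paths, assuming stock macOS tools and an example proxy address:

$ scutil --proxy
$ curl -sv --max-time 10 https://ollama.com -o /dev/null
$ curl -sv --max-time 10 -x http://127.0.0.1:7890 https://ollama.com -o /dev/null

If only the proxied curl succeeds, pointing the Ollama app at the same proxy via HTTPS_PROXY should behave the same way.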


@speedyfoxai commented on GitHub (Feb 28, 2026):

Adding another data point — seeing the same issue today (Feb 28, 2026). Cloud models (specifically kimi-k2.5:cloud) load successfully but produce no response. Other cloud models like minimax work fine, so it appears model-specific or related to particular cloud hardware allocation. No errors in logs, just silent failures.
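
For the silent-failure case, server-side debug logging may show what the default logs omit; OLLAMA_DEBUG is a documented troubleshooting variable. A minimal sketch:

$ OLLAMA_DEBUG=1 ollama serve
$ ollama run kimi-k2.5:cloud hello

Run the second command in another terminal and watch the server output for where the request stalls.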


Reference: github-starred/ollama#55538