[GH-ISSUE #3504] I can't pull any models #64196

Open
opened 2026-05-03 16:30:37 -05:00 by GiteaMirror · 61 comments
Owner

Originally created by @jsrcode on GitHub (Apr 5, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3504

What is the issue?

C:\Users\18164>ollama run qwen:0.5b
pulling manifest
Error: pull model manifest: Get "https://ollama.com/token?nonce=pa9U-g8eXWKfTiK3NN_FdQ&scope=repository%!A(MISSING)library%!F(MISSING)qwen%!A(MISSING)pull&service=ollama.com&ts=1712324131": net/http: TLS handshake timeout

What did you expect to see?

Pull the model

Steps to reproduce

Pull the model

Are there any recent changes that introduced the issue?

No

OS

Windows

Architecture

x86

Platform

Docker

Ollama version

0.1.30

GPU

Intel

GPU info

No response

CPU

Intel

Other software

No response

GiteaMirror added the bug label 2026-05-03 16:30:37 -05:00

@jsrcode commented on GitHub (Apr 5, 2024):

C:\Users\18164>ollama pull llama2
pulling manifest
Error: pull model manifest: Get "https://ollama.com/token?nonce=-dL8dGX7EOvm7PlquSf5lw&scope=repository%!A(MISSING)library%!F(MISSING)llama2%!A(MISSING)pull&service=ollama.com&ts=1712326755": net/http: TLS handshake timeout


@jsrcode commented on GitHub (Apr 5, 2024):

This is true for all models


@igorschlum commented on GitHub (Apr 5, 2024):

Hi @jsrcode
I will try to help you. There is an issue with your network configuration, as ollama pull llama2 works for the rest of us and no such problem is reported here.

The error message you're encountering, Error: pull model manifest: Get "https://ollama.com/token?...": net/http: TLS handshake timeout, suggests a problem with establishing a secure connection to the server. This could be due to several reasons, including network issues, firewall restrictions, or problems with SSL certificates. Here are some steps to troubleshoot and potentially resolve the issue:

1 - Check Network Connection: Ensure your internet connection is stable and fast enough. A slow or unstable connection can cause timeouts during the TLS handshake process.

2 - Firewall or Proxy Settings: If you're behind a firewall or using a proxy, it might be blocking or interfering with the connection. Try disabling the firewall temporarily or configuring it to allow connections to ollama.com. If you're using a proxy, ensure it's correctly configured in your environment variables or Ollama's configuration.

3 - SSL Certificate Issues: The error could be related to SSL certificate issues, such as a self-signed certificate. If you're in a controlled environment where you can trust the certificate, you might consider using the --insecure flag with the ollama pull command to bypass SSL certificate verification. However, be cautious with this approach, as it can expose you to security risks.

4 - Environment Variables for Proxy: If you're using a proxy, ensure that the HTTPS_PROXY environment variable is correctly set to point to your proxy server. This is crucial for applications that need to connect to the internet through a proxy (see the sketch after this list).

5 - Restart Ollama Service: Sometimes, simply restarting the Ollama service can resolve transient issues. Use the appropriate command for your operating system to restart the service.

6 - Manual Pull Attempts: As a workaround, you can try pulling the model multiple times in quick succession. This approach has been reported to sometimes bypass the issue, especially if it's related to temporary network glitches or server-side issues.

7 - Try From Another Network: Can you try from another network? Can you share your network configuration, to see whether you are behind a company network, a university network, or a home provider network? If your network is managed by an in-house administrator, you can ask them to help you.

Remember, when dealing with network issues or SSL certificates, always ensure you're following best practices for security and privacy.

Let us know here if you find a solution so Ollama could display a better-documented error message if possible.
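
As a minimal sketch of steps 2 and 4, assuming a hypothetical proxy at proxy.example.com:3128 (substitute your own):

export HTTPS_PROXY=https://proxy.example.com:3128
ollama pull llama2

# The TLS handshake can also be tested directly, independently of Ollama:
curl -v https://ollama.com/ -o /dev/null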


@ajwillia69 commented on GitHub (Apr 6, 2024):

PS C:\WINDOWS\system32> ollama run llama2
Error: error loading model C:\Users\ajwil\.ollama\models\blobs\sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
PS C:\WINDOWS\system32>
Same problem here: I ran one model, then pulled another model, and now it won't run any model.


@igorschlum commented on GitHub (Apr 6, 2024):

@ajwillia69 I think the issue you're facing is different, as it's not a network issue but rather a memory issue or a model-naming issue. Could you post a new issue and try with tiny models? (Search "tiny" in the list of models.)


@jsrcode commented on GitHub (Apr 6, 2024):

> (quoting @igorschlum's troubleshooting steps above, machine-translated into Chinese)

There is nothing wrong with the firewall, and ollama.com is accessible normally in the browser, but I get this error when pulling the model.


@xiehongxin commented on GitHub (Apr 7, 2024):

Maybe you can try ping ollama.com to check your network.
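
ICMP may be blocked even where HTTPS works, so a TLS-level probe is often more telling; a generic OpenSSL check (an assumption on my part, not an Ollama tool) would be:

openssl s_client -connect ollama.com:443 -servername ollama.com </dev/null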


@igorschlum commented on GitHub (Apr 11, 2024):

@jsrcode did you try from another location? You did not answer whether you are at home or at a university. Did you try the new version 0.1.31?


@Seedmanc commented on GitHub (Apr 12, 2024):

Same here. Clearly "Works for us" is not acceptable here.


@igorschlum commented on GitHub (Apr 12, 2024):

@Seedmanc It works for hundreds of users, so we have to find out what the issue is in particular configurations. One solution would be to allow downloading a model manually and installing it manually in the directory.
Another solution would be to have replicated servers, so models can be downloaded from other parts of the world.
In which country are you?


@Seedmanc commented on GitHub (Apr 12, 2024):

I'm not alone, as indicated by the opener of this issue and here's another link: https://forums.docker.com/t/docker-ollama-error-pull-model-manifest-get-https-registry-ollama-ai-v2-library-llama2-manifests-latest/140256/2

> which country

Russia. I expected this question, but no, VPN doesn't help. Tried several of them.

So far the only thing that worked was installing Ollama on Colab, pulling the models there, and then downloading them from Colab and putting them manually in a folder on my system. This is a terrible amount of hoops to go through just to get started.


@igorschlum commented on GitHub (Apr 12, 2024):

OK, there is an issue when downloading from certain countries because of proxies or limitations. The message should at least be clearer. I hope the Ollama team can take this point and offer a manual download of models.


@phpadminer commented on GitHub (Apr 22, 2024):

This may require a VPN, with the command line also going through it; even with the VPN on, it may still fail. Just try a few more times; at least that's how it worked for me.


@Seedmanc commented on GitHub (May 2, 2024):

So I've tried a lot of different things and some of them must have worked; it does pull models now. I can't redo them one by one to tell for sure which, but I can list a few.

  • Went to internet settings and enabled various TLS and SSL versions.
  • Added ollama.ai to trusted sites
  • Removed some weird self-made certificates I had in my personal storage
  • Downloaded the certificate from ollama.ai's site and added it to the trusted root store
  • might have been something else too

Perhaps some of you with better knowledge might pick the action from this list that most likely fixed the issue.
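
For the certificate step, one hedged way to fetch and inspect the server's certificate chain before trusting it is a plain OpenSSL call (generic tooling, not something Ollama ships):

openssl s_client -connect ollama.com:443 -servername ollama.com -showcerts </dev/null > chain.pem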

On an unrelated note I have a similar TLS problem with a Unity game that tries to access Google docs on launch and hangs when it fails due to "Curl error 60: Cert verify failed: UNITYTLS_X509VERIFY_FLAG_USER_ERROR1". I was hoping the nature of the problem is the same as I had here and now that it's fixed, the game would also work. It didn't.


@bmizerany commented on GitHub (May 9, 2024):

Hello, Everyone!

At Ollama we're working on a solution to this issue, and have been seeing some positive results!

Now we need your help testing in your environments as well!

How to help:

  1. Run a test pull through our staging server

    From the list below, pick one (or many) of the models that you have not pulled already, and perform a pull.

    ollama pull issue1736.ollama.dev/library/llama3:8b
    ollama pull issue1736.ollama.dev/library/gemma:2b
    ollama pull issue1736.ollama.dev/library/mistral
    ollama pull issue1736.ollama.dev/library/dolphin-mistral
    ollama pull issue1736.ollama.dev/library/wizardlm2
    ollama pull issue1736.ollama.dev/library/llava-phi3
    ollama pull issue1736.ollama.dev/library/llava-llama3
    ollama pull issue1736.ollama.dev/library/dolphin-phi
    ollama pull issue1736.ollama.dev/library/nomic-embed-text
    ollama pull issue1736.ollama.dev/library/phi3
    ollama pull issue1736.ollama.dev/library/orca-mini
    
  2. Remove and retry 2 or 3 more times

    ollama rm issue1736.ollama.dev/library/<model>[:<tag>]
    ollama pull issue1736.ollama.dev/library/<model>[:<tag>]
    
  3. Report back!

    Please respond here answering these questions to the best of your ability:

    • What was the full ollama pull command you ran including model?
    • What OS are you running the ollama server on?
    • What speed range did you see? (e.g. 30-50 MB/s)
    • What version of Ollama are you using?
    • What region of the world is your ollama running?
    • What is the top speed of your internet connection?
    • Was it faster, slower, the same as a normal ollama pull <model> for the same model(s)?

Thank you all so much in advance. We look forward to hearing back from you.


@Alchemistqqqq commented on GitHub (May 10, 2024):

> (quoting @bmizerany's test instructions above, machine-translated into Chinese)

[screenshot: failed run/pull attempts]
Hello, let me first explain my environment: I am in China, using a Linux Ubuntu server on the campus network. This causes many one-click operations to fail for network reasons. Following the manual installation tutorial provided, I have installed ollama on the server and can ping the ollama website. But as the picture above shows, I need to use qwen, and both run and the pull operation you provided fail. This problem has been bothering me for a long time. I hope you can provide a detailed manual installation tutorial: because my own host can use a VPN, I now think a better solution is to download the corresponding files on my local machine and drag and drop them to the corresponding location on the server, achieving the same effect as the run command.


@igorschlum commented on GitHub (May 10, 2024):

Hi, I tried with a very bad and with a good internet connection. With the good internet connection it's fast, and with the poor internet connection it's not dropping as it was doing before. When the connection was halted, Ollama said that the connection dropped.
So for me, it's all good.

pulling manifest
pulling 377876be20ba... 36% ▕█████████████ ▏ 841 MB/2.3 GB 2.4 MB/s 10m17s
Error: max retries exceeded: Get "https://issue1736.ollama.dev/v2/library/llava-phi3/blobs/sha256:377876be20bac24488716c04824ab3a6978900679b40013b0d2585004555e658": read tcp 192.168.1.80:50744->66.241.124.100:443: read: connection reset by peer


@pj-connect commented on GitHub (May 16, 2024):

Same issue here.

ollama pull llama3
[GIN] 2024/05/16 - 02:40:05 | 200 |       27.99µs |       127.0.0.1 | HEAD     "/"
pulling manifest ⠦ [GIN] 2024/05/16 - 02:40:20 | 200 | 14.772615735s |       127.0.0.1 | POST     "/api/pull"
pulling manifest 
Error: pull model manifest: Get "https://ollama.com/token?nonce=CaM2X-esOi-e2PKnFp8giw&scope=repository%!A(MISSING)library%!F(MISSING)llama3%!A(MISSING)pull&service=ollama.com&ts=1715827210": net/http: TLS handshake timeout

Is there no debug or verbose option? Even on Discord, some people say it works for them, as proof that there is no issue at all. I very rarely get a TLS handshake timeout elsewhere, but consistently with ollama.com.

Using my VPN, I selected a US server, and now the manifest, and the model, are downloading. So this seems to be strictly a geolocation issue.


@saymanq commented on GitHub (May 16, 2024):

I was getting the TLS handshake timeout, but when I used a VPN and changed my server to the United States it started working as expected just as someone else here has also pointed out. It seems to be a geo restriction problem.


@sunnyisabaster commented on GitHub (May 20, 2024):

I changed the rule to global in my VPN; it's solved.


@igorschlum commented on GitHub (May 20, 2024):

@jsrcode is the issue solved on your side with the latest version of Ollama and the VPN settings as explained by @sunnyisabaster?


@taobiaoli1314 commented on GitHub (May 21, 2024):

> @jsrcode is the issue solved on your side with the latest version of Ollama and the VPN settings as explained by @sunnyisabaster?

The same error. I tried everything, both the Ollama version and the VPN, and they all failed.


@taobiaoli1314 commented on GitHub (May 21, 2024):

I met the same error:
pulling manifest Error: Head "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/6a/6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%!F(MISSING)20240521%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20240521T063315Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=0b9c67a70a1f8baba2a93135998888e394f987bf417a3b126995e05ed27b60ea": net/http: TLS handshake timeout


@pj-connect commented on GitHub (May 21, 2024):

Simply set your local VPN to select and use a server in the USA, to make it seem that your internet traffic originates from the USA.

To achieve the appearance that your internet traffic originates from the United States, configure your local VPN to connect to a server located within the USA. This will mask your actual location and provide you with an American IP address, making it seem as if your online activity is taking place within the United States.

For instance, one practical application of this technique can be seen in accessing region-locked content on streaming services like Netflix, or better, the ollama server. Many shows and movies are available exclusively to U.S. viewers due to licensing agreements. By using a VPN to connect to a U.S. server, international users can bypass these geographical restrictions and gain access to a broader library of content.


@smartexpert commented on GitHub (May 22, 2024):

I'm running Docker on Linux and encountered a certificate verification error. I was able to solve it by adding the custom certificate and building a new Docker image based on the docs here: https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-use-ollama-behind-a-proxy
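
A minimal sketch of that approach, assuming your custom certificate is saved as my-ca.pem next to the Dockerfile (this follows the pattern described in the linked FAQ):

FROM ollama/ollama
COPY my-ca.pem /usr/local/share/ca-certificates/my-ca.crt
RUN update-ca-certificates

Then build with docker build -t ollama-with-ca . and run that image in place of ollama/ollama.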


@quanta-guy commented on GitHub (Jun 6, 2024):

> (quoting the issue description above)

Try Using Alternative DNS Servers: You can try changing your DNS servers to Google's public DNS (8.8.8.8 and 8.8.4.4) or Cloudflare's DNS (1.1.1.1). This worked for me
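
On Linux with systemd-resolved, a rough equivalent might be (the interface name eth0 is a placeholder):

sudo resolvectl dns eth0 8.8.8.8 8.8.4.4
resolvectl query ollama.com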


@weipengzou commented on GitHub (Jun 6, 2024):

set my VPN to "TUN mode"

and ollama pull gemma:2b

that worked for me.


@chris-at-work commented on GitHub (Jun 13, 2024):

Try downgrading:

curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION="0.1.29" sh

This solved connection reset errors on any and all models for me. ⚠️ This will reinstall, meaning any edits to systemd files or anything else the installer does will be lost.

I don't know the connection between the CLI and servers, but maybe ollama is rearchitecting?


@biandan commented on GitHub (Jul 14, 2024):

The new version still does not work on Windows or WSL Linux; Ollama version is 0.2.5.

> > @jsrcode is the issue solved on your side with the latest version of Ollama and the VPN settings as explained by @sunnyisabaster?
>
> The same error. I tried everything, both the Ollama version and the VPN, and they all failed.


@shuurik commented on GitHub (Jul 14, 2024):

pulling manifest
Error: Head "6a0746a1ec/data!F(MISSING)20240714%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20240714T140258Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=1383414da685fbce952259fd7c69196641bed1f9a5cf49661feb8c675a45c9bc": dial tcp 104.18.9.90:443: i/o timeout


@luckydevil13 commented on GitHub (Jul 29, 2024):

same here pulling manifest Error: Head "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/e1/e16120252a9b0e49ed8074d11838d8b0227957a09d749d18425e491243e13822/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%!F(MISSING)20240729%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20240729T052444Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=34aeb1c147c02d5cd23f9d6fbf05318f6e77a738c83e55cec1d9f9c234a4afac": dial tcp 188.114.98.224:443: i/o timeout


@igorschlum commented on GitHub (Aug 10, 2024):

@jsrcode and others who still face this issue: could you try with version 0.3.4 of Ollama?
Are you able to download GGUF files from HuggingFace?
Thank you for the update.


@xiehongxin commented on GitHub (Aug 25, 2024):

Hello, I have received your email and will reply to you as soon as possible after reviewing it. Thank you for your trouble, and best wishes!


@PrashantSakre commented on GitHub (Jan 17, 2025):

Hi, need help here too.
I am getting the below error. Also, 'ollama show codeollama' doesn't work (Error: model 'codellama:7b' not found).

pulling manifest
pulling 3a43f93b78ec... 0% ▕ ▏ 0 B/3.8 GB
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/3a/3a43f93b78ec50f7c4e4dc8bd1cb3fff5a900e7d574c51a6f7495e48486e0dac/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20250117%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20250117T164253Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=3d8bbf471d6df33017b9930048c7beb174c876cb9cb9bae9a2651e0065a2b5ce": dial tcp: lookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com: no such host


@xiehongxin commented on GitHub (Jan 17, 2025):

Hello, I have received your email and will reply to you as soon as possible after reviewing it. Thank you for your trouble, and best wishes!


@erix22 commented on GitHub (Jan 24, 2025):

Hi,

I installed a new PC yesterday and just installed Ollama this morning,
and I cannot pull DeepSeek-R1; I get
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/96/96c415656d377a..............
net/http: TLS handshake timeout all the time...

I started on Wi-Fi and am now on an ethernet cable, but it's still the same...
Is there something going on today?

Cloudflare?

Thanks in advance


@erix22 commented on GitHub (Jan 24, 2025):

Well, here is how I solved the problem, or how it disappeared...

I copied the URL from my previous post from when I was trying to
pull DeepSeek-r1 and then pasted it into my browser.
It asked me where I wanted to save the "thing"
(because I did not know what it was at the moment)
and... it worked without a single problem.

So, I am not an expert, including in Linux, networking, or Ollama, but I am sure
there are a lot of experts who will read this message, and I really hope
one of them will come and tell me what was wrong..?

for the record:

Memory:
System RAM: total: 64 GiB available: 58.73 GiB used: 2.79 GiB (4.7%)
Message: For most reliable report, use superuser + dmidecode.
Array-1: capacity: 96 GiB note: est. slots: 2 modules: 2 EC: None
max-module-size: 48 GiB note: est.
Device-1: Channel-A DIMM 0 type: DDR5 detail: synchronous unbuffered
(unregistered) size: 16 GiB speed: 5600 MT/s volts: note: check curr: 1
min: 1 max: 1 width (bits): data: 64 total: 64 manufacturer: Crucial
part-no: CT16G56C46S5.C8D serial: E94C903B
Device-2: Channel-B DIMM 0 type: DDR5 detail: synchronous unbuffered
(unregistered) size: 48 GiB speed: 5600 MT/s volts: note: check curr: 1
min: 1 max: 1 width (bits): data: 64 total: 64
manufacturer: Micron Technology part-no: CT48G56C46S5.M16B1
serial: EB029679
System:
Host: geeka8 Kernel: 6.8.0-51-generic arch: x86_64 bits: 64
Desktop: Xfce v: 4.18.1 Distro: Linux Mint 22 Wilma
Graphics:
Device-1: AMD Phoenix3 driver: amdgpu v: kernel
Display: x11 server: X.Org v: 21.1.11 with: Xwayland v: 23.2.6 driver: X:
loaded: amdgpu unloaded: fbdev,modesetting,vesa dri: radeonsi gpu: amdgpu
resolution: 1920x1080~60Hz
API: EGL v: 1.5 drivers: radeonsi,swrast platforms: x11,surfaceless,device
API: OpenGL v: 4.6 compat-v: 4.5 vendor: amd mesa v: 24.0.9-0ubuntu0.3
renderer: AMD Radeon Graphics (radeonsi gfx1103_r1 LLVM 17.0.6 DRM 3.57
6.8.0-51-generic)

Browser: Firefox Mint flavored Mint-001-1.0 134.0.2 (64-bit)
ollama version is 0.5.7

PING ollama.com (34.36.133.15) 56(84) bytes of data.
64 bytes from 15.133.36.34.bc.googleusercontent.com (34.36.133.15): icmp_seq=1 ttl=112 time=1173 ms
64 bytes from 15.133.36.34.bc.googleusercontent.com (34.36.133.15): icmp_seq=2 ttl=112 time=1131 ms
--- ollama.com ping statistics ---
10 packets transmitted, 9 received, 10% packet loss, time 10559ms
rtt min/avg/max/mdev = 824.670/1191.051/1523.466/215.513 ms, pipe 2

geeka8:# nslookup ollama.com
Server: 127.0.0.53
Address: 127.0.0.53#53

Non-authoritative answer:
Name: ollama.com
Address: 34.36.133.15

geeka8:# dig ollama.com

; <<>> DiG 9.18.30-0ubuntu0.24.04.1-Ubuntu <<>> ollama.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 47908
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;ollama.com. IN A

;; ANSWER SECTION:
ollama.com. 166 IN A 34.36.133.15

;; Query time: 0 msec
;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
;; WHEN: Fri Jan 24 16:55:50 CET 2025
;; MSG SIZE rcvd: 55


If I may, which packages are supposed to be installed before the
installation of Ollama?

Now, where and how do I move the "thing" to make it usable by my local instance
of Ollama? It is actually in my Downloads dir; where should I move it?
What name should I give it? etc etc

I can't even pull "deepseek-r1:1.5b".
My location is France and I am not using a VPN.

I have the same problem with another PC on another network with another ISP...

any ideas ???


@rick-github commented on GitHub (Jan 24, 2025):

https://github.com/ollama/ollama/issues/8535#issuecomment-2613241807


@erix22 commented on GitHub (Jan 25, 2025):

Thank you @rick-github.
I also noticed some strange access rights on the Ollama directory (GID).
Thank you for your help indeed.


@martinwozenilek commented on GitHub (Jan 25, 2025):

Same problem here with a Jetson Orin: can't pull any model with ollama, TLS timeout. In the end I downloaded the models manually and made some adjustments to the filenames and access rights. I used this for the download:

https://github.com/amirrezaDev1378/ollama-model-direct-download

But of course the PowerShell script will help just the same.

After the download I had to correct the filenames from just "data" to "sha256-92348238...". The filenames need to have a dash after "sha256", not a ":" as written in the manifest file.

After a first ollama run, the manifest file gets rewritten and everything is good to go!

The blob directory must look like this:

root@orin:~# ls -l /usr/share/ollama/.ollama/models/blobs/
total 4573336
-rw-r--r-- 1 ollama ollama        387 Jan 24 20:41 sha256-369ca498f347f710d068cbb38bf0b8692dd3fa30f30ca2ff755e211c94768150
-rw-r--r-- 1 ollama ollama        487 Jan 24 20:54 sha256-40fb844194b25e429204e5163fb379ab462978a262b86aadd73d8944445c09fd
-rw-r--r-- 1 ollama ollama       1065 Jan 24 20:41 sha256-6e4c38e1172f42fdbff13edf9a7a017679fb82b0fde415a3e8b3c31c6ed4a4e4
-rw-r--r-- 1 ollama ollama 4683073184 Jan 24 20:41 sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49
-rw-r--r-- 1 ollama ollama        148 Jan 24 20:41 sha256-f4d24e9138dd4603380add165d2b0d970bef471fac194b436ebd50e6147c6588
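
As a sketch, the rename and ownership fix for the large blob in the listing above might look like this (digest copied from that listing; adjust paths to your setup):

cd /usr/share/ollama/.ollama/models/blobs/
mv data sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49
chown ollama:ollama sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49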

And after a first "ollama run" the manifest file looks like this:

root@orin:~# cat /usr/share/ollama/.ollama/models/manifests/registry.ollama.ai/library/deepseek-r1/7b | jq
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": {
    "mediaType": "application/vnd.docker.container.image.v1+json",
    "digest": "sha256:40fb844194b25e429204e5163fb379ab462978a262b86aadd73d8944445c09fd",
    "size": 487
  },
  "layers": [
    {
      "mediaType": "application/vnd.ollama.image.model",
      "digest": "sha256:96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49",
      "size": 4683073184
    },
    {
      "mediaType": "application/vnd.ollama.image.template",
      "digest": "sha256:369ca498f347f710d068cbb38bf0b8692dd3fa30f30ca2ff755e211c94768150",
      "size": 387
    },
    {
      "mediaType": "application/vnd.ollama.image.license",
      "digest": "sha256:6e4c38e1172f42fdbff13edf9a7a017679fb82b0fde415a3e8b3c31c6ed4a4e4",
      "size": 1065
    },
    {
      "mediaType": "application/vnd.ollama.image.params",
      "digest": "sha256:f4d24e9138dd4603380add165d2b0d970bef471fac194b436ebd50e6147c6588",
      "size": 148
    }
  ]
}

@Mrahmani71 commented on GitHub (Jan 27, 2025):

Why is Ollama like this? I've been trying to pull a model for about 2 hours. It downloads about 1 GB and then drops back down to 400 MB. 🤦‍♂️😢

@rick-github commented on GitHub (Jan 27, 2025):

There are problems connecting to the Cloudflare CDN. Use one of the workarounds linked in this post to download the model.

@CasCard commented on GitHub (Jan 27, 2025):

Try this for Windows

1. Fix DNS for the Wi-Fi adapter
   From your ipconfig, your active interface is "Wi-Fi" (not Ethernet). Let's set Google DNS.

   Command Prompt (Admin):

   ```
   netsh interface ip set dns "Wi-Fi" static 8.8.8.8
   netsh interface ip add dns "Wi-Fi" 8.8.4.4 index=2
   ipconfig /flushdns
   ```

   PowerShell (Admin):

   ```
   Set-DnsClientServerAddress -InterfaceAlias "Wi-Fi" -ServerAddresses ("8.8.8.8","8.8.4.4")
   ```

2. Test DNS resolution
   Check if the Cloudflare domain resolves:

   ```
   nslookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com
   ```

   Success: you'll see an IP address like 172.64.xxx.xxx.
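If the name now resolves but pulls still time out, it can help to confirm that the TLS handshake itself completes (a quick sketch; curl.exe ships with recent Windows 10/11):

```
curl -v https://registry.ollama.ai/v2/
```

If the verbose output stalls before a line like `SSL connection using ...`, the handshake is being blocked somewhere (firewall, proxy, or filtering) rather than DNS failing.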

@Mrahmani71 commented on GitHub (Jan 29, 2025):

> Why is Ollama like this? I've been trying to pull a model for about 2 hours. It downloads about 1 GB and then drops back down to 400 MB. 🤦‍♂️😢

I could download the models only when I used a VPN with an American server.

@heshi2019 commented on GitHub (Feb 3, 2025):

This doesn't seem to be an isolated issue. For about a month now, many people have been unable to pull models. From my own observation, switching networks, flushing DNS, toggling a VPN off or on, restarting the Ollama service, or opening the corresponding URL in a browser before pulling again can occasionally help. Another symptom is that the speed drops from 6 MB/s to 100 KB/s in the final stage of the pull.
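Since retries do occasionally succeed, one common stopgap is to loop the pull until it finishes (a sketch, assuming `ollama pull` resumes partially downloaded layers, which it generally does):

```
until ollama pull deepseek-r1:7b; do
  echo "pull failed, retrying in 10 seconds..."
  sleep 10
done
```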

@5UFKEFU commented on GitHub (Feb 7, 2025):

I ran into it too. My internet isn't that bad, but the pull just restarts over and over and stays between 1% and 2%. I had to place the model file manually.

@JiangZhigz5055 commented on GitHub (Feb 13, 2025):

Could it be a server problem? I have been getting this error for a week. My laptop is a MacBook Pro M3 Max with 64 GB of RAM. I installed Ollama from ollama.com. Can anybody help? Thanks.

```
xxxx@MacBook-Pro-M3max ~ % ollama run deepseek-r1:70b
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/deepseek-r1/manifests/70b": read tcp [2408:846a:10:608d:c564:516d:c33c:5ca8]:49232->[2606:4700:3036::6815:4be3]:443: read: connection reset by peer
```
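The trace above shows the connection going out over IPv6 (the [2606:4700:...] peer is a Cloudflare address). A quick check for whether IPv4 behaves differently (a sketch, assuming curl is installed; an HTTP error response is fine here, what matters is whether the connection itself is reset):

```
curl -4 -v https://registry.ollama.ai/v2/library/deepseek-r1/manifests/70b -o /dev/null
curl -6 -v https://registry.ollama.ai/v2/library/deepseek-r1/manifests/70b -o /dev/null
```

If only the -6 attempt is reset, forcing IPv4 (or disabling IPv6 on the interface) may be worth a try.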

@CRASH-Tech commented on GitHub (Feb 13, 2025):

+1

@VanemKrAu commented on GitHub (Feb 22, 2025):

There is a file that cannot be downloaded for some reason; below is the error output:

```
PS C:\Users\Vanem> ollama run hf.co/bartowski/gemma-2-9b-it-abliterated-GGUF:Q4_K_M
pulling manifest
pulling 88d84ac97967... 100% ▕████████████████████████████████████████████████████████▏ 5.8 GB
pulling e0a42594d802... 0% ▕ ▏ 0 B/ 358 B
Error: max retries exceeded: Get "https://huggingface.co/v2/bartowski/gemma-2-9b-it-abliterated-GGUF/blobs/sha256:e0a42594d802e5d31cdc786deb4823edb8adff66094d49de8fffe976d753e348?__sign=eyJhbGciOiJFZERTQSJ9.eyJyZWFkIjp0cnVlLCJwZXJtaXNzaW9ucyI6eyJyZXBvLmNvbnRlbnQucmVhZCI6dHJ1ZX0sImlhdCI6MTc0MDIzMjcwMSwic3ViIjoiL2JhcnRvd3NraS9nZW1tYS0yLTliLWl0LWFibGl0ZXJhdGVkLUdHVUYiLCJleHAiOjE3NDAyMzMzMDEsImlzcyI6Imh0dHBzOi8vaHVnZ2luZ2ZhY2UuY28ifQ.EajXgKPztGGet2vc4EvabvsiDLdikeUaJZNqtavrkIt4rwNnX_Glhr8K7pZNwhF-DTDTygmyD5xOIaySf0HkAQ": dial tcp 157.240.20.18:443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
```
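When the hf.co pull route is blocked like this (it can also be worth checking that DNS returns a plausible Hugging Face address rather than the unresponsive 157.240.20.18 seen above), one workaround is to fetch the GGUF directly through whatever channel works for you (browser, proxy, mirror) and import it locally. A sketch; the exact file name under resolve/main is an assumption to verify on the repository page:

```
# Hypothetical direct download; verify the actual GGUF file name on the repo page.
curl -L -o gemma2-9b-abliterated-q4km.gguf \
  "https://huggingface.co/bartowski/gemma-2-9b-it-abliterated-GGUF/resolve/main/gemma-2-9b-it-abliterated-Q4_K_M.gguf"
printf 'FROM ./gemma2-9b-abliterated-q4km.gguf\n' > Modelfile
ollama create gemma2-abliterated -f Modelfile   # import the local GGUF
ollama run gemma2-abliterated
```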

@VanemKrAu commented on GitHub (Feb 22, 2025):

> There is a file that cannot be downloaded for some reason; below is the error output: […]

I finally solved this damn problem: I used an acceleration service for Hugging Face inside the Steam accelerator, and now I can pull it. This is so insane.

@kewlcode commented on GitHub (Mar 14, 2025):

Still getting a domain lookup error:

```
dial tcp: lookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com: no such host
```

@xiehongxin commented on GitHub (Mar 14, 2025):

Hello, I have received your email. I will review it and reply as soon as possible. Thank you for your trouble, and best wishes!

@rick-github commented on GitHub (Mar 14, 2025):

Follow these instructions: https://github.com/ollama/ollama/issues/8605#issuecomment-2639100703

@kewlcode commented on GitHub (Mar 14, 2025):

> Follow these instructions: #8605 (comment)

Editing the hosts file resolved the issue, but I hope someone fixes this longstanding problem. If the server IP addresses change, I'll have to edit the hosts file manually again. :-(
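For reference, that workaround amounts to pinning the CDN hostname in the hosts file. A sketch for Linux/macOS (the address is whatever a working DNS server returns; as noted above, it can change, so the entry may need refreshing):

```
HOST=dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com
IP=$(dig +short @8.8.8.8 "$HOST" | head -n1)   # resolve via a public DNS server
echo "$IP $HOST" | sudo tee -a /etc/hosts      # pin it
```

On Windows, add the same `IP hostname` line to C:\Windows\System32\drivers\etc\hosts instead.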

@pragnesh-singh-rajput commented on GitHub (Mar 26, 2025):

> Try this for Windows
>
> 1. Fix DNS for the Wi-Fi adapter […]
> 2. Test DNS resolution […]

This worked for me...

@adityanema2004 commented on GitHub (May 20, 2025):

> Try this for Windows
>
> 1. Fix DNS for the Wi-Fi adapter […]
> 2. Test DNS resolution […]

It worked, thanks!!

@tianlichunhong commented on GitHub (Jun 9, 2025):

On Windows, if I set the system proxy, ollama pull doesn't work. If I set https_proxy and all_proxy in the system environment, traffic to Ollama's port 11434 also goes through the proxy. So I think there is a bug in Ollama: the service it exposes should not go through the proxy. If a proxy is required for the service, it should be a dedicated ollama_proxy setting rather than the system environment proxy; only pull needs to use the system proxy.
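If the goal is for pulls to use the proxy while local API traffic to 127.0.0.1:11434 stays direct, the standard Go proxy environment variables, which Ollama honors, can already express that; a sketch with a hypothetical proxy address:

```
export HTTPS_PROXY=http://proxy.example.com:8080   # hypothetical proxy; registry pulls go through it
export NO_PROXY=localhost,127.0.0.1                # clients reach the local API directly
ollama serve
```

On Windows, set HTTPS_PROXY and NO_PROXY as environment variables for the Ollama service to the same effect.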

@xiehongxin commented on GitHub (Jun 9, 2025):

Hello, I have received your email. I will review it and reply as soon as possible. Thank you for your trouble, and best wishes!

@sameepvicky commented on GitHub (Jun 29, 2025):

![Image](https://github.com/user-attachments/assets/804bbb50-cf38-43e7-a48c-aa6fab6018d4)

@xiehongxin commented on GitHub (Jun 29, 2025):

Hello, I have received your email. I will review it and reply as soon as possible. Thank you for your trouble, and best wishes!

@Potracheno commented on GitHub (Jul 15, 2025):

Same here. The problem seems to be on Cloudflare's side, which doesn't treat all countries equally. It works with a US VPN.
