[GH-ISSUE #676] 403 Forbidden #46815

Closed
opened 2026-04-28 00:20:48 -05:00 by GiteaMirror · 31 comments

Originally created by @daaniyaan on GitHub (Oct 2, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/676

I'm getting this error for all the models.
Setting the HTTP and HTTPS proxy in the terminal also doesn't work.

```
pulling manifest
Error: pull model manifest: on pull registry responded with code 403:
<html><head>
<meta http-equiv="content-type" content="text/html;charset=utf-8">
<title>403 Forbidden</title>
</head>
<body text=#000000 bgcolor=#ffffff>
<h1>Error: Forbidden</h1>
<h2>Your client does not have permission to get URL <code>/v2/library/vicuna/manifests/latest</code> from this server.</h2>
<h2></h2>
</body></html>
```

@technovangelist commented on GitHub (Oct 2, 2023):

What command did you run for this?


@technovangelist commented on GitHub (Oct 2, 2023):

For vicuna, you should use `ollama pull vicuna`.


@technovangelist commented on GitHub (Oct 2, 2023):

Alternatively, there is:

```
curl -X POST http://localhost:11434/api/pull -d '{
  "name": "vicuna"
}'
```

@daaniyaan commented on GitHub (Oct 2, 2023):

> What command did you run for this?

It gives the error for all of them. For example, this one is for mistral; I tried with both `run` and `pull`.

[Screenshot: CleanShot 2023-10-02 at 23 24 36]


@technovangelist commented on GitHub (Oct 2, 2023):

When you run `ollama --version`, what do you get?


@technovangelist commented on GitHub (Oct 2, 2023):

And how did you install it?


@daaniyaan commented on GitHub (Oct 2, 2023):

> When you run ollama --version, what do you get?

`% ollama --version : ollama version 0.1.0`

I downloaded the macOS version from the website and followed the instructions.


@daaniyaan commented on GitHub (Oct 2, 2023):

I tried different IPs; it still doesn't work.

[Screenshot]

I also tried to download from the remote servers with the same IP and they worked.

[Screenshot: CleanShot 2023-10-02 at 23 44 07]

I uninstalled and reinstalled the app on macOS and it still gives the same error.


@technovangelist commented on GitHub (Oct 2, 2023):

How are you connecting to these locations? If you run it from where you are, without any VPN, firewall, or proxy in the way, what do you get?


@daaniyaan commented on GitHub (Oct 2, 2023):

> How are you connecting to these locations? If you run it from where you are without any vpn or firewall or proxy in the way, what do you get?

If I run it from my current location (Iran) without any proxy or VPN, I get the 403 Forbidden error.
I'm connecting to those locations by setting up dynamic SSH port forwarding and using the macOS SOCKS5 proxy, with Privoxy providing the HTTP proxy.
I haven't had any problems with this setup before.

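For reference, a setup like the one described above looks roughly like the following sketch; the host name and ports are placeholders, not taken from the original report:

```
# Dynamic SSH port forwarding: opens a local SOCKS5 proxy on port 1080 that
# tunnels traffic through the remote host; -N skips running a remote command
ssh -N -D 1080 user@remote-host

# In another terminal, tools that honor proxy variables can use the tunnel
export ALL_PROXY=socks5://127.0.0.1:1080
```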

@daaniyaan commented on GitHub (Oct 3, 2023):

Found the problem/solution.
It looks like the ollama command wasn't respecting my proxy, but using curl to pull the model worked:
`curl -X POST http://localhost:11434/api/pull -d '{ "name": "mistral" }'`


@daaniyaan commented on GitHub (Oct 28, 2023):

Looks like this problem still exists. #743 didn't fix it?

[Screenshot: CleanShot 2023-10-28 at 19 31 44]


@mxyng commented on GitHub (Oct 30, 2023):

This error doesn't look quite the same as #915 since it's clearly encountering issues with the proxy. Can you make sure Ollama has been updated and is running the latest version? Also make sure whatever proxy you need is set wherever `ollama serve` is called. If you're using the app, you may need to stop it and run `ollama serve` directly.

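Concretely, the proxy variable has to be in the environment of the server process itself, not the terminal where the pull command runs. A minimal sketch, assuming a hypothetical HTTP proxy at 127.0.0.1:8118:

```
# Quit the menu-bar app first so it isn't holding port 11434, then start the
# server with the proxy variable set for that process
HTTPS_PROXY=http://127.0.0.1:8118 ollama serve

# In a second terminal; the CLI only talks to the server above,
# so the registry traffic now goes through the proxy
ollama pull mistral
```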

@daaniyaan commented on GitHub (Oct 31, 2023):

> This error doesn't look quite the same as #915 since it's clearly encountering issues with the proxy. Can you make sure ollama has been updated and running latest? Also make sure whatever proxy you need is set wherever ollama serve is called. If you're using the app, you may need to stop it and run ollama serve directly

Nothing happens after this!

[Screenshot: CleanShot 2023-10-31 at 12 39 52]
[Screenshot: CleanShot 2023-10-31 at 12 41 19]

Also, requesting "https://registry.ollama.ai/v2/library/zephyr/manifests/7b-alpha" results in this error: {"errors":[{"code":"MANIFEST_INVALID","message":"manifest invalid","detail":{}}]}


@daaniyaan commented on GitHub (Nov 3, 2023):

I'm getting the same error with Docker and a proxy.

[Screenshot]


@arashtavoosi commented on GitHub (Nov 9, 2023):

@daaniyaan It seems that they forbid access from some countries, including Iran. You can download the .gguf files directly from Hugging Face and create your own model as described in [import-from-gguf](https://github.com/jmorganca/ollama#import-from-gguf).

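For anyone taking that route, the flow is roughly the following sketch; the repository URL and model/file names are placeholders, not specific recommendations:

```
# Download a GGUF build of the model from Hugging Face (URL is a placeholder)
wget https://huggingface.co/<repo>/resolve/main/<model>.Q4_K_M.gguf

# Minimal Modelfile pointing at the downloaded file
echo "FROM ./<model>.Q4_K_M.gguf" > Modelfile

# Register it with the local Ollama server, then run it
ollama create mymodel -f Modelfile
ollama run mymodel
```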

@daaniyaan commented on GitHub (Nov 9, 2023):

> @daaniyaan It seems that they forbid access to some countries including Iran. you can download the .gguf files directly from hugging-face and create your own model as described in import-from-gguf.

Thank you. I've already tried that, but I didn't get really good results. I'm not sure if something was messed up in the process, or if the Zephyr model is bad, which I don't think is the case.

[Screenshot]


@marcellodesales commented on GitHub (Nov 11, 2023):

🚨 Same problem here: 403 from a Kubernetes Cluster

  • The calls from behind a kubernetes cluster...
  • It's successful from my laptop
```console
Run docker run --network host  -v $(pwd):$(pwd) -w $(pwd) -v $HOME/.ollama:/root/.ollama ollama/ollama pull llama2
  
Unable to find image 'ollama/ollama:latest' locally
latest: Pulling from ollama/ollama
aece8493d397: Already exists
f656c668328a: Pulling fs layer
d72375596185: Pulling fs layer
f656c668328a: Verifying Checksum
f656c668328a: Download complete
f656c668328a: Pull complete
d72375596185: Verifying Checksum
d72375596185: Download complete
d72375596185: Pull complete
Digest: sha256:732df4267d5e113a2ab9981973299edbf101460e4a1f5b431a7bb7ed635e2f37
Status: Downloaded newer image for ollama/ollama:latest
pulling manifest
Error: 403: 
Error: Process completed with exit code 1.
```

[Screenshot: 2023-11-10 at 4 24 22 PM]

👍 It works from my laptop

  • Not sure how to differentiate between the environments
  • I know we go through an egress firewall
```console
$ docker run --network host  -ti -v $(pwd):$(pwd) -w $(pwd) -v $HOME/.ollama:/root/.ollama ollama/ollama pull llama2
pulling manifest
pulling 22f7f8ef5f4c... 100% |██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| (3.8/3.8 GB, 6.2 TB/s)        
pulling 8c17c2ebb0ea... 100% |██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| (7.0/7.0 kB, 109 MB/s)        
pulling 7c23fb36d801... 100% |███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| (4.8/4.8 kB, 46 MB/s)        
pulling 2e0493f67d0c... 100% |█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| (59/59 B, 453 kB/s)        
pulling 2759286baa87... 100% |███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| (105/105 B, 2.4 MB/s)        
pulling 5407e3188df9... 100% |████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| (529/529 B, 10 MB/s)        
verifying sha256 digest
writing manifest
removing any unused layers
success
```

@mxyng commented on GitHub (Nov 17, 2023):

> also typing "https://registry.ollama.ai/v2/library/zephyr/manifests/7b-alpha" result in getting this error : {"errors":[{"code":"MANIFEST_INVALID","message":"manifest invalid","detail":{}}]}

The registry requires Accept headers to return a valid response, e.g.

```
$ curl https://registry.ollama.ai/v2/library/zephyr/manifests/7b-alpha
{"errors":[{"code":"MANIFEST_INVALID","message":"manifest invalid","detail":{}}]}
$ curl -H 'Accept:application/vnd.docker.distribution.manifest.v2+json' https://registry.ollama.ai/v2/library/zephyr/manifests/7b-alpha
{"schemaVersion":2,"mediaType":"application/vnd.docker.distribution.manifest.v2+json","config":{"mediaType":"application/vnd.docker.container.image.v1+json","digest":"sha256:7b3b2a09cd248598ad6de496587e452ce2792b56cfcfe67e1de12fe28d105eee","size":381},"layers":[{"mediaType":"application/vnd.ollama.image.model","digest":"sha256:155ebc41bb3029316fd71d42843a5326876ae425b07a4039c15953ecf88baabc","size":4108916384},{"mediaType":"application/vnd.ollama.image.template","digest":"sha256:e49aa37df5b4e21ae1aa75210dbc02fbcb7c99da7d5331f25b0012ca1eb5af50","size":72},{"mediaType":"application/vnd.ollama.image.params","digest":"sha256:53b998086229660c93b50334a0ecdc4ec22e898e40b90a91ee8c8a1031ea41ed","size":27}]}
```

> Getting the same error with docker and using proxy

The screenshot shows `http_proxy` and `https_proxy` being set in the shell before running `docker exec`. This has no effect, for two reasons:

  1. `docker exec` creates a shell inside the Docker container. Environment variables not explicitly passed to the container will not be set inside the container.
  2. The environment variables must be set for `ollama serve`. Setting them for `pull` or other operations has no effect.

More details on using Ollama behind a proxy are here (see also the sketch below): https://github.com/jmorganca/ollama/blob/main/docs/faq.md#how-do-i-use-ollama-behind-a-proxy

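In practice that means passing the proxy to the container when it is created, along these lines (the proxy address is a placeholder):

```
# The server process inside the container inherits these variables at startup
docker run -d \
  -e HTTPS_PROXY=http://proxy.example.com:3128 \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama ollama/ollama

# ollama pull only talks to that server, so the download now goes through the proxy
docker exec -it ollama ollama pull llama2
```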
> It seems that they forbid access to some countries including Iran.

We don't explicitly block any location, region, or country. However, the backing cloud service (Cloudflare in this case) might block certain locations.


@marcellodesales commented on GitHub (Nov 20, 2023):

@mxyng It would be great to understand whether Cloudflare is to blame, and to get more information about it... We have Artifactory proxying Docker Hub and multiple online Docker registries, and we don't have problems pulling from them... Is it possible to collect more information about this?


@mxyng commented on GitHub (Nov 20, 2023):

The potential issue with Cloudflare I mentioned specifically relates to the earlier comment about geoblocking.

In your case, the most likely issue, without knowing more about your environment, is that HTTPS_PROXY is configured for the Docker host but not the container. `docker pull` works because it uses the system proxy settings, while `ollama pull` doesn't because the ollama server is running inside a container without those proxy settings (or certificates).


@OM-EL commented on GitHub (Jan 24, 2024):

I'm encountering a similar issue while trying to run an Ollama container behind a proxy. Here are the steps I've taken and the issues I've faced:

  1. Creating an image with the certificate:

```
cat Dockerfile
FROM ollama/ollama
COPY my-ca.pem /usr/local/share/ca-certificates/my-ca.crt
RUN update-ca-certificates
```

  2. Starting a container using this image with the proxy variables injected:

```
docker run -d \
  -e HTTPS_PROXY=http://x.x.x.x:3128 \
  -e HTTP_PROXY=http://x.x.x.x:3128 \
  -e http_proxy=http://x.x.x.x:3128 \
  -e https_proxy=http://x.x.x.x:3128 \
  -p 11434:11434 ollama-with-ca
```
  3. Inside the container:

    • Ran `apt-get update` to confirm internet access and proper proxy functionality.
    • Executed `ollama pull mistral` and `ollama run mistral:instruct`, but consistently encountered the error: "Error: something went wrong, please see the Ollama server logs for details."
    • Container logs (`docker logs 8405972b3d6b`) showed no errors, only the following information:

```
Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDppYjymfVcdtDNT/umLfrzlIx1QquQ/gTuSI7SAV194
2024/01/24 08:40:55 images.go:808: total blobs: 0
2024/01/24 08:40:55 images.go:815: total unused blobs removed: 0
2024/01/24 08:40:55 routes.go:930: Listening on [::]:11434 (version 0.1.20)
2024/01/24 08:40:56 shim_ext_server.go:142: Dynamic LLM variants [cuda]
2024/01/24 08:40:56 gpu.go:88: Detecting GPU type
2024/01/24 08:40:56 gpu.go:203: Searching for GPU management library libnvidia-ml.so
2024/01/24 08:40:56 gpu.go:248: Discovered GPU libraries: []
2024/01/24 08:40:56 gpu.go:203: Searching for GPU management library librocm_smi64.so
2024/01/24 08:40:56 gpu.go:248: Discovered GPU libraries: []
2024/01/24 08:40:56 routes.go:953: no GPU detected
```
      
  4. Using wget to download the model:

    • Successfully downloaded "mistral-7b-instruct-v0.1.Q5_K_M.gguf" via wget.
    • Created a simple Modelfile:

```
FROM /home/mistral-7b-instruct-v0.1.Q5_K_M.gguf
```

    • Executed `ollama create mistralModel -f Modelfile`, resulting in the same error: "Error: something went wrong, please see the Ollama server logs for details."
    • The logs from `docker logs 8405972b3d6b` again showed no errors (same output as above).

When making an HTTP request to the Ollama server in my browser, I get "Ollama running".

I also found that even `ollama list` gives the same error ("Error: something went wrong, please see the ollama server logs for details") and still no logs.

I did not find any logs in the files where Ollama saves logs; the only logs are the Docker logs, and they contain nothing.


@mehdi395 commented on GitHub (Apr 12, 2024):

> > @daaniyaan It seems that they forbid access to some countries including Iran. you can download the .gguf files directly from hugging-face and create your own model as described in import-from-gguf.
>
> Thank you. I've already tried that, but I didn't get really good results. I'm not sure if something was messed up in the process, or if the Zephyr model is bad, which I don't think is the case.

This worked for me: delete the model you downloaded previously, and use a more valid IP right from the beginning when you download it again. It'll be fine then.


@AliBigdeli0 commented on GitHub (Apr 23, 2024):

===>For users in Iran<===
If Ollama is installed on your machine as a daemon or service, stop it. In most Linux distributions you can stop the service with:

`sudo systemctl stop ollama`

Then open a terminal and set your proxy information like this:

`export ALL_PROXY=<your proxy address and port>`

Be sure you are in the same terminal, then run Ollama with:

`ollama serve`

You can then use Ollama from another terminal (or run it as a background process) and download your LLM using `ollama run llm_name`.

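As a concrete instance of the recipe above (the proxy address is hypothetical, and whether it works depends on your proxy being reachable):

```
sudo systemctl stop ollama

# Same terminal: export the proxy, then start the server in the foreground
export ALL_PROXY=socks5://127.0.0.1:1080
ollama serve

# From a second terminal
ollama run llama2
```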

@ImSheida commented on GitHub (Apr 25, 2024):

> ===>For users in Iran<=== IF ollama is installed on your machine as a daemon or service, stop it, In most Linux distributions you can stop the service by executing the following command:
>
> `sudo systemctl stop ollama`
>
> then open a terminal, and set your proxy information like this:
>
> `export ALL_PROXY=<your proxy address and port>`
>
> Be sure you are in the same Terminal then you can run the ollama using the following command:
>
> `ollama serve`
>
> you can run the ollama from another terminal (or you can run it as a background process and then download your LLM using the `ollama run llm_name`)

I have the same issue trying to run llama2 in Iran.
I just installed Ollama and now can't run any model, such as llama2.
I tried Shecan, 403.online, and other DNS services, and even used a VPN, but I get the same error: "this client does not have permission to get URL".

Any recommendation on how I can solve this problem?


@MKdir98 commented on GitHub (Apr 30, 2024):

> ===>For users in Iran<=== IF ollama is installed on your machine as a daemon or service, stop it, In most Linux distributions you can stop the service by executing the following command:
>
> `sudo systemctl stop ollama`
>
> then open a terminal, and set your proxy information like this:
>
> `export ALL_PROXY=<your proxy address and port>`
>
> Be sure you are in the same Terminal then you can run the ollama using the following command:
>
> `ollama serve`
>
> you can run the ollama from another terminal (or you can run it as a background process and then download your LLM using the `ollama run llm_name`)

Thanks Ali jan


@LucianoVandi commented on GitHub (May 24, 2024):

I have the same issue from Hetzner servers. The pull only works sporadically: most of the time I get the error `Your client does not have permission to get URL '/token' from this server`, then suddenly it works, and then it stops working again.

It always works without any problems on my machine.

Any suggestions?


@arwinvdv commented on GitHub (Jun 3, 2024):

@LucianoVandi Same problem here on a Hetzner server. After installing it manually, I get this error when I run `ollama run llama3`:

```
pulling manifest
Error: pull model manifest: 403:
<html><head>
<meta http-equiv="content-type" content="text/html;charset=utf-8">
<title>403 Forbidden</title>
</head>
<body text=#000000 bgcolor=#ffffff>
<h1>Error: Forbidden</h1>
<h2>Your client does not have permission to get URL <code>/token</code> from this server.</h2>
<h2></h2>
</body></html>
```

```
ollama version is 0.1.41
```

I think the IP range is blocked? 88.198.**

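One way to check whether it is the server's IP being rejected (rather than anything Ollama-specific) is to query the registry directly from that host, using the manifest endpoint and Accept header mentioned earlier in the thread; the model and tag here are just examples:

```
# A blocked IP gets the same HTML 403 page; an allowed one gets JSON manifest data back
curl -i -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
  https://registry.ollama.ai/v2/library/llama3/manifests/latest
```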

@albertsikkema commented on GitHub (Jun 15, 2024):

I'm running into the same problem: using a Hetzner-hosted VPS, I get 403 for all types of requests from that IP. I'm assuming this is blocked by the Ollama website/hosting?


@marcellodesales commented on GitHub (Jun 25, 2024):

I have created a hack to back up the Ollama models from my machine as Docker images :) Then I pull them and place them in the volume used by any Ollama server :)

https://gist.github.com/marcellodesales/7be67c13d6799628dcb6954155fbd765#cache-ollama-models-as-docker-data-images

The problem is that we deploy Ollama in Kubernetes and the IP address is not a public one... So, at this point, this is my workaround... The manual steps are described in the gist above.

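A Docker-free variant of the same idea is to pull on a machine that can reach the registry and copy the model store into the volume the server uses. A rough sketch, assuming the default storage locations (`~/.ollama` on the workstation, `/root/.ollama` inside the container):

```
# On a machine with registry access (models live under ~/.ollama/models by default)
ollama pull llama2
tar -C ~/.ollama -czf ollama-models.tgz models

# On the restricted host, unpack into the directory mounted at /root/.ollama
tar -C /path/to/ollama-volume -xzf ollama-models.tgz
```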

@thaikolja commented on GitHub (Nov 3, 2024):

No solution yet for hosting it on my web server? Debian 12, Plesk, latest version of everything.

Reference: github-starred/ollama#46815