[GH-ISSUE #2168] Issues Running Ollama Container Behind Proxy - No Error Logs Found #47752

Closed
opened 2026-04-28 05:10:06 -05:00 by GiteaMirror · 10 comments

Originally created by @OM-EL on GitHub (Jan 24, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2168

I'm encountering issues while trying to run an Ollama container behind a proxy. Here are the steps I've taken and the issues I've faced:

  1. Creating an Image with Certificate:

    cat Dockerfile
    FROM ollama/ollama
    COPY my-ca.pem /usr/local/share/ca-certificates/my-ca.crt
    RUN update-ca-certificates
    
  2. Starting a Container Using This Image with Proxy Variables Injected:

    docker run -d \
    -e HTTPS_PROXY=http://x.x.x.x:3128 \
    -e HTTP_PROXY=http://x.x.x.x:3128 \
    -e http_proxy=http://x.x.x.x:3128 \
    -e https_proxy=http://x.x.x.x:3128 \
    -p 11434:11434 ollama-with-ca
    
  3. Inside the Container:

    • Ran apt-get update to confirm internet access and proper proxy functionality.
    • Executed ollama pull mistral and ollama run mistral:instruct, but consistently encountered the error: "Error: something went wrong, please see the Ollama server logs for details."
    • Container logs (docker logs 8405972b3d6b) showed no errors, only the following information:
      Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
      Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDppYjymfVcdtDNT/umLfrzlIx1QquQ/gTuSI7SAV194
      2024/01/24 08:40:55 images.go:808: total blobs: 0
      2024/01/24 08:40:55 images.go:815: total unused blobs removed: 0
      2024/01/24 08:40:55 routes.go:930: Listening on [::]:11434 (version 0.1.20)
      2024/01/24 08:40:56 shim_ext_server.go:142: Dynamic LLM variants [cuda]
      2024/01/24 08:40:56 gpu.go:88: Detecting GPU type
      2024/01/24 08:40:56 gpu.go:203: Searching for GPU management library libnvidia-ml.so
      2024/01/24 08:40:56 gpu.go:248: Discovered GPU libraries: []
      2024/01/24 08:40:56 gpu.go:203: Searching for GPU management library librocm_smi64.so
      2024/01/24 08:40:56 gpu.go:248: Discovered GPU libraries: []
      2024/01/24 08:40:56 routes.go:953: no GPU detected
      
  4. Using Wget to Download the Model:

    • Successfully downloaded "mistral-7b-instruct-v0.1.Q5_K_M.gguf" via wget.
    • Created a simple Modelfile:
      FROM /home/mistral-7b-instruct-v0.1.Q5_K_M.gguf
      
    • Executed ollama create mistralModel -f Modelfile, resulting in the same error: "Error: something went wrong, please see the Ollama server logs for details."
    • The logs from docker logs 8405972b3d6b again showed no error:
      Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
      Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDppYjymfVcdtDNT/umLfrzlIx1QquQ/gTuSI7SAV194
      2024/01/24 08:40:55 images.go:808: total blobs: 0
      2024/01/24 08:40:55 images.go:815: total unused blobs removed: 0
      2024/01/24 08:40:55 routes.go:930: Listening on [::]:11434 (version 0.1.20)
      2024/01/24 08:40:56 shim_ext_server.go:142: Dynamic LLM variants [cuda]
      2024/01/24 08:40:56 gpu.go:88: Detecting GPU type

    When making an HTTP request to the Ollama server in my browser, I get "Ollama is running".

    I also found that even "ollama list" gives the same error ("Error: something went wrong, please see the ollama server logs for details") and still no logs.

    I did not find any logs in the files where Ollama normally saves them; the only logs are the Docker logs, and they contain nothing.
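
    A hedged diagnostic sketch for anyone reproducing this: compare the proxy variables the interactive shell sees with the ones the server process inherits (the container ID is the one from the logs above; the commands are illustrative, not output captured from this system):

      # In the official image, PID 1 is the ollama server process.
      docker exec 8405972b3d6b sh -c 'env | grep -i proxy'
      docker exec 8405972b3d6b sh -c 'tr "\0" "\n" < /proc/1/environ | grep -i proxy'
      # If HTTP_PROXY/http_proxy are set and NO_PROXY is not, the CLI's request to
      # localhost:11434 may be sent to the proxy instead of the local server.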


@mlewis1973 commented on GitHub (Jan 24, 2024):

see the closed ticket
https://github.com/ollama/ollama/issues/1337

IMHO it was closed without being resolved


@mlewis1973 commented on GitHub (Jan 24, 2024):

Interestingly, my HPC colleagues tell me that if you convert the Docker image to Singularity and run the ollama CLI commands as root (ollama list, pull, etc.), then the proxy settings do work correctly...
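
A minimal sketch of that Singularity workaround, assuming SingularityCE/Apptainer is available; the image name, proxy address, and model are illustrative, not taken from the report:

    # Pull the Docker image into a SIF file, then run the server and CLI from it.
    # Singularity passes the host environment through by default; SINGULARITYENV_*
    # variables can be used to set the proxy explicitly for the contained process.
    singularity pull ollama.sif docker://ollama/ollama
    SINGULARITYENV_HTTPS_PROXY=http://x.x.x.x:3128 singularity exec ollama.sif ollama serve &
    singularity exec ollama.sif ollama pull mistral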


@OM-EL commented on GitHub (Jan 24, 2024):

@mlewis1973
I convinced my manager at my company to give me a machine and let me deploy Ollama. Because they are a bank, they are very strict: I can't run processes as root on a machine; all I can do is run a container.

Now I am blocked by this issue. On my MacBook it worked with two commands.


@mxyng commented on GitHub (Jan 24, 2024):

Can you describe in detail the steps you took? In particular: 1) where the Ollama container is running (remote or local), 2) where the proxy settings are configured, and 3) where the Ollama CLI is executed and against which Ollama instance.

The lack of request logs indicates the request never made it from the CLI to the server. This could be a proxy setting (or the lack of one) on the CLI side, depending on where it's being executed.
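
As a concrete check, a hedged sketch of testing the two request legs separately from inside the container (the endpoint and flags are assumptions, not commands from this thread):

    # Leg 1: CLI -> local server, with the proxy bypassed for this one request.
    # A healthy server returns the JSON model list.
    curl --noproxy '*' http://127.0.0.1:11434/api/tags

    # Leg 2: outbound HTTPS, which should go through HTTPS_PROXY.
    # A 000 here means the TLS/proxy leg failed; any real HTTP status means it works.
    curl -sS -o /dev/null -w '%{http_code}\n' https://ollama.com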


@OM-EL commented on GitHub (Jan 25, 2024):

@mxyng @mlewis1973
First I created a Dockerfile based on the ollama image, just as the documentation describes for running Ollama in a container behind a proxy:

  • My Dockerfile:
    image: https://github.com/ollama/ollama/assets/36996895/ce87443c-91b4-4e54-bd65-a2f4d6d88f08

  • Documentation:
    image: https://github.com/ollama/ollama/assets/36996895/11bb7f03-a568-48a2-83ae-57088d1b2d5a

  • I then started a container from this image with the proxy set as environment variables:

    docker run -d -v ollama:/root/.ollama -e HTTPS_PROXY=http://x.x.x.x:3128 -e HTTP_PROXY=http://x.x.x.x:3128 -e http_proxy=http://x.x.x.x:3128 -e https_proxy=http://x.x.x.x:3128 -p 11434:11434 --name ollama ollama-with-ca

  • The Docker container is up; I then docker exec inside it:
    image: https://github.com/ollama/ollama/assets/36996895/6986cb54-6529-4fa3-ba28-793c091fd72b

  • Ollama seems to be up:
    image: https://github.com/ollama/ollama/assets/36996895/a9b65964-eaf2-4f8c-9d73-68ec56394282

  • I try to pull a model:
    image: https://github.com/ollama/ollama/assets/36996895/89f53d57-f07b-4338-baf7-223bb69d8453

  • When I exit the container and check the logs with "docker logs xxxx", I see this (no error seems to be logged):
    image: https://github.com/ollama/ollama/assets/36996895/85f7c177-2b8d-4642-b111-16b7c6d7967b
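
One more check worth doing at this point (a hedged sketch; the container name and file names follow the Dockerfile and run command above): confirm the custom CA actually landed in the trust store, since a TLS-intercepting proxy will break pulls if it did not.

    # update-ca-certificates folds /usr/local/share/ca-certificates/*.crt into the system bundle.
    docker exec ollama ls -l /usr/local/share/ca-certificates/
    docker exec ollama grep -c "BEGIN CERTIFICATE" /etc/ssl/certs/ca-certificates.crt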


@mxyng commented on GitHub (Jan 25, 2024):

By setting HTTP_PROXY and running ollama subcommands inside the docker container, the CLI's request to the local server is also routed through your proxy. You should remove HTTP_PROXY but keep HTTPS_PROXY. This will still apply the proxy to HTTPS requests, i.e. the external requests to pull the image.

Here's a simple example using a local, mitm proxy:

  1. Create a docker network so the containers can communicate with each other:

    docker network create ollama
    
  2. Create the proxy container. Since mitmproxy generates and uses a self-signed certificate, expose it so Ollama can use it later:

    docker run -d -v ./mitmproxy:/home/mitmproxy/.mitmproxy --name mitmproxy --net ollama mitmproxy/mitmproxy mitmdump
    
  3. Create the ollama container, mounting the mitmproxy self-signed certificate and setting HTTPS_PROXY:

    docker run -d -v ./mitmproxy/mitmproxy-ca.pem:/usr/local/share/ca-certificates/mitmproxy-ca.crt --name ollama --net ollama -e HTTPS_PROXY=http://mitmproxy:8080 --entrypoint sh ollama/ollama -c 'update-ca-certificates; ollama serve'
    
  4. In the container, set HTTP_PROXY and try ollama list

    export HTTP_PROXY=http://mitmproxy:8080
    ollama list
    

    This list will error with the message you described.

  5. Unset HTTP_PROXY and retry ollama list:

    unset HTTP_PROXY
    ollama list
    

    This list should succeed since it's no longer using the proxy to communicate with the server.
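
For completeness, a hedged alternative to dropping HTTP_PROXY entirely is to keep it but exempt local traffic with NO_PROXY, which Go's standard proxy handling honors (untested against this exact setup; the proxy address and image name follow the original report):

    docker run -d -v ollama:/root/.ollama \
      -e HTTPS_PROXY=http://x.x.x.x:3128 \
      -e HTTP_PROXY=http://x.x.x.x:3128 \
      -e NO_PROXY=localhost,127.0.0.1 \
      -e no_proxy=localhost,127.0.0.1 \
      -p 11434:11434 --name ollama ollama-with-ca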


@jan-schaeffer commented on GitHub (Feb 13, 2024):

> By setting HTTP_PROXY and running ollama subcommands inside the docker container, the CLI's request to the local server is also routed through your proxy. You should remove HTTP_PROXY but keep HTTPS_PROXY. This will still apply the proxy to HTTPS requests, i.e. the external requests to pull the image.

Just removing HTTP_PROXY from my docker-compose fixed this issue for me.
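
For readers on Compose, a minimal sketch of what that change might look like (the service name, volume path, and proxy address are assumptions, not the poster's actual file):

# Write a minimal compose file that sets HTTPS_PROXY only, then start it.
cat > docker-compose.yml <<'EOF'
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ./ollama:/root/.ollama
    environment:
      # HTTP_PROXY deliberately omitted so the CLI can still reach the local server
      HTTPS_PROXY: http://x.x.x.x:3128
EOF
docker compose up -d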


@mxyng commented on GitHub (Mar 11, 2024):

Going to close this since there hasn't been an update in a while. If it continues to be a problem, please open a new issue


@lishunan246 commented on GitHub (Jun 20, 2024):

Removing HTTP_PROXY works for me too.


@devarshi16 commented on GitHub (Oct 17, 2024):

By just modifying the docker run command and not adding HTTP_PROXY to it, we can have it running as well:

docker run -d --gpus '"device=2,4,5"' -v ./ollama:/root/.ollama -e HTTPS_PROXY=http://<your_proxy_ip>:8080 -p 11434:11434 --name ollama ollama/ollama
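
A quick way to confirm a setup like this works end to end (hedged sketch; the model name is illustrative):

    docker exec -it ollama ollama pull mistral
    docker exec -it ollama ollama list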