[GH-ISSUE #12272] issue: #55193

Closed
opened 2026-05-05 17:17:28 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @Mozartuss on GitHub (Apr 1, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/12272

Check Existing Issues

  • I have searched the existing issues and discussions.
  • I am using the latest version of Open WebUI.

Installation Method

Git Clone

Open WebUI Version

v0.5.20

Ollama Version (if applicable)

0.6.2

Operating System

Ubuntu 24.04 / Docker

Browser (if applicable)

No response

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have listed steps to reproduce the bug in detail.

Expected Behavior

The model should download through the UI, just like running docker exec -it ollama ollama pull gemma3:latest does, which works perfectly.
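For reference, the docker exec pull above corresponds to a single HTTP call against Ollama's documented POST /api/pull endpoint. The sketch below only builds that request without sending it (the in-container base URL http://ollama:11434 is taken from OLLAMA_BASE_URL in the compose file; the JSON body shape follows the Ollama API docs):

```python
import json
import urllib.request

def build_pull_request(base_url: str, model: str) -> urllib.request.Request:
    """Build (but do not send) the POST /api/pull request Ollama expects."""
    body = json.dumps({"model": model}).encode()
    return urllib.request.Request(
        url=f"{base_url}/api/pull",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Same target as OLLAMA_BASE_URL in the compose file below.
req = build_pull_request("http://ollama:11434", "gemma3:latest")
print(req.full_url)      # http://ollama:11434/api/pull
print(req.get_method())  # POST
```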

Actual Behavior

I use an Nginx reverse proxy to secure my Open WebUI instance, as well as the Ollama API and ComfyUI. Everything works normally, but I can't download models directly through Open WebUI. Surprisingly, I can't see any error in the open-webui Docker log, only in the Ollama logs.

It also surprised me that this Ollama API call is made from my public IP instead of localhost.

When I enter the model I want, for example gemma3:latest, it makes an Ollama API call to /api/pull/0.

The following is my nginx config:

server {
    listen 80;
    server_name ai.ai.tha.de;

    return 302 https://$host$request_uri;
} 

server {
    listen [::]:443 ssl;
    listen 443 ssl;
    http2 on;

    client_max_body_size 0;
    server_name MY_DOMAIN;

    ssl_certificate /etc/letsencrypt/live/MY_DOMAIN/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/MY_DOMAIN/privkey.pem;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_stapling on;
    ssl_stapling_verify on;
    add_header Strict-Transport-Security max-age=15768000;

    location / {
        proxy_pass http://open-webui:8080;

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_buffering off;

        proxy_read_timeout 600;
        proxy_send_timeout 600;
    }

    location /openwebui/api {
        proxy_buffering off;
        proxy_set_header Origin '';
        proxy_set_header Referer '';
        proxy_pass http://open-webui:8080/api;

        proxy_read_timeout 600;
        proxy_send_timeout 600;
     }

    location /ollama/api {
        proxy_pass http://ollama:11434/api;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering off;

        # Increase timeouts to prevent premature gateway errors
        proxy_connect_timeout 300s;
        proxy_send_timeout 300s;
        proxy_read_timeout 300s;
        send_timeout 300s;

        # Disable request buffering for real-time streaming responses
        proxy_request_buffering off;

        # Prevent dropped connections
        proxy_http_version 1.1;
        proxy_set_header Connection "Keep-Alive";
        proxy_set_header Proxy-Connection "Keep-Alive";

        # Avoid caching issues
        proxy_set_header Cache-Control "no-cache";

        # Enable retries if the server is slow
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
    }

    location /openwebui/api/v1 {
        proxy_pass http://open-webui:8080/api/v1;
    }
}

And my docker-compose.yml:

services:
  ollama:
    image: ollama/ollama:${OLLAMA_DOCKER_TAG-latest}
    container_name: ollama
    pull_policy: always
    tty: true
    restart: unless-stopped
    volumes:
      - ollama:/root/.ollama
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 11434
    environment:
      - OLLAMA_DEBUG=0
      - OLLAMA_FLASH_ATTENTION=1
      - OLLAMA_LOAD_TIMEOUT=30m
      - OLLAMA_NEW_ENGINE=1
      - OLLAMA_NUM_PARALLEL=4
      - OLLAMA_KEEP_ALIVE=30m
      - OLLAMA_MAX_LOADED_MODELS=7
      - OLLAMA_CONTEXT_LENGTH=4096
    logging:
      driver: json-file
      options:
        max-size: "5m"
        max-file: "2"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["0", "1", "2", "3", "4", "5", "6", "7"]
              capabilities: 
                - gpu
    networks:
      - internet
  

  open-webui:
    image: ghcr.io/open-webui/open-webui:${WEBUI_DOCKER_TAG-main}
    container_name: open-webui
    volumes:
      - /var/ollama/open-webui:/app/backend/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    depends_on:
      - ollama
    ports:
      - 8080
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
      - FORWARDED_ALLOW_IPS=*
      - 'WEBUI_SECRET_KEY='
      - ENABLE_AUTOCOMPLETE_GENERATION=False
    restart: unless-stopped
    extra_hosts:
      - host.docker.internal:host-gateway
    logging:
      driver: json-file
      options:
        max-size: "5m"
        max-file: "2"
    networks:
      - internet
  
  nginx:
    image: nginx:alpine
    container_name: reverse-proxy
    ports:
      - "80:80"
      - "443:443"
    restart: unless-stopped
    depends_on:
      - open-webui
    volumes:
      - ./open-webui.conf:/etc/nginx/conf.d/open-webui.conf
      - ./certbot/letsencrypt:/etc/letsencrypt:ro
      - /etc/ssl/certs/dhparam.pem:/etc/ssl/certs/dhparam.pem:ro
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    logging:
      driver: json-file
      options:
        max-size: "5m"
        max-file: "2"
    networks:
      - internet

# Top-level definitions for the named volume and network referenced above
# (omitted from the snippet as posted):
volumes:
  ollama:

networks:
  internet:

Steps to Reproduce

  1. Use the docker-compose file together with the nginx config above.
  2. Try to download a model through Open WebUI.

Logs & Screenshots

Ollama logs

....
[GIN] 2025/04/01 - 09:11:37 | 200 |  787.609778ms |      172.18.0.4 | POST     "/api/chat"
[GIN] 2025/04/01 - 09:11:54 | 200 | 13.983479806s |      172.18.0.4 | POST     "/api/chat"
[GIN] 2025/04/01 - 09:11:54 | 200 |  669.648151ms |      172.18.0.4 | POST     "/api/chat"
[GIN] 2025/04/01 - 09:11:55 | 200 |  900.411233ms |      172.18.0.4 | POST     "/api/chat"
[GIN] 2025/04/01 - 09:19:59 | 404 |      12.482µs |     <public_ip> | POST     "/api/pull/0"
[GIN] 2025/04/01 - 09:22:06 | 200 |     177.282µs |     <public_ip> | GET      "/api/version"
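The relevant fields can be pulled out of those GIN access-log lines with a small parser (the line shape assumed here is just status | latency | client | METHOD "path", matching the lines above); it shows the failing request came in with status 404, from the public IP, on /api/pull/0:

```python
import re

# Matches the GIN access-log lines shown above:
# [GIN] date - time | status | latency | client | METHOD "path"
GIN_LINE = re.compile(
    r'\[GIN\]\s+\S+ - \S+ \| (?P<status>\d+) \|\s*(?P<latency>\S+) \|'
    r'\s*(?P<client>\S+) \| (?P<method>\S+)\s+"(?P<path>[^"]+)"'
)

line = '[GIN] 2025/04/01 - 09:19:59 | 404 |      12.482µs |     <public_ip> | POST     "/api/pull/0"'
m = GIN_LINE.match(line)
print(m["status"], m["client"], m["path"])  # 404 <public_ip> /api/pull/0
```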

Additional Information

No response

GiteaMirror added the bug label 2026-05-05 17:17:28 -05:00
Reference: github-starred/open-webui#55193