[GH-ISSUE #87] Doesn't work behind cloudflare + reverse proxy #27457

Closed
opened 2026-04-25 02:08:46 -05:00 by GiteaMirror · 5 comments

Originally created by @semioniy on GitHub (Nov 11, 2023).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/87

Describe the bug
When accessed over the LAN, the webui works properly. When accessed through Cloudflare + a reverse proxy, the way I access all the applications running on my virtualization NAS, the webui seems unable to find Ollama by its address. (If I had to guess, something in the webui doesn't use the `Ollama Server URL` variable and instead tries to resolve Ollama's address by itself, which fails because it gets my domain instead of localhost.)

To Reproduce

  • Run `docker-compose up` from the cloned git repo (a minimal sketch of such a setup follows this list).
  • In your reverse proxy, create a record for accessing the webui from the outside internet.
  • Open the webui both over the LAN and using the domain name.
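
For context, a minimal sketch of the kind of compose setup being described; the service names, image tags, published port, and the `OLLAMA_API_BASE_URL` value are assumptions for illustration, not taken from the reporter's actual file:

```
# Hypothetical docker-compose.yaml illustrating the reported setup.
# Names, images, port, and OLLAMA_API_BASE_URL are assumptions.
version: '3.8'

services:
  ollama:
    image: ollama/ollama:latest
    volumes:
      - ollama:/root/.ollama
    # Not published to the host; only reachable on the compose network.

  ollama-webui:
    image: ghcr.io/ollama-webui/ollama-webui:main
    ports:
      - "3000:8080"   # the reverse proxy maps the outside domain to this port
    environment:
      # Assumed env var from the ollama-webui compose file of that era;
      # which side (browser or backend) uses it varies by version.
      - OLLAMA_API_BASE_URL=http://ollama:11434/api
    depends_on:
      - ollama

volumes:
  ollama: {}
```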

Expected behavior
Both instances of webui work the same

Actual behavior
The webui over LAN can find Ollama's endpoint, and you can download models / make prompts.
The webui over the domain name can't find Ollama's endpoint: the default value of `Ollama Server URL` is `http://<your domain name>:11434/api`, which cannot be reached; values using localhost or 127.0.0.1 don't work either, and you can't download models / select downloaded ones.

Screenshots
![image](https://github.com/ollama-webui/ollama-webui/assets/21012972/781cf29a-a756-45ca-9d79-07f885e495b9)
![image](https://github.com/ollama-webui/ollama-webui/assets/21012972/1bc84cab-07d9-4850-b7f4-19d8bfc2f17c)
![image](https://github.com/ollama-webui/ollama-webui/assets/21012972/ea173451-943e-4f18-86a3-76913720387c)
![image](https://github.com/ollama-webui/ollama-webui/assets/21012972/e89fb6e0-7cd9-4457-a366-9bb447e0fbd2)
![image](https://github.com/ollama-webui/ollama-webui/assets/21012972/8d5d4830-8813-4add-9cf8-56d74b628f02)

Desktop (please complete the following information):

  • Ubuntu server 22.04
  • Chrome
  • Docker & docker-compose
  • Cloudflare
  • nginx reverse proxy

@tjbck commented on GitHub (Nov 11, 2023):

Hi, it seems like you're experiencing this issue because you're trying to reach an HTTP endpoint from an HTTPS page. Make sure to also put Ollama behind the reverse proxy and it should solve your issue. Keep us updated. Thanks!
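
A minimal sketch of that suggestion using the nginx reverse proxy mentioned in the report; the server name, certificate paths, and upstream address are placeholders, not settings from the thread:

```
# Hypothetical nginx server block: serve Ollama over HTTPS on a subdomain
# so the browser never mixes an HTTPS page with an HTTP API call.
server {
    listen 443 ssl;
    server_name ollama.example.com;           # placeholder domain

    ssl_certificate     /etc/ssl/certs/ollama.example.com.pem;   # placeholder paths
    ssl_certificate_key /etc/ssl/private/ollama.example.com.key;

    location / {
        proxy_pass http://127.0.0.1:11434;    # Ollama's default port on the host
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```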


@semioniy commented on GitHub (Nov 11, 2023):

Hi, @tjbck. You mean so that the webui and Ollama would communicate over the internet rather than over the LAN? If so, then that's... a very roundabout solution. And one that, AFAIK, won't work, because Cloudflare returns this HTML page if your server doesn't respond to GET requests on port 80/443, and Ollama's endpoint, even if put behind the reverse proxy with the mapping 11434:80, won't respond to GET requests.
![image](https://github.com/ollama-webui/ollama-webui/assets/21012972/a9108b89-7ebd-4aea-8c3a-f243b35f5375)
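
For reference, one way to probe that claim from outside; the hostname `ollama.example.com` is a placeholder, and Ollama normally answers a GET on its root path with a short "Ollama is running" status:

```
# Reachability check through the proxy; hostname is a placeholder.
# A healthy Ollama behind the proxy typically replies "Ollama is running".
curl -i https://ollama.example.com/

# Listing local models exercises a real API path.
curl https://ollama.example.com/api/tags
```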


@tjbck commented on GitHub (Nov 11, 2023):

Hi, the issue is occurring because you're accessing the webui over HTTPS while the Ollama server is served over HTTP, and most browsers don't allow a page served over HTTPS to connect to an insecure endpoint. What I would suggest in your situation is to use reverse proxy software like nginx and expose port 11434 behind the proxy. Alternatively, try to access the webui over HTTP and everything should work fine in your case. Keep us updated. Thanks.
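
A variant of the same idea, sketched under the assumption that the webui and Ollama share one nginx server block; the `/ollama/` path prefix is an illustration, not a setting from the thread:

```
# Hypothetical: expose Ollama under a path of the existing HTTPS site, so the
# "Ollama Server URL" setting could point at https://your.domain/ollama/api.
location /ollama/ {
    proxy_pass http://127.0.0.1:11434/;   # trailing slash strips the /ollama/ prefix
    proxy_set_header Host $host;
}
```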


@oliverbob commented on GitHub (Nov 17, 2023):

```
<IfModule mod_ssl.c>
    <VirtualHost *:443>
        ServerName models.example.com

        # Proxy to the Ollama backend over plain HTTP
        ProxyPass / http://example:11434/ nocanon
        ProxyPassReverse / http://example:11434/ nocanon

        ProxyPreserveHost On

        ErrorLog ${APACHE_LOG_DIR}/models_error.log
        CustomLog ${APACHE_LOG_DIR}/models_access.log combined

        SSLEngine on
        SSLCertificateFile /etc/letsencrypt/live/models.example.com/fullchain.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/models.example.com/privkey.pem
        Include /etc/letsencrypt/options-ssl-apache.conf
    </VirtualHost>
</IfModule>
```

```
sudo a2enmod ssl
sudo a2enmod proxy
sudo a2enmod proxy_http
sudo a2enmod proxy_wstunnel    # enable if you need WebSocket support
sudo systemctl restart apache2 # restart Apache to apply changes
```

Then in your local terminal try it with:

```
curl -X POST https://models.example.com/api/generate -d '{
  "model": "orca-mini",
  "prompt": "Why is the sky blue?"
}'
```

You should now be able to connect to your API.

If you have some more questions, you can ask your model.


@tjbck commented on GitHub (Nov 17, 2023):

This issue should've been resolved by #111. Keep us updated. Thanks!

Reference: github-starred/open-webui#27457