[GH-ISSUE #1053] Requesting support for basic auth or API key authentication #47028

Closed
opened 2026-04-28 02:46:28 -05:00 by GiteaMirror · 25 comments

Originally created by @sebiweise on GitHub (Nov 9, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1053

It would be great to have some sort of authentication in front of the Ollama API. Currently I'm using Nginx Proxy Manager to add an Access List to prevent unauthorized access, but a standard way implemented in Ollama itself would be great for all developers who are integrating Ollama into their software.

GiteaMirror added the feature request label 2026-04-28 02:46:28 -05:00

@priamai commented on GitHub (Nov 20, 2023):

Please, that would be fantastic!


@johnnyq commented on GitHub (Feb 8, 2024):

Yes, I'm also looking for API key support. We're currently working on integrating Ollama with our open-source project ITFlow, and it would be a life saver. Thanks


@d3cline commented on GitHub (Feb 27, 2024):

I read #849, but this should be baked into the main app IMO; as others have stated, it's better for standardization.


@tweedge commented on GitHub (Mar 4, 2024):

+1, IMHO Ollama should offer some sort of simple, native authentication. While I understand that the team is designing around local access, publicly accessible Ollama instances are already showing up on the internet:

  • https://www.shodan.io/search?query=%22Ollama+is+running%22
  • https://search.censys.io/search?resource=hosts&sort=RELEVANCE&per_page=25&virtual_hosts=EXCLUDE&q=%22Ollama+is+running%22

Edit: and software using Ollama is encouraging people to make their instances internet-accessible:

  • https://github.com/sugarforever/chat-ollama/blob/main/pages/index.vue#L12-14

@AnthonMS commented on GitHub (Mar 4, 2024):

@sebiweise Can you provide the nginx config for proxying the traffic to Ollama? I am new to the world of nginx and proxy servers, but I really want this set up with some basic authentication, and I already have nginx serving other services. Sorry for any inconvenience.

For example, I have ollama running on the host 192.168.1.100 and nginx running on host 192.168.1.200

I have tried with some basic configs like these

server {
    listen 11434;

    location / {
        auth_basic "Restricted Content";
        auth_basic_user_file /etc/nginx/.htpasswd;

        proxy_pass http://192.168.1.100:11434;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

But I am not getting a connection. I have set the OLLAMA_HOST variable on the host running Ollama to 0.0.0.0, and I can connect to it on the local network by going to its IP and port.


@p-try commented on GitHub (Mar 15, 2024):

@sebiweise This doesn't look too bad. But:

  • You use the same port for nginx and ollama which doesn't work. Change the "listen" directive to another port, e.g. 11435. You may also reset OLLAMA_HOST to the original value (as it will only receive connections from localhost once the proxy is set up).
  • Basic Auth will probably not work with most API clients. Instead, use JWT authentication (https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-jwt-authentication/).

@Kavan72 commented on GitHub (Apr 24, 2024):

> @sebiweise This doesn't look too bad. But:
>
>   • You use the same port for nginx and ollama which doesn't work. Change the "listen" directive to another port, e.g. 11435. You may also reset OLLAMA_HOST to the original value (as it will only receive connections from localhost once the proxy is set up).
>   • Basic Auth will probably not work with most API clients. Instead, use JWT authentication.

or just a basic API key, similar to how OpenAI uses authentication on their client.


@AnthonMS commented on GitHub (Apr 24, 2024):

> @sebiweise This doesn't look too bad. But:
>
>   • You use the same port for nginx and ollama which doesn't work. Change the "listen" directive to another port, e.g. 11435. You may also reset OLLAMA_HOST to the original value (as it will only receive connections from localhost once the proxy is set up).
>   • Basic Auth will probably not work with most API clients. Instead, use JWT authentication.

I use the same port on two different host machines, correct. The Ollama service is not running on the same host as nginx, so this is fine.


@bartolli commented on GitHub (Jun 5, 2024):

I spent a few days trying to get the Ollama Go server to work with native api_key authentication but had no luck. So, I ended up making a Docker image with a Caddy server to securely handle authentication and proxy requests to a local Ollama instance. There are two methods: environment-based API key validation, or multiple API keys stored in a .conf file for better security. Check out these repos.

Uses OLLAMA_API_KEY as a local environment variable:
https://github.com/bartolli/ollama-bearer-auth

If you want to support multiple API keys stored in a config file, check out this repo:
https://github.com/bartolli/ollama-bearer-auth-caddy.


@PyroGenesis commented on GitHub (Jul 26, 2024):

Sharing my nginx configuration to expose Ollama over the web with an API Key and HTTPS:

    upstream ollama {
        server               localhost:11434;
    }

    server {
        listen              8000 ssl;
        server_name         ollama.mydomain.com;

        # for ssl
        ssl_certificate     "path/to/cert.crt";
        ssl_certificate_key "path/to/private.key";
        ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers         HIGH:!aNULL:!MD5;

        location / {
            # Check if the Authorization header is present and has the correct Bearer token / API Key
            set $token "Bearer MY_PRIVATE_API_KEY";
            if ($http_authorization != $token) {
                return 401 "Unauthorized";
            }
            
            # The localhost headers are to simulate the forwarded request as coming from localhost
            # because I didn't want to set the Ollama origins as *
            proxy_set_header  Host "localhost";
            proxy_set_header  X-Real-IP "127.0.0.1";
            proxy_set_header  X-Forwarded-For "127.0.0.1";
            proxy_set_header  X-Forwarded-Proto $scheme;
            proxy_pass        http://ollama;  # Forward request to the actual web service
        }
    }

@neuhaus commented on GitHub (Sep 17, 2024):

@AnthonMS don't use this line in your nginx host configuration file:

    proxy_set_header Host $host;

instead use this line:

    proxy_set_header Host localhost:11434;

Works for me.


@AnthonMS commented on GitHub (Sep 17, 2024):

> @AnthonMS don't use this line in your nginx host configuration file:
>
>     proxy_set_header Host $host;
>
> instead use this line:
>
>     proxy_set_header Host localhost:11434;
>
> Works for me.

In this example the Ollama service is not running on the same machine as the nginx service. The nginx config I have given works.

But I see that the host should probably still be changed now that I think about it.


@sammcj commented on GitHub (Oct 2, 2024):

I'd be very keen to see this.

While it's easy to add a basic Caddy proxy in front of Ollama, that doesn't solve most problems, e.g.:

  • The Ollama libraries do not have a method to provide an auth token, thus no tools/scripts/apps that use the Ollama libraries can use the auth.
  • Ollama clients don't have any way to provide auth, likely due to the above lack of support in the underlying Ollama libraries.

I would have already contributed a PR to add simple bearer token authentication to Ollama (the same as other servers/providers like OpenAI/Anthropic/Deepseek/Tabby etc... have) but in my experience (e.g. #6279) it's very hard to get contributions to the Ollama project reviewed and merged in.


Note: For those that do want to add auth via a proxy and are only making raw HTTP queries to the API - or are using the OpenAI compatible API (which is quite limited compared to the native one), you can use Caddy:

:8081 {
  # Define a matcher for authorised API access
  @apiAuth {
    header Authorization "Bearer {env.OLLAMA_API_KEY}"
  }

  # Proxy authorised requests
  reverse_proxy @apiAuth http://ollama:11434 {
    header_up Host {http.reverse_proxy.upstream.hostport}
  }

  # Define a matcher for unauthorised access
  @unauthorized {
    not {
      header Authorization "Bearer {env.OLLAMA_API_KEY}"
    }
  }

  # Respond to unauthorised access
  respond @unauthorized "Unauthorized" 401 {
    close
  }
}

services:
  ollama:
    image: ollama/ollama:latest
    extra_hosts:
      - "host.docker.internal:host-gateway"
    ...
  caddy:
    image: caddy:latest
    environment:
      OLLAMA_API_KEY: abc123
    ports:
      - 0.0.0.0:11434:8081
    extra_hosts:
      - "host.docker.internal:host-gateway"
    links:
      - ollama
    volumes:
      - /path/to/Caddyfile:/etc/caddy/Caddyfile

But again - remember this won't allow client libraries / apps to connect to the Ollama API and provide authentication unless they add support for an Ollama authentication scheme, or are limited to using the OpenAI-compatible API.
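For raw HTTP clients, attaching the bearer token the Caddy matcher above checks for is straightforward. A minimal sketch using only the Python standard library (the abc123 key comes from the compose example above; the host, port, and helper name are assumptions for illustration):

```python
import urllib.request

def authed_request(base_url: str, api_key: str, path: str) -> urllib.request.Request:
    """Build a request carrying the bearer token the proxy checks for."""
    req = urllib.request.Request(f"{base_url}{path}")
    req.add_header("Authorization", f"Bearer {api_key}")
    return req

# "abc123" is the OLLAMA_API_KEY from the compose file above; the Caddy
# proxy is assumed to be published on the usual Ollama port.
req = authed_request("http://localhost:11434", "abc123", "/api/tags")
print(req.get_header("Authorization"))  # Bearer abc123
# urllib.request.urlopen(req) would then perform the authenticated call.
```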


@kemalelmizan commented on GitHub (Oct 3, 2024):

The Ollama server uses gin, and gin offers basic auth middleware (https://gin-gonic.com/docs/examples/using-basicauth-middleware/). I created a PR introducing basic auth for Ollama (https://github.com/ollama/ollama/pull/6223), but have had a similar experience to @sammcj: I asked on the Discord server but could not get the PR reviewed 😔


@sammcj commented on GitHub (Oct 3, 2024):

Nice work @kemalelmizan! Hopefully someone eventually picks up your PR :)


@neuhaus commented on GitHub (Oct 8, 2024):

> Ollama server is using gin, and gin offers basic auth middleware. I created PR for introducing basic auth for ollama here but have similar experience with @sammcj, I have asked on the discord server but could not get the PR reviewed 😔

Very nice. It would also be useful to add Bearer authentication to gin (instead?) because some clients only support Bearer authentication but not basic auth, for example the Leo AI feature of the Brave browser.

Ideally the list of valid tokens is taken from a file.


@kesor commented on GitHub (Oct 12, 2024):

I released a Docker container that packages a reverse proxy (nginx) together with a Cloudflare tunnel, exposing Ollama on a public hostname on the internet while requiring a hardcoded token for access. https://github.com/kesor/ollama-proxy


@PyroGenesis commented on GitHub (Oct 15, 2024):

> > Ollama server is using gin, and gin offers basic auth middleware. I created PR for introducing basic auth for ollama here but have similar experience with @sammcj, I have asked on the discord server but could not get the PR reviewed 😔
>
> Very nice. It would also be useful to add Bearer authentication to gin (instead?) because some clients only support Bearer authentication but not basic auth, for example the Leo AI feature of the Brave browser.
>
> Ideally the list of valid tokens is taken from a file.

Autogen is another framework that supports Bearer authentication only.


@StefMa commented on GitHub (Dec 16, 2024):

It seems like this issue can be closed because the relevant PR got rejected.
See https://github.com/ollama/ollama/pull/6223#issuecomment-2496432344

@jmorganca can we close this? 🤔
Or is this something you don't want to add? 🤔


@jmorganca commented on GitHub (Dec 23, 2024):

Hi folks, and thanks for the ping @StefMa. Given the different auth configurations, we try to keep Ollama focused on serving an http API that can be fronted by a number of proxy servers such as nginx, caddy and more – which often support many different auth services. This allows the maintainer team to focus on core model serving. I really appreciate the comments on this thread and would be happy to add links in the main README or elsewhere on how to configure different frontends in front of Ollama.


@LeisureLinux commented on GitHub (Feb 12, 2025):

Nginx as a reverse proxy, with PAM auth at the backend for nginx.
I put my Ollama instance on ollama.lan and was able to list models using the CLI below:

curl -q -sS -H "Authorization: Basic b2xsYW1hLWFwaTpMZWlzdXJlTGludXg=" http://ollama.lan/api/tags |jq -r ".models[].name"

The base64 part is generated with:

echo -ne "username:password" |base64 --wrap 0
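For scripts that need to build the same header programmatically, the encoding step can be reproduced with Python's standard library (username:password is the generic placeholder pair from the command above, not real credentials; the helper name is made up for this sketch):

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Encode user:pass exactly as `echo -ne "user:pass" | base64` does."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("username", "password"))
# Basic dXNlcm5hbWU6cGFzc3dvcmQ=
```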

Nginx config part:

  auth_pam "Secured Ollama";
  auth_pam_service_name "nginx";
  proxy_set_header Host 127.0.0.1;
  proxy_pass http://127.0.0.1:11434;

This works nicely in the Page-Assist extension, with an additional header set as:

"Authorization: Basic b2xsYW1hLWFwaTpMZWlzdXJlTGludXg="

So far it only works with Page-Assist, because it provides the additional-header setup.

All other clients, like CherryStudio, Chatbox, AnythingLLM etc., and even the ollama client itself (e.g. via OLLAMA_HOST), were not able to use a basic-authentication URL like:

http://ollama-api:LeisureLinux%40ollama.lan:80

where the "@" is escaped as %40.

(base) /tmp ᐅ export OLLAMA_HOST="ollama-api:LeisureLinux%40ollama.lan:80"
(base) /tmp ᐅ ollama list
Error: Head "http://[ollama-api:LeisureLinux%2540ollama.lan:80]:11434/": dial tcp: lookup ollama-api:LeisureLinux%40ollama.lan:80: no such host

It is not resolved, but I hope this gives followers of this issue some ideas.

For those who want to understand the details, there is a video on Bilibili:
https://www.bilibili.com/video/BV19EKLeqE3P/

I HOPE that Ollama can add a new environment variable like:

OLLAMA_AUTH="b2xsYW1hLWFwaTpMZWlzdXJlTGludXg="

which would then pass the following header to OLLAMA_HOST:

Authorization: Basic b2xsYW1hLWFwaTpMZWlzdXJlTGludXg=
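To be clear, OLLAMA_AUTH is only a proposal in this comment and does not exist in Ollama today; a hypothetical client-side wrapper honouring it might look like this sketch (function name and behaviour are assumptions):

```python
import os

def auth_headers() -> dict:
    """Map the proposed (hypothetical) OLLAMA_AUTH variable to a Basic
    Authorization header; return no extra headers when it is unset."""
    token = os.environ.get("OLLAMA_AUTH")
    return {"Authorization": f"Basic {token}"} if token else {}

os.environ["OLLAMA_AUTH"] = "b2xsYW1hLWFwaTpMZWlzdXJlTGludXg="
print(auth_headers())
# {'Authorization': 'Basic b2xsYW1hLWFwaTpMZWlzdXJlTGludXg='}
```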


@neerax commented on GitHub (May 7, 2025):

I understand that the team prefers to focus on the core functionality, but the lack of even a simple and standard native authentication mechanism discourages support in libraries, frontends, and low/no-code tools.


@hammadtq commented on GitHub (Jul 2, 2025):

I just finished wiring up Attach Gateway – a PyPI package that adds OIDC/JWT (works with Google, Auth0, etc.) and A2A hand-off in front of Ollama.

pip install attach-dev
export ENGINE_URL=http://localhost:11434      # Ollama port  
export OIDC_ISSUER=https://your-issuer/       # Google/Auth0/Okta
attach-gateway --port 8080                    # gated

Feedback / issues welcome!

https://github.com/attach-dev/attach-gateway


@neuhaus commented on GitHub (Jul 3, 2025):

> Feedback / issues welcome!
>
> https://github.com/attach-dev/attach-gateway

Interesting. Could this component also be used to limit usage, i.e. a maximum amount of incoming and/or outgoing tokens?


@hammadtq commented on GitHub (Jul 7, 2025):

> > Feedback / issues welcome!
> > https://github.com/attach-dev/attach-gateway
>
> Interesting. Could this component also be used to limit usage, i.e. a maximum amount of incoming and/or outgoing tokens?

Yes, I will be adding a rate-limiting layer this week for v0.3. Happy to accept PRs or issues for any specific quota or billing-related use cases you have in mind.


Reference: github-starred/ollama#47028