[GH-ISSUE #10283] Zeroconf (mDNS/Bonjour) Support for LAN Discovery #68810

Open
opened 2026-05-04 15:18:28 -05:00 by GiteaMirror · 2 comments

Originally created by @danemadsen on GitHub (Apr 15, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10283

It would be helpful if Ollama instances could advertise themselves over Zeroconf (mDNS) on the local network, enabling seamless discovery by other devices and apps (e.g. mobile clients, browser frontends, embedded devices).

This would allow clients to automatically discover Ollama instances without manual IP or port configuration, which is especially useful in LAN setups for distributed AI inference or collaborative environments.

Proposed Service Format:

```yaml
Service Type: _ollama._tcp.local.
Service Name: <Hostname or InstanceName>
Port: 11434
TXT Records:
  - model=llama3
  - status=ready
  - version=0.1.25
```
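
For illustration, advertising a record like this from Go could look roughly like the sketch below, using the third-party github.com/grandcat/zeroconf package (an assumed dependency; Ollama ships no mDNS code today). The service type, port, and TXT values are the ones proposed above.

```go
// Sketch only: advertise an Ollama instance over mDNS with the
// third-party github.com/grandcat/zeroconf package (assumed
// dependency, not something Ollama includes today).
package main

import (
	"log"
	"os"
	"os/signal"

	"github.com/grandcat/zeroconf"
)

func main() {
	host, _ := os.Hostname()
	server, err := zeroconf.Register(
		host,           // instance name
		"_ollama._tcp", // service type; the ".local." domain is passed separately
		"local.",       // domain
		11434,          // Ollama's default port
		[]string{"model=llama3", "status=ready", "version=0.1.25"}, // TXT records
		nil,            // nil = announce on all multicast-capable interfaces
	)
	if err != nil {
		log.Fatal(err)
	}
	defer server.Shutdown()

	// Keep the announcement alive until interrupted.
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, os.Interrupt)
	<-sig
}
```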

related #751

GiteaMirror added the feature request label 2026-05-04 15:18:28 -05:00

@JamesClarke7283 commented on GitHub (Apr 20, 2025):

I think this should be opt-in, if only because Ollama doesn't have built-in auth yet.

You could gate it under `serve` as a flag or an env var like `OLLAMA_ADVERTISE`, etc.
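
As a rough sketch of that kind of opt-in gate (the `OLLAMA_ADVERTISE` name is only the suggestion above, not an existing Ollama setting):

```go
// Sketch of an env-var opt-in gate. OLLAMA_ADVERTISE is the variable
// suggested in this comment, not a real Ollama setting; unset or
// unparsable values default to off.
package main

import (
	"fmt"
	"os"
	"strconv"
)

func advertiseEnabled() bool {
	on, err := strconv.ParseBool(os.Getenv("OLLAMA_ADVERTISE"))
	return err == nil && on
}

func main() {
	fmt.Println("mDNS advertisement enabled:", advertiseEnabled())
}
```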

I realise many people don't have Ollama exposed to the internet (assuming they haven't set it listening on its public IPv6 address).

Nevertheless, I think this would be a good precaution.

Just to warn about the state of things: I checked Shodan some months ago, and there was a not-insignificant number of exposed Ollama servers, probably people with an ineffective auth setup (nginx/etc.) or a raw Ollama port forward.

Ollama already has a large attack surface in many ways; we don't want to make it easier for users to be compromised.

I just thought I would raise this. On another note, zeroconf/mDNS support might equally be something for downstream packaging to handle.

I know that if I wanted, I could add zeroconf support to the ollama PKGBUILD.

Just some food for thought.

Thanks,
James Clarke


@rndmcnlly commented on GitHub (Nov 21, 2025):

I like the idea of making Ollama discoverable via mDNS, but I think this might be a key moment to set up a broader ecosystem of LLM-service discovery.

Consider https://github.com/jperrello/Saturn. Here's what's different from what's proposed earlier in this thread:

- More general service type: `_saturn._tcp.local.`
- Promises less support/control to people on the LAN (just OpenAI-compatible completions, no create/pull/delete).

Think about how the one tech expert in every household sets up the wifi and makes sure you can use it to connect to the printer. That same person might set up a Saturn bridge that provides LLM services to anyone on the LAN. Those services *might* be locally provided by an in-house instance of Ollama, or they might just be a simple proxy to a cloud provider with a pre-configured API key. The main idea is that most users don't need to set anything up. Apps just *sense* LLM access nearby and start using it.

If we go with the `_ollama._tcp.local.` design, it'll be harder to get app developers to integrate discovery features because there's only one kind of service they could discover. With something more general, we might have a healthier ecosystem in which Ollama is just one of many thriving LLM services.
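
For what client-side discovery could look like, here's a minimal sketch that browses for the generic `_saturn._tcp` type using the same third-party github.com/grandcat/zeroconf package (an assumed dependency; substitute `_ollama._tcp` for the design proposed at the top of this issue):

```go
// Sketch only: browse the LAN for advertised LLM services using the
// third-party github.com/grandcat/zeroconf package (assumed dependency).
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/grandcat/zeroconf"
)

func main() {
	resolver, err := zeroconf.NewResolver()
	if err != nil {
		log.Fatal(err)
	}

	entries := make(chan *zeroconf.ServiceEntry)
	go func() {
		for e := range entries {
			// Each entry carries the advertised addresses, port, and TXT records.
			fmt.Printf("found %s at %v:%d %v\n", e.Instance, e.AddrIPv4, e.Port, e.Text)
		}
	}()

	// Browse for five seconds, then stop.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if err := resolver.Browse(ctx, "_saturn._tcp", "local.", entries); err != nil {
		log.Fatal(err)
	}
	<-ctx.Done()
}
```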

Reference: github-starred/ollama#68810