[GH-ISSUE #2865] Privacy settings on models #27510

Closed
opened 2026-04-22 04:54:09 -05:00 by GiteaMirror · 7 comments
Owner

Originally created by @trymeouteh on GitHub (Mar 1, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2865

The ability to customise the privacy settings for each model. These privacy settings could restrict what the model can do with your device...

  • Access the internet. You can disable a model from accessing the internet, making sure it only runs offline
  • Reading files (Cannot read files on your system)
  • Writing files (Cannot create files on your system)

@pdevine commented on GitHub (Mar 4, 2024):

Ollama consists of two parts which run locally on your own machine: an API server which serves the model, and a CLI which is used to talk to the local API server. A model running on the server absolutely cannot access the internet, read a file, or write a file. The CLI *can* read an image file (for instance with multi-modal models), which it will then pass as image data to the local server, but again, none of that data will ever leave your machine. You would also have to have specified the file name that you wanted to read and pass to the server (the server running the model will never access the file system).

That said, the server can access the internet, but only for two specific cases:

  • pulling a model; and
  • pushing a model

But it will *never* do that during inference. We would definitely have privacy settings if we were to ever add a feature like that. I'm going to go ahead and close the issue for now.
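A quick way to convince yourself of this locally is to check which address the API server binds to: by default Ollama listens only on the loopback interface (`127.0.0.1:11434`), so inference traffic cannot leave the machine unless you explicitly change `OLLAMA_HOST`. A minimal sketch, assuming that default bind address:

```python
import ipaddress
from urllib.parse import urlsplit

# Default Ollama bind address (overridable via the OLLAMA_HOST env var).
DEFAULT_HOST = "http://127.0.0.1:11434"

def is_loopback_only(host_url: str) -> bool:
    """Return True if the API endpoint is a loopback address,
    i.e. requests to it stay on this machine."""
    hostname = urlsplit(host_url).hostname
    if hostname == "localhost":
        return True
    try:
        return ipaddress.ip_address(hostname).is_loopback
    except ValueError:
        return False  # a non-IP hostname could resolve anywhere

print(is_loopback_only(DEFAULT_HOST))            # loopback: True
print(is_loopback_only("http://0.0.0.0:11434"))  # exposed on all interfaces: False
```

Setting `OLLAMA_HOST=0.0.0.0` is the usual way to expose the server to a network, which is exactly the situation this check flags.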


@trymeouteh commented on GitHub (Mar 8, 2024):

I hope someday there are search-engine models like Phind and YOU.com that you can run on your device, which will make API calls to a search engine and return results to you, and also models that can generate images for you.

If these models are available in Ollama, please do add privacy settings to perhaps limit what these models can do.


@ti2ger92 commented on GitHub (Jan 28, 2025):

What about this? How do I turn it off?

https://ollama.com/blog/tool-support
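For context, tool support doesn't give the model network access by itself: the model only emits a structured tool-call request as data, and nothing runs unless the client code chooses to execute it. So "turning it off" amounts to not offering any tools, or not dispatching the calls. A sketch of client-side dispatch — the field names below are illustrative, not the exact Ollama API:

```python
# The model's reply is just data: a list of requested tool calls.
# Nothing executes unless the client dispatches them.

def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stand-in for a real implementation

ALLOWED_TOOLS = {"get_weather": get_weather}  # an empty dict == tools "off"

def dispatch(tool_calls: list[dict]) -> list[str]:
    """Execute only whitelisted tool calls; ignore everything else."""
    results = []
    for call in tool_calls:
        fn = ALLOWED_TOOLS.get(call["name"])
        if fn is None:
            continue  # model asked for a tool we don't allow
        results.append(fn(**call["arguments"]))
    return results

# Simulated model output: one allowed call and one disallowed one.
response = [
    {"name": "get_weather", "arguments": {"city": "Toronto"}},
    {"name": "search_web", "arguments": {"query": "word count"}},
]
print(dispatch(response))  # only the whitelisted call runs
```

The design point is that the tool whitelist lives entirely in your own code, so the model can request anything and still execute nothing.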


@Gladwell5 commented on GitHub (Feb 2, 2025):

With Activity Monitor on a Mac, I see ollama has a small amount of data sent and received at inference time when the model is already downloaded. Any idea what that is?


@kravivar commented on GitHub (Apr 8, 2026):

I just wanted to report this incident here; screenshots attached.

Short summary: I asked the model to count the number of words and it initially gave me the wrong answer. I challenged it, saying "you are wrong, can you re-evaluate and give me the correct number?" It used an online service.

@pdevine: FYI.

[Four screenshots attached]

@pdevine commented on GitHub (Apr 8, 2026):

@kravivar The model basically just lied to you (i.e. it was a hallucination). You can check the log to see if there was a web search, but if you have everything turned off it's not possible for it to do that.


@kravivar commented on GitHub (Apr 9, 2026):

You are correct @pdevine, it did lie to me. I was too hasty and posted this prematurely; apologies. I was able to validate this in the server logs and system log messages.

```
~ cat ~/.ollama/logs/server.log | grep -i 'wordcounter.net'
~
~ sudo log show --predicate 'eventMessage CONTAINS[c] "wordcounter.net"' --last 24h --debug --info
Filtering the log data using "composedMessage CONTAINS[c] "wordcounter.net""
Timestamp                       Thread     Type        Activity             PID    TTL
--------------------------------------------------------------------------------------------------------------------
Log      - Default:          0, Info:                0, Debug:             0, Error:          0, Fault:          0
Activity - Create:           0, Transition:          0, Actions:           0
```
[Screenshot attached]
Reference: github-starred/ollama#27510