[GH-ISSUE #12404] Update FAQ in light of new Cloud models feature #8238

Closed
opened 2026-04-12 20:44:33 -05:00 by GiteaMirror · 1 comment

Originally created by @laniakea64 on GitHub (Sep 24, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12404

Currently the Ollama FAQ (https://github.com/ollama/ollama/blob/main/docs/faq.md#does-ollama-send-my-prompts-and-responses-back-to-ollamacom) says:

Does Ollama send my prompts and responses back to ollama.com?

If you're running a model locally, your prompts and responses will always stay on your machine. Ollama Turbo in the App allows you to run your queries on Ollama's servers if you don't have a powerful enough GPU. Web search lets a model query the web, giving you more accurate and up-to-date information. Both Turbo and web search require sending your prompts and responses to Ollama.com. This data is neither logged nor stored.

If you don't want to see the Turbo and web search options in the app, you can disable them in Settings by turning on Airplane mode. In Airplane mode, all models will run locally, and your prompts and responses will stay on your machine.

With the new Cloud models feature, this FAQ answer is no longer complete, especially when using only the standalone CLI/server binary:

  1. The ability to run queries on Ollama's servers is no longer limited to Ollama Turbo in the app. For CLI/server users, as of Ollama 0.12 this is now available as Cloud models (https://ollama.com/blog/cloud-models).

  2. IIUC a single running Ollama server instance can serve both Cloud models and locally-run models at the same time? If so: The FAQ mentions that the Ollama app has an "Airplane mode". Is there an equivalent of Airplane mode for the CLI/server binary? Or is an explicit "Airplane mode" useless there because it can be achieved by just not pulling or running Cloud models?

  3. Some models are available in both cloud and non-cloud versions, with these options being different tags of the same model. How can CLI/API users definitively tell whether using an Ollama model will send anything over the network...

    • ... while viewing the model on ollama.com, and before pulling the model/tag - without relying on things like looking for the string "cloud" in tag names? (IIUC the ability to offer models on the registry on ollama.com is not limited to Ollama devs, and malicious actors have done it before (https://github.com/ollama/ollama/issues/9134), so assuming that model names are trustworthy would be unwise.)
    • ... before running an already-pulled model/tag?
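On the second bullet (an already-pulled model/tag), one possible starting point pending an official answer is the existing `/api/show` endpoint, which returns a model's metadata as JSON. Whether and where cloud status appears in that payload is precisely what the FAQ should document; the marker terms and the `remote_host` field in the sketch below are assumptions for illustration, not a confirmed API contract.

```python
def possible_cloud_markers(show_response: dict) -> list[str]:
    """Scan an /api/show-style JSON payload for keys or string values
    mentioning 'cloud' or 'remote'. These markers are guesses, not a
    documented contract; an authoritative field is what is needed."""
    markers: list[str] = []

    def walk(obj, path: str = "") -> None:
        if isinstance(obj, dict):
            for key, value in obj.items():
                sub = f"{path}.{key}" if path else key
                if any(t in key.lower() for t in ("cloud", "remote")):
                    markers.append(sub)
                walk(value, sub)
        elif isinstance(obj, list):
            for i, value in enumerate(obj):
                walk(value, f"{path}[{i}]")
        elif isinstance(obj, str):
            if any(t in obj.lower() for t in ("cloud", "remote")):
                markers.append(f"{path}={obj}")

    walk(show_response)
    return markers

# Hypothetical payload shaped like a trimmed /api/show response;
# the "remote_host" key is an assumed example, not a documented field.
sample = {
    "details": {"family": "qwen3", "parameter_size": "480B"},
    "model_info": {"general.architecture": "qwen3"},
    "remote_host": "https://ollama.com",
}
print(possible_cloud_markers(sample))  # → ['remote_host']
```

A heuristic like this is at best a tripwire, which is why the issue asks for a definitive, documented signal instead.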

Cloud models are an interesting idea for enabling more performance and supporting Ollama development, but the ability to run models completely locally/offline is still central. Updated information on how to run the Ollama CLI/server without it touching the network (except to pull models) would be greatly appreciated. Thanks for any clarification.
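On the "without touching the network" point: pending an official answer, one stopgap on Linux is to run the server inside a network namespace that has only loopback, so even a cloud-tagged model physically cannot reach ollama.com. This is an OS-level workaround sketch, not an Ollama feature; `--map-root-user` lets an unprivileged user configure the namespace.

```shell
# Sketch: run ollama serve with outbound networking removed (Linux only).
# Only the loopback interface is brought up, so the local API at
# 127.0.0.1:11434 keeps working while any attempt to reach ollama.com fails.
unshare --net --map-root-user sh -c '
  ip link set lo up
  exec ollama serve
'
```

Note that pulls fail in this mode too, so models would have to be pulled beforehand in an ordinary session.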

@laniakea64 commented on GitHub (Feb 13, 2026):

https://docs.ollama.com/faq#does-ollama-send-my-prompts-and-answers-back-to-ollama-com
https://docs.ollama.com/faq#how-do-i-disable-ollama%E2%80%99s-cloud-features

Resolved with https://github.com/ollama/ollama/pull/14221. Thank you for the thorough solution!

Reference: github-starred/ollama#8238