[GH-ISSUE #11670] Turbo mode presentation. #7718

Closed
opened 2026-04-12 19:49:23 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @razvanab on GitHub (Aug 5, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11670

What is the issue?

If you consider this issue trivial, feel free to ignore it.

On the Ollama Turbo mode presentation page, there is this:
"Privacy first
Ollama does not retain your data to ensure privacy and security."

To me, at least, this reads as if privacy and security are guaranteed only when using Turbo mode.

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-12 19:49:23 -05:00

@mchiang0610 commented on GitHub (Aug 5, 2025):

Hey @razvanab, thank you for asking. Ollama can be run completely offline and privately (without Turbo mode). That has always been the case.

Turbo mode provides the needed GPUs to users who might not have the hardware to run a large model well locally. It is an optional service.

For web search, we wanted to bridge the gap between the models we are running and the most current information, so we are providing that service. That said, we need to prevent abuse, since the service does cost us money to run.

Hope this helps!

Thank you for asking. I really appreciate it.


@razvanab commented on GitHub (Aug 5, 2025):

"Ollama can be run completely offline and private (without Turbo mode). It has always been the same."
I know this, but what I was trying to say is that the wording might lead some people to believe it applies only to Turbo mode. Anyway, thank you for your answer.


Reference: github-starred/ollama#7718