Local mode instead of Airplane mode #7913

Open
opened 2025-11-12 14:23:11 -06:00 by GiteaMirror · 3 comments

Originally created by @owenzhao on GitHub (Aug 14, 2025).

Originally assigned to: @hoyyeva on GitHub.

Currently there is an airplane mode that disables Turbo mode and web search. However, it still shows models that are not local but downloadable. Issue: https://github.com/ollama/ollama/issues/11789

My advice is to make it more local and only show the models that are installed locally. The currently suggested models are not friendly to new users, as the 20b and 120b models are too large for most people to run locally. For Mac users, the best value is a 16GB Mac mini, and for Windows users, a dedicated GPU with 16GB of VRAM is not cheap either.

A model like qwen3-4b should be the top suggestion for most people. Advanced users already know what they need, so the suggestion is irrelevant to them.

GiteaMirror added the feature request and app labels 2025-11-12 14:23:12 -06:00

@jasencarroll commented on GitHub (Aug 17, 2025):

In the end, users are going to want to set their default models to ones that suit their hardware and turn off the ones that don't. Would you say, @owenzhao, that the following adequately captures your requirements as well?

  1. local mode should let you select 1 (one) default model.
  2. local mode should let you deselect (or only show) the models that are downloaded locally.

I was thinking the local mode should also:

  1. provide the ability to change the Ollama API from localhost to an internal IP for local client consumption.
  2. expose a web search API for local web searching if the user wants to add their own search API.

I'm not familiar with the codebase (yet) but I'm curious about how those requirements could be satisfied and going to make a fork shortly.

I'm a huge fan of the project and willing to contribute/support - @rick-github if you (or someone on the team) want(s) to give me some light direction, I'm a pretty good self-starter requiring minimal touch from there. I'd be happy to understand how many pull requests the team would be looking for to appropriately manage these requirements, whether TDD is acceptable or BDD would be preferred here, etc. Anything that could be shared to ensure my work is valuable before I start would be great.
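Requirement 2 above (only show locally downloaded models) could be sketched roughly as follows. The `GET /api/tags` endpoint and its `{"models": [{"name": ...}]}` response shape come from Ollama's documented REST API; everything else (function names, the suggestion list) is a hypothetical illustration, not the app's actual code:

```python
import json
from urllib.request import urlopen


def installed_models(base_url: str = "http://localhost:11434") -> set[str]:
    """Return names of models already pulled to this machine,
    using Ollama's GET /api/tags endpoint."""
    with urlopen(f"{base_url}/api/tags") as resp:
        data = json.load(resp)
    return {m["name"] for m in data.get("models", [])}


def local_mode_suggestions(suggested: list[str], installed: set[str]) -> list[str]:
    """In local mode, surface only the suggestions that are actually
    installed, preserving the original suggestion order."""
    return [name for name in suggested if name in installed]
```

For example, with `suggested = ["gpt-oss:120b", "gpt-oss:20b", "qwen3:4b"]` and only `qwen3:4b` pulled locally, local mode would show just `["qwen3:4b"]`.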


@jasencarroll commented on GitHub (Aug 17, 2025):

Sorry - I'm still working through your documentation. I'll work on drafting up the following and write back:

Tips for proposals:

  • Explain the problem you are trying to solve, not what you are trying to do.
  • Explain why the change is important.
  • Explain how the change will be used.
  • Explain how the change will be tested.

Additionally, for bonus points: Provide draft documentation you would expect to
see if the change were accepted.


@owenzhao commented on GitHub (Aug 18, 2025):

I think it is good enough if Ollama just remembers the model the user last used, since the default model should be the most-used one. It is also more natural to let the user decide when to switch models than for Ollama to reset to a default each time.

The only drawback is that the user may have removed the model they last used. In that case, Ollama should fall back to another model, perhaps the first one in alphabetical order.


Reference: github-starred/ollama-ollama#7913