[GH-ISSUE #14782] Allow Control List for AI Models Downloaded #9552

Closed
opened 2026-04-12 22:28:17 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @jamboNum5 on GitHub (Mar 11, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14782

I work as a sys-admin at a University and there is a growing demand for using Ollama in computer labs for teaching. There are a few considerations we have for facilitating AI models in labs.

  • Due to data governance policies, we have to consider what student data is uploaded to cloud services. This limits the kind of models we use on local machines.
  • We need to ensure we are working in line with the licence terms of a given AI model; the licences seem to vary between models, and we don't have the ability to control which model is downloaded.
  • To optimise performance, we would likely want to have the models installed locally on the machine rather than in user profiles.

This feels like a big ask and is more than one feature, but I was keen to better understand whether the following was a consideration for development:

  • Models stored on local disks rather than in user profiles (located on network shares). This would optimise read/write locally, and also mean models aren't being downloaded from servers by individual users, reducing duplicated data and saving bandwidth.
  • ACL type feature to limit which models are downloaded and limit to approved models.

Thanks in advance

GiteaMirror added the feature request label 2026-04-12 22:28:17 -05:00
Author
Owner

@rick-github commented on GitHub (Mar 11, 2026):

https://github.com/ollama/ollama/issues/11667#issuecomment-3155719517

Control is easier if a single ollama instance is available to the students (as opposed to them running their own ollama servers). You can restrict access to just the OpenAI compatibility endpoint; this allows them to use the available models, but they cannot download new ones. It does mean that they would have to request a new model from the sys-admin department if it was pertinent to their studies.
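One way to restrict a shared instance to the OpenAI compatibility endpoint is a reverse proxy in front of the Ollama server. A minimal nginx sketch, assuming Ollama is listening on its default port 11434 and the hostname is an example: only the `/v1/` routes (chat completions, etc.) are forwarded, while everything else, including `/api/pull`, is refused.

```nginx
# Sketch: expose only the OpenAI-compatible /v1/ routes of a shared
# Ollama server. Hostname is an example; 11434 is Ollama's default port.
server {
    listen 80;
    server_name ollama.lab.example.edu;

    # Forward OpenAI-compatible requests (e.g. /v1/chat/completions).
    location /v1/ {
        proxy_pass http://127.0.0.1:11434;
    }

    # Block everything else, including /api/pull and /api/delete.
    location / {
        return 403;
    }
}
```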

Storing models on local disk only makes sense if the students are running their own servers, in which case you would need to firewall off ollama.ai, hf.co, modelscope.cn etc to prevent them from downloading random models.
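For lab machines where students run their own servers, one rough way to block the registries is to null-route them locally; a perimeter firewall rule is more robust, since a hosts file can be edited by anyone with admin rights. The hostnames below are the public-facing ones mentioned above plus `registry.ollama.ai`, which pulls are believed to go through; verify against your own network logs.

```
# Sketch: append to /etc/hosts on each lab machine to null-route
# the public model registries (verify hostnames before relying on this).
0.0.0.0  ollama.ai
0.0.0.0  registry.ollama.ai
0.0.0.0  hf.co
0.0.0.0  huggingface.co
0.0.0.0  modelscope.cn
```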

Author
Owner

@fcorneli commented on GitHub (Mar 11, 2026):

Simply put the allowed models somewhere on the machines, and give your students the path, which they can use as follows:

```
export OLLAMA_MODELS=/path/where/the/policy/allowed/ollama/models/live/
ollama ls
```

Of course they can always pull in something else I guess. Would have to see whether ollama can work on a read-only $OLLAMA_MODELS directory.
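The shared-store idea above can be sketched as follows. This assumes an admin populates the store first (the path and model name are illustrative), then strips write permission so students can list and run models but pulls into the store fail with a permission error (unless run as root):

```shell
# Sketch: a shared, read-only model store (paths and model names are examples).
# An admin would first populate it with approved models, e.g.:
#   sudo OLLAMA_MODELS=/opt/ollama-models ollama pull <approved-model>
STORE=$(mktemp -d)            # stand-in for /opt/ollama-models
chmod -R a+rX,a-w "$STORE"    # world-readable and traversable, writable by no one

# Students point OLLAMA_MODELS at the shared store; `ollama ls` can read it,
# but downloads need write access and should be refused.
export OLLAMA_MODELS="$STORE"
echo "OLLAMA_MODELS=$OLLAMA_MODELS"
```

Whether Ollama itself tolerates a read-only `$OLLAMA_MODELS` at runtime is the open question raised above, so test this on one machine before rolling it out.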

Author
Owner

@jamboNum5 commented on GitHub (Mar 13, 2026):

Thank you both for the suggestions, between firewalling and the environment variable, that should be a great place to start.

I believe they are intending to run a local instance to interact with some robots. They wanted to give students more hands on experience configuring the models for work later on.

I'll set this feature to closed for the time being. Thanks again!

Reference: github-starred/ollama#9552