[GH-ISSUE #1976] Cloud storage support #26899

Closed
opened 2026-04-22 03:36:35 -05:00 by GiteaMirror · 6 comments

Originally created by @beliboba on GitHub (Jan 13, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/1976

Is there any support for cloud storage for models? If not, will it ever be implemented?

@easp commented on GitHub (Jan 13, 2024):

What are you wanting, exactly? Do you want to store private models in the cloud and have Ollama retrieve them automatically?

@beliboba commented on GitHub (Jan 14, 2024):

I see it like this: I have cloud storage with FTP access (or some other method), and the ability to set it somewhere in the config so Ollama can retrieve my models from there.

@pdevine commented on GitHub (Jan 15, 2024):

Hey @beliboba, you can already do this right now. Go to https://ollama.ai/signup and create an account. You can then go to https://ollama.ai/settings/keys when you're signed in and upload your Ollama public key (on macOS it's in `~/.ollama/id_ed25519.pub`).

If you then create a model called something like `<yournamespace>/<yourmodel>`, you can push it to ollama using `ollama push <yournamespace>/<yourmodel>`.
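
For illustration, a minimal sketch of that workflow (assuming an account with the public key already uploaded; `mynamespace/mymodel` and the Modelfile path are placeholders):

```sh
# Build a model locally from a Modelfile under your namespace,
# then push it to the ollama.ai registry.
ollama create mynamespace/mymodel -f ./Modelfile
ollama push mynamespace/mymodel

# On another machine (with the same account and key configured),
# pull it back down before running it.
ollama pull mynamespace/mymodel
```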

@beliboba commented on GitHub (Jan 16, 2024):

So I wouldn't need to download them?

@pdevine commented on GitHub (Jan 16, 2024):

I think I misinterpreted what your request was. Are you asking to store all of your models in the cloud and then run them from there (but on your local machine)? Or do you mean you want to save a model that you made to the cloud and be able to pull it?

The first use case wouldn't work very well, because you'd have to download the weights every time you wanted to run a model. Unless you had a lot of bandwidth, that wouldn't really be feasible. You could do it, though, with NFS or some other protocol, and then use the `OLLAMA_MODELS` environment variable when you start `ollama serve` to change the location of your models. So it could work, but it won't be very performant.
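
A rough sketch of that workaround (the NFS server, export path, and mount point below are placeholders, not anything Ollama-specific):

```sh
# Mount the shared model store from a file server (any shared filesystem works).
sudo mount -t nfs fileserver:/export/ollama-models /mnt/ollama-models

# Point Ollama at the mounted directory and start the server.
OLLAMA_MODELS=/mnt/ollama-models ollama serve
```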

For the second use case, you can do that with what I was describing earlier. You would still need to `ollama pull` the models before using them.

@beliboba commented on GitHub (Jan 18, 2024):

I was talking about the first use case. Thank you for the response!
