[GH-ISSUE #7364] Data persistence #30440

Closed
opened 2026-04-22 10:03:40 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @multiplicity-16 on GitHub (Oct 25, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7364

I love that I can load extensive public domain resources directly from the internet into my sessions and add hundreds of thousands of data points. I can then run knowledge graph optimizations, as well as precision config changes, all directly in the session. However, I am unable to get any of this data to persist. Other engine options don't have the ability to pull from live online data sources.

I would like ollama to support my local model copies having additional persistent data and indexes. For example, I've ingested 20GB of additional public data, tuned the accuracy and prompt methodologies, and run all the optional optimization tasks within the live Llama3.2 model, but as soon as I stop that model it is all gone. Exporting the conversation manually isn't realistic as a regeneration method.

Data persistence would be a game changer. I'm not a Python programmer, so writing wrappers is beyond me, but there seem to be key live data capabilities in ollama that GPT4All doesn't have.

Side note: it is also a bit annoying, though a separate issue, that save_config, load_config, and auto_save_conversations are non-functional.

GiteaMirror added the feature request label 2026-04-22 10:03:40 -05:00

@rick-github commented on GitHub (Oct 25, 2024):

Not sure what you're doing, but there's no way you have 20GB of data in a llama3.2 context window. Data persistence is normally done with RAG or tools with connectors to data sources, see the [integrations](https://github.com/ollama/ollama/issues/6950#issuecomment-2373663650).
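To illustrate what the comment above means by persistence via RAG: documents are embedded once, the vectors are stored on disk, and the nearest chunks are retrieved at query time, so ingested data survives model restarts. The sketch below is a toy illustration only; `embed()` is a hypothetical stand-in for a real embedding model (in practice you would call an embeddings endpoint), and the index file path is arbitrary.

```python
import json
import math

def embed(text):
    # Toy stand-in for a real embedding model (e.g. an embeddings
    # API call): a normalized bag-of-letters vector.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def build_index(docs, path="index.json"):
    # Embed each document once and persist the vectors to disk.
    index = [{"text": d, "vec": embed(d)} for d in docs]
    with open(path, "w") as f:
        json.dump(index, f)

def query(text, path="index.json", k=2):
    # Load the persisted index and return the k most similar docs.
    # Vectors are unit-normalized, so a dot product is cosine similarity.
    with open(path) as f:
        index = json.load(f)
    qv = embed(text)
    scored = sorted(index,
                    key=lambda e: -sum(a * b for a, b in zip(qv, e["vec"])))
    return [e["text"] for e in scored[:k]]

build_index(["llamas are camelids",
             "the sky is blue",
             "llama herds graze"])
print(query("tell me about llamas", k=1))
```

The point is that the index file, not the model's context window, is where the ingested data lives between sessions; restarting the model costs nothing because retrieval reads from disk.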


Reference: github-starred/ollama#30440