[GH-ISSUE #1519] LLM Model Cache files #47339

Closed
opened 2026-04-28 03:36:11 -05:00 by GiteaMirror · 5 comments

Originally created by @PrasannaVnewtglobal on GitHub (Dec 14, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1519

Ollama stores the LLM models shown in `ollama list`. When I run a model in the first SSH session it gives good results and seems to store some cache, but when I open a new session it does not use the previous response cache. Where is the cache file for the LLM model kept? I couldn't find it. What is the possible way to achieve consistent results? At the same time, I can't find the configuration file for the LLM model. Please give an update on this issue.


@easp commented on GitHub (Dec 14, 2023):

Ollama doesn't cache responses. It does download models.

If I understand you correctly, you are SSHing into the system running Ollama and pulling or running model(s), which you can view with `ollama list`.

When you open another SSH session, the models you previously downloaded are not showing with `ollama list`.

Am I understanding the behavior you are encountering properly?

How did you install Ollama? Are you running it in Docker? Are you logging in as the same user both times?
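
One quick way to check whether both sessions are talking to the same Ollama server (and therefore see the same models) is to query the local API. A minimal sketch, assuming the default listen address of `http://localhost:11434`:

```python
# Minimal sketch: list the models the local Ollama server knows about, so two
# SSH sessions can confirm they reach the same server and see the same models.
# Assumes the default listen address http://localhost:11434.
import json
import urllib.request

def list_models(base_url="http://localhost:11434"):
    # GET /api/tags returns the locally available models (what `ollama list` shows).
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]

if __name__ == "__main__":
    for name in list_models():
        print(name)
```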


@PrasannaVnewtglobal commented on GitHub (Dec 15, 2023):

I installed Ollama in a VM using the curl command. There is no problem installing Ollama or downloading the models. When I open one SSH connection, run the commands after installation, and pass some DDL queries for conversion, it gives certain results. But when I open a new SSH connection, run `ollama run modelname`, and give the same DDL queries, it gives different conversion results; there is no consistency, and I am not aware of what is happening inside the models. That is why I am trying to figure out where the cache files are stored. Please suggest a possible solution.


@sealad886 commented on GitHub (Apr 11, 2024):

So it's been a few months since anyone responded to this one, but I actually can answer this for you so I'm going to.

You have a fundamental misunderstanding of how these LLMs work. Not a judgement or accusation, but just an acknowledgment that using LLMs is a fundamental shift in thinking about human-computer interaction (imo).

There is no stored cache of input-output or instance-level interaction, and no database that records how the output was arrived at. The only dynamic data is the output from the model at that point in time. When you `ollama run <model>`, you instantiate a one-time-only version of that model. Each response is not programmatically decided but rather is calculated from the probability of each token (and a single word is often made up of more than one token) being the _correct_ next token. And even then, most models don't want the answer to be _too correct_, so they actually penalize the very highly matched answers. Once that instance of the model is terminated, executing the same command (`ollama run <model>`) will lead to a new instance with no possible reference to previous instances' data (caveat below).
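
To make the token-probability point concrete, here is a toy illustration (not Ollama internals, just a sketch with a made-up distribution): the next token is sampled from a probability distribution, so repeated runs differ unless the random seed is fixed.

```python
# Toy illustration (not Ollama internals): the next token is sampled from a
# probability distribution, so repeated runs differ unless the seed is fixed.
import random

def sample_next_token(probs, seed=None):
    rng = random.Random(seed)  # a fixed seed makes the choice reproducible
    tokens = list(probs)
    weights = list(probs.values())
    return rng.choices(tokens, weights=weights, k=1)[0]

# Hypothetical next-token probabilities after the prompt "ALTER TABLE t".
probs = {" ADD": 0.45, " DROP": 0.30, " RENAME": 0.15, " MODIFY": 0.10}

print(sample_next_token(probs))           # may change on every run
print(sample_next_token(probs, seed=42))  # same token on every run
```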

To top it all off, your seed value is (probably) not hard-set for any model that you download from Ollama.com (might be set if you loaded one yourself).
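
If you do want output that is as repeatable as possible, the request options let you pin the seed and lower the temperature. A minimal sketch against the local `/api/generate` endpoint; the model name, prompt, and server URL are placeholders for your own setup:

```python
# Minimal sketch: request (more) reproducible output by fixing the sampling
# seed and setting temperature to 0 via the request options.
# Model name, prompt, and server URL are placeholders for your own setup.
import json
import urllib.request

def generate(prompt, model="llama2", base_url="http://localhost:11434"):
    payload = {
        "model": model,
        "prompt": prompt,
        "stream": False,  # return a single JSON object instead of a stream
        "options": {"seed": 42, "temperature": 0},
    }
    req = urllib.request.Request(
        f"{base_url}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

print(generate("Convert this Oracle DDL to PostgreSQL: CREATE TABLE t (id NUMBER);"))
```

Even with a fixed seed, output can still drift across model versions or hardware, so treat this as reducing the variation rather than eliminating it.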

LLMs are designed to be creative, and it's intentional that they don't have reproducible output. You can customize a model and pass it a known message history as described [here](https://github.com/ollama/ollama/blob/main/docs/modelfile.md#message) to try to get results that are somewhat more consistent. It's even possible to code this up and save the messages in a log-file-type system, then pass them back as input if, for example, you use the API. I don't have example code right by me.
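
As a rough sketch of that "save the messages and pass them back" idea via the API (the model name, system prompt, and server URL here are assumptions, not anything pinned down in this issue): keep the full message list and send it with every `/api/chat` request.

```python
# Rough sketch of replaying a saved message history through /api/chat so each
# new session starts from the same context. Model name, system prompt, and
# server URL are assumptions; adjust them for your own setup.
import json
import urllib.request

BASE_URL = "http://localhost:11434"
history = [{"role": "system", "content": "You convert Oracle DDL to PostgreSQL DDL."}]

def chat(user_message, model="llama2"):
    history.append({"role": "user", "content": user_message})
    payload = {"model": model, "messages": history, "stream": False}
    req = urllib.request.Request(
        f"{BASE_URL}/api/chat",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)["message"]
    # Keep the assistant's answer in the history so later turns see it too.
    history.append({"role": reply["role"], "content": reply["content"]})
    return reply["content"]

print(chat("CREATE TABLE t (id NUMBER, created DATE);"))
```

Persisting `history` to a file between sessions (the "log-file type system" mentioned above) would give every new SSH session the same starting context.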


@jmorganca commented on GitHub (May 10, 2024):

Hi @PrasannaVnewtglobal, thanks for the issue. Ollama doesn't cache LLM responses or state – although there's an issue for that (#976) – so I'll close this one and merge it into that issue.


@PrasannaVnewtglobal commented on GitHub (May 10, 2024):

> So it's been a few months since anyone responded to this one, but I actually can answer this for you so I'm going to.
>
> You have a fundamental misunderstanding of how these LLMs work. Not a judgement or accusation, but just an acknowledgment that using LLMs is a fundamental shift in thinking about human-computer interaction (imo).
>
> There is no stored cache of input-output or instance-level interaction, and no database that records how the output was arrived at. The only dynamic data is the output from the model at that point in time. When you `ollama run <model>`, you instantiate a one-time-only version of that model. Each response is not programmatically decided but rather is calculated from the probability of each token (and a single word is often made up of more than one token) being the _correct_ next token. And even then, most models don't want the answer to be _too correct_, so they actually penalize the very highly matched answers. Once that instance of the model is terminated, executing the same command (`ollama run <model>`) will lead to a new instance with no possible reference to previous instances' data (caveat below).
>
> To top it all off, your seed value is (probably) not hard-set for any model that you download from Ollama.com (might be set if you loaded one yourself).
>
> LLMs are designed to be creative, and it's intentional that they don't have reproducible output. You can customize a model and pass it a known message history as described [here](https://github.com/ollama/ollama/blob/main/docs/modelfile.md#message) to try to get results that are somewhat more consistent. It's even possible to code this up and save the messages in a log-file-type system, then pass them back as input if, for example, you use the API. I don't have example code right by me.

Thanks for the information @sealad886
