[GH-ISSUE #11510] Ollama Now Supports 300 GB Prompts. Click to learn more.. #33362

Closed
opened 2026-04-22 15:56:20 -05:00 by GiteaMirror · 5 comments

Originally created by @WizardMiner on GitHub (Jul 23, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11510

Hello,

Just found out that the context parameter in the generate API is being deprecated. Fear not, @jmorganca and @pd95 assure us that we can do the same thing with the chat API... which I thought was amazing and totally leading edge. Somehow the original incident was closed without a solid, detailed workaround. Probably an accident.

Not sure on the particulars, but it sounds like...

> You will see that what you call "context" is basically the tokenized history of messages. It is effectively the 1000 messages. So if you "load the context" with the old generate API, then Ollama is effectively decoding the context into plain text.

That's so amazing. I didn't realize the entire chat history of prompt/response rounds was stored in the context. Had no idea it could grow to hundreds of gigabytes and get re-sent ahead of each and every single prompt. That's so amazing! Wow! Bruteous Forcemus lives on!
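
To make the mechanics concrete for anyone landing here, this is my understanding of the pattern in question: /api/generate hands back a `context` array of token ids, and you pass it into the next request instead of re-sending the transcript. A minimal sketch (assuming a local Ollama at http://localhost:11434 with llama3.2 pulled; not an official example):

```python
# Minimal sketch of the (now deprecated) context round-trip with /api/generate.
# Assumes a local Ollama server at http://localhost:11434 with llama3.2 pulled.
import requests

OLLAMA = "http://localhost:11434"

def generate(prompt, context=None):
    """Send one prompt, optionally seeded with a prior context array."""
    body = {"model": "llama3.2", "prompt": prompt, "stream": False}
    if context is not None:
        body["context"] = context  # token ids returned by a previous response
    r = requests.post(f"{OLLAMA}/api/generate", json=body, timeout=300)
    r.raise_for_status()
    data = r.json()
    return data["response"], data["context"]

answer1, ctx = generate("What is a tree?")
# The returned context already encodes the exchange so far; pass it back in and
# the model "remembers" the previous turn without the client re-sending the text.
answer2, ctx = generate("Name three examples of one.", context=ctx)
print(len(ctx), "tokens of context after two turns")
```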

What I would like to know is really straightforward. Let's say we have thousands of turns in a chat that's been ongoing for weeks. We're feeding in a lot of documents like these:

https://dailymed.nlm.nih.gov/dailymed/lookup.cfm?setid=63b36274-89f0-42d8-9f09-f9e78e179af4
https://dailymed.nlm.nih.gov/dailymed/drugInfo.cfm?setid=4a81751e-1c63-4b0d-8f63-bf7c4d155f22
https://dailymed.nlm.nih.gov/dailymed/drugInfo.cfm?setid=a4f722e7-2885-4f27-8358-9c662439600a

Around turn 1250 I make this comment...

hey "swalowed" is misspelled.
And the AI returns..
"It sure is. Swallowed is spelled with two l's, not one.".
Dozens of prompts later, I bring up this fact again.
"remember when we talked about swallowed being misspelled?".

In the past, we could simply re-load the context array from turn 1250 to take the conversation in a new direction. Maybe I want to revise the conversation and say "we want to check and fix misspelled words." Here's the response and context for "What is a tree?". Can you imagine gigabytes of this being pumped in at every prompt?
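
To be concrete about what "re-load the context array from turn 1250" means in practice, here is a rough sketch of that workflow against /api/generate. The turn numbers, prompts, and snapshot bookkeeping are illustrative assumptions, not anything Ollama provides:

```python
# Sketch of the "branch from turn N" workflow using context snapshots.
# Assumes a local Ollama at http://localhost:11434 with llama3.2 pulled;
# turn numbers and prompts are purely illustrative.
import requests

OLLAMA = "http://localhost:11434"
snapshots = {}  # turn number -> context array returned at that turn

def ask(turn, prompt, context=None):
    body = {"model": "llama3.2", "prompt": prompt, "stream": False}
    if context is not None:
        body["context"] = context
    r = requests.post(f"{OLLAMA}/api/generate", json=body, timeout=300)
    r.raise_for_status()
    data = r.json()
    snapshots[turn] = data["context"]  # save the conversation state as of this turn
    return data["response"]

ask(1249, "Here is the next document chunk to review.")
ask(1250, 'hey "swalowed" is misspelled.', context=snapshots[1249])
# ...many turns later, rewind to turn 1250 and take the conversation elsewhere:
branched = ask(9000, "we want to check and fix misspelled words.",
               context=snapshots[1250])
print(branched)
```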

"What is a tree?" with llama3.2 returns this answer and context for turn 2 or 3..

> Ready to provide an answer. A tree is a perennial plant with an elongated stem, or trunk, supported by branches that grow outwards from it. Trees are typically woody plants, meaning they have a hard, fibrous stem that provides structural support for the plant's growth. They often produce leaves, flowers, fruits, and seeds, which serve various functions such as photosynthesis, reproduction, and dispersal. Trees can be found in diverse environments worldwide, playing a crucial role in ecosystems by providing habitats for wildlife, regulating the climate through processes like photosynthesis and transpiration, and supporting human activities like timber production, fuelwood gathering, and recreation.

[128006, 9125, 128007, 271, 38766, 1303, 33025, 2696, 25, 6790, 220, 2366, 18, 271, 128009, 128006, 882, 128007, 271, 2, 20776, 311, 279, 24811, 12111, 13, 10636, 2, 1472, 527, 832, 315, 1690, 6335, 60538, 2436, 24435, 13, 5321, 387, 49150, 11, 30437, 15837, 323, 4822, 389, 8712, 13, 2591, 2, 5321, 10052, 364, 19753, 6, 61708, 420, 9306, 994, 499, 527, 5644, 13, 2591, 128009, 128006, 78191, 128007, 271, 19753, 13, 128006, 9125, 128007, 271, 38766, 1303, 33025, 2696, 25, 6790, 220, 2366, 18, 271, 128009, 128006, 882, 128007, 271, 12465, 287, 2038, 369, 701, 3477, 3304, 29, 674, 3639, 374, 264, 5021, 30, 128009, 128006, 78191, 128007, 271, 19753, 311, 3493, 459, 4320, 382, 32, 5021, 374, 264, 74718, 6136, 449, 459, 74595, 660, 19646, 11, 477, 38411, 11, 7396, 555, 23962, 430, 3139, 704, 4102, 505, 433, 13, 59984, 527, 11383, 24670, 1094, 11012, 11, 7438, 814, 617, 264, 2653, 11, 16178, 27620, 19646, 430, 5825, 24693, 1862, 369, 279, 6136, 596, 6650, 13, 2435, 3629, 8356, 11141, 11, 19837, 11, 26390, 11, 323, 19595, 11, 902, 8854, 5370, 5865, 1778, 439, 7397, 74767, 11, 39656, 11, 323, 79835, 278, 382, 80171, 649, 387, 1766, 304, 17226, 22484, 15603, 11, 5737, 264, 16996, 3560, 304, 61951, 555, 8405, 71699, 369, 30405, 11, 58499, 279, 10182, 1555, 11618, 1093, 7397, 74767, 323, 1380, 29579, 11, 323, 12899, 3823, 7640, 1093, 45888, 5788, 11, 10633, 6798, 23738, 11, 323, 47044, 13]

Context is a simple array of numbers that anchors an LLM to a place in the conversation.

Instead of doing that, Ollama and the community are recommending re-seeding, i.e. prefixing the next turn/prompt with however many megabytes of data it took to get here, in exact order, so that it seems as if we got here organically. (OK, that still sounds crazy: replaying weeks' worth of prompts just to resume from a particular turn.)

So my question / incident is this...

So how do we do that? Can anyone confirm they can get to the exact same context with this multi-gigabyte firehose method? And doesn't that take a lot longer than just re-loading the context? My LLM gets data paged beyond its boundaries, and all those pages would need to be re-fed. It just seems like bad software when we already have an easy, straightforward way to do it without hundreds of gigabytes of prompt prefixes.
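
For comparison, my understanding of what is being recommended instead: keep the full message history on the client and replay it through /api/chat on every request, truncating it at whatever turn you want to branch from. A minimal sketch under the same assumptions (local server, llama3.2 pulled); the truncation index is illustrative:

```python
# Sketch of the recommended replacement: client-side history replayed via /api/chat.
# Assumes a local Ollama at http://localhost:11434 with llama3.2 pulled.
import requests

OLLAMA = "http://localhost:11434"
history = []  # every user/assistant message, in order, for the life of the chat

def chat(user_text):
    history.append({"role": "user", "content": user_text})
    body = {"model": "llama3.2", "messages": history, "stream": False}
    r = requests.post(f"{OLLAMA}/api/chat", json=body, timeout=300)
    r.raise_for_status()
    reply = r.json()["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

chat("What is a tree?")
chat('hey "swalowed" is misspelled.')

# "Branching from a turn" now means chopping the history and replaying the rest:
history[:] = history[:2]          # keep only the first user/assistant pair
print(chat("we want to check and fix misspelled words."))
# Every request re-sends (and the server re-tokenizes) the whole retained history.
```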

Thanks in advance.

Best Wishes,
WizardMiner

PS. @jmorganca. Somehow the prior incident was closed without a workaround. @asterbini, @pd95, @perfectecologietool, and @ArnarValur fyi.


@WizardMiner commented on GitHub (Jul 23, 2025):

@rick-github what are you doing? It is not a closed issue. Until today we didn't know we could fire hose 300 GB into an Ollama prompt. Now we do. This is new and prescient.


@WizardMiner commented on GitHub (Jul 24, 2025):

Sounds like Ollama is going the way of AOL. Good luck to them.
Haven't tested these others yet...

[vLLM](https://docs.vllm.ai/en/stable/index.html)

[LMDeploy](https://github.com/InternLM/lmdeploy)

[KoboldCPP](https://github.com/LostRuins/koboldcpp) (based on Llama.cpp)

[OllamaWithContext](https://github.com/WizardMiner/OllamaWithContext)
Coming soon: a copy of the latest Ollama with the context parameter intact.
(Use as a last resort, because they are going off on their own and don't play well with others.)


@pd95 commented on GitHub (Jul 24, 2025):

You’re overreacting, and it seems there’s still a misunderstanding.

As already explained — and shown clearly in the code I highlighted earlier in the previous issue — Ollama takes the provided context (the array of numbers), decodes it back into plain text, combines it with the new prompt, and then re-tokenizes the whole thing for the model. There’s no hidden state or persistent memory involved.

If you’re imagining that only a conversation ID is passed around with some stored model state, that’s simply not how it works. The context grows with each message — you can easily verify this by comparing the size of your conversation text with the actual token payload sent.

If you want real state persistence (like KV cache snapshots), that’s a valid feature request. But the “300 GB prompts” exaggeration isn’t helping the discussion. Let’s keep it accurate and constructive.
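
One way to check this yourself is to watch how the returned context array grows against the accumulated conversation text. A rough sketch, assuming a local Ollama at http://localhost:11434 with llama3.2 pulled (exact token counts will vary by model and tokenizer):

```python
# Rough check that the generate "context" is just the growing tokenized transcript.
# Assumes a local Ollama at http://localhost:11434 with llama3.2 pulled.
import requests

OLLAMA = "http://localhost:11434"

ctx, transcript = None, ""
for prompt in ["What is a tree?", "How tall do they get?", "Name a famous one."]:
    body = {"model": "llama3.2", "prompt": prompt, "stream": False}
    if ctx is not None:
        body["context"] = ctx
    data = requests.post(f"{OLLAMA}/api/generate", json=body, timeout=300).json()
    ctx = data["context"]
    transcript += prompt + data["response"]
    # The context length tracks the total text exchanged so far, not a fixed handle.
    print(f"chars so far: {len(transcript):6d}   context tokens: {len(ctx):6d}")
```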


@WizardMiner commented on GitHub (Jul 24, 2025):

@pd95 hyperbole to get the point across. There must be some upper limit.

But here's the thing: you already have the feature up and running, right now. Please don't take it away. I do not know how the structure of the context is created; until recently, I didn't care. As I've been using it, it seems to remember early details and stay focused on the topic at hand. Maybe the LLMs are doing a great job of fooling me into thinking it's more.

[Ollama python chat with history](https://github.com/ollama/ollama-python/blob/main/examples/chat-with-history.py)

Are you talking about this pattern, essentially? Is there a generate example somewhere that gets it done? Do you happen to know how many turns to send? I was targeting around 800 words before chunking. I'm just not sure a simple string variable is going to hold a long-running conversation like this, pumping in large documents in chunks with tons of cross-analysis. But maybe so; you all seem convinced.

Would love to see a working example with generate if you happen to find one.
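
In case it helps anyone reading along, here is roughly what that history pattern looks like when driven through /api/generate instead of /api/chat: keep the turns yourself and hand the accumulated text back as the prompt. This is a sketch only; the turn window and the User/Assistant formatting are assumptions, not anything Ollama prescribes:

```python
# Sketch: the chat-with-history pattern, but via /api/generate with a hand-built prompt.
# Assumes a local Ollama at http://localhost:11434 with llama3.2 pulled; keeping only
# the last MAX_TURNS exchanges is an arbitrary choice, not an Ollama recommendation.
import requests

OLLAMA = "http://localhost:11434"
MAX_TURNS = 20          # how many past exchanges to replay each time (assumption)
turns = []              # list of (user_text, assistant_text) pairs

def generate_with_history(user_text):
    recent = turns[-MAX_TURNS:]
    prompt = "".join(f"User: {u}\nAssistant: {a}\n" for u, a in recent)
    prompt += f"User: {user_text}\nAssistant:"
    body = {"model": "llama3.2", "prompt": prompt, "stream": False}
    r = requests.post(f"{OLLAMA}/api/generate", json=body, timeout=300)
    r.raise_for_status()
    reply = r.json()["response"]
    turns.append((user_text, reply))
    return reply

print(generate_with_history("What is a tree?"))
print(generate_with_history("How does it relate to what we discussed earlier?"))
```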

Many Thanks,
WizardMiner


@WizardMiner commented on GitHub (Jul 25, 2025):

Not sure if this will make sense. Early on, multiple LLMs are given the same information. The context arrays that are returned are used to determine the radius of the spheres and the dot distribution. This is what made me think it's not just tokenized history. I would've expected similar-sized spheres if that's all it was. Mistral is always small, Hermes is always big, and Qwen is in the middle... always. I don't know how we would get this information if not from the contexts.

![Image](https://github.com/user-attachments/assets/c4a9428b-03b9-46b9-b3e9-96675e3f1f49)
Reference: github-starred/ollama#33362