[GH-ISSUE #5980] Context in /api/generate response grows too big. #3740

Closed
opened 2026-04-12 14:33:04 -05:00 by GiteaMirror · 7 comments

Originally created by @slouffka on GitHub (Jul 26, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5980

What is the issue?

I'm coding my own Chat UI for Ollama and using the context feature to implement dialog mode. Every time Ollama generates a response, the returned context (embeddings) is saved into the chat object. On the next prompt this context is passed to /api/generate, and after the response the resulting context is saved into the chat object again.

After upgrading to the latest Ollama I've noticed that generation speed has degraded considerably and that the context returned by /api/generate grows much faster than in previous versions.

It looks like the context size doubles after each generation, and soon, in a relatively small chat with 26 messages, it reaches something like 3-7 MB, which makes my UI unresponsive and even freezes the browser because it has to process such a huge amount of data (mostly for debugging, like converting JSON to a string, but this is not normal anyway). Earlier (at least in version 0.2.1, which I used) it was around 8-16 KB, which is totally fine and also fits the model's capacity.
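
For reference, a rough sketch of how the growth can be observed per turn (assuming the default local endpoint and an example model name, adapt to your setup):

```python
import json
import requests

URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint
MODEL = "llama3.1"                            # example; any pulled model works

context = []
for turn, prompt in enumerate(["Hello!", "Tell me a joke.", "Another one, please."], start=1):
    payload = {"model": MODEL, "prompt": prompt, "stream": False}
    if context:
        payload["context"] = context          # carry the previous context forward
    data = requests.post(URL, json=payload, timeout=300).json()
    context = data.get("context", [])
    # How many tokens the returned context holds, and how big it is as JSON text.
    print(f"turn {turn}: {len(context)} tokens, {len(json.dumps(context)) / 1024:.1f} KB")
```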

This is pretty hard to measure (and I don't know how to), but I've also noticed that with the latest Ollama newer models like gemma2 or llama3.1 do not adhere to the context as well as some older models like mistral did on an earlier Ollama version. This could be related to the context changes: context was broken since 0.2.2, then the response was fixed, but it looks like the fix was not completely correct.

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.3.0

GiteaMirror added the bug label 2026-04-12 14:33:04 -05:00

@wangzhezhe commented on GitHub (Jul 29, 2024):

Hi, I have a similar issue and am not sure how to solve it properly. According to the description of the context here (https://github.com/ollama/ollama/blob/main/docs/api.md), the context returned by the current conversation can be used as input for the next conversation. One issue with using context this way is that the context can expand quickly. If I only want to track the results of the last 2-3 questions as the context (instead of all history), what is the proper way to do that? Thanks!

@slouffka commented on GitHub (Jul 31, 2024):

There was no real issue in using context this way, because it's actually smaller and more efficient than sending the whole chat history as JSON, since the context is already stored as embeddings. To me it also feels more efficient to process on the Ollama side, because it does not need to convert text to embeddings but can use them as-is to provide context to the LLM.

I wrote the client code for this feature and have been using my Chat UI for 6+ months without any changes. It was a perfectly working and clean solution, but now it's broken, so I'm stuck with Ollama version 0.2.1, or I have the option to switch to an OpenAI-like chat endpoint where you pass the chat history as context. Personally I don't feel that's a lot worse, but I would not like to rewrite a working app because of a bug. The last broken version I tried was 0.3.0; maybe there has been a fix since then, I don't know. But this issue is still unsolved.

@slouffka commented on GitHub (Jul 31, 2024):

> If I only want to track the results of the last 2-3 questions as the context (instead of all history), what is the proper way to do that? Thanks!

For this, your option is to use the /api/chat endpoint and pass only the messages you want. This is more flexible than passing embeddings for a whole chat.

https://github.com/ollama/ollama/blob/main/docs/api.md#chat-request-with-history
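
Something like this, as a rough sketch (Python; the model name and localhost endpoint are just examples, adapt to your setup):

```python
import requests

CHAT_URL = "http://localhost:11434/api/chat"  # default local Ollama endpoint
MODEL = "llama3.1"   # example model name; use whatever you have pulled
KEEP_LAST = 6        # roughly the last 3 question/answer pairs

history = []  # full chat history kept on the client side

def ask(question):
    """Send only the most recent messages to /api/chat as context."""
    history.append({"role": "user", "content": question})
    payload = {"model": MODEL, "messages": history[-KEEP_LAST:], "stream": False}
    resp = requests.post(CHAT_URL, json=payload, timeout=300)
    resp.raise_for_status()
    reply = resp.json()["message"]  # {"role": "assistant", "content": "..."}
    history.append(reply)
    return reply["content"]

print(ask("What is the capital of France?"))
print(ask("How many people live there?"))  # answered using only the recent messages
```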

@wangzhezhe commented on GitHub (Aug 1, 2024):

Yeah, it seems that I should use /api/chat instead of /api/generate. Thanks for the information, @slouffka!

@mdhuzaifapatel commented on GitHub (Nov 20, 2024):

> There was no real issue in using context this way, because it's actually smaller and more efficient than sending the whole chat history as JSON, since the context is already stored as embeddings. To me it also feels more efficient to process on the Ollama side, because it does not need to convert text to embeddings but can use them as-is to provide context to the LLM.
>
> I wrote the client code for this feature and have been using my Chat UI for 6+ months without any changes. It was a perfectly working and clean solution, but now it's broken, so I'm stuck with Ollama version 0.2.1, or I have the option to switch to an OpenAI-like chat endpoint where you pass the chat history as context. Personally I don't feel that's a lot worse, but I would not like to rewrite a working app because of a bug. The last broken version I tried was 0.3.0; maybe there has been a fix since then, I don't know. But this issue is still unsolved.

Hi, can you please tell me how to use the context of previous responses and send it as context with the current request? I'm unable to achieve it. Every time I hit the Ollama API it looks like it is treating the request as a new session. I'm not sure how, so can you please let me know how to implement it?
Please provide a code example if possible, as you have already worked on it. Thanks in advance.

@slouffka commented on GitHub (Nov 20, 2024):

> Hi, can you please tell me how to use the context of previous responses and send it as context with the current request? I'm unable to achieve it. Every time I hit the Ollama API it looks like it is treating the request as a new session. I'm not sure how, so can you please let me know how to implement it? Please provide a code example if possible, as you have already worked on it. Thanks in advance.

I used this doc as a reference to implement chat context:

https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-completion

You should explicitly take the context property from the previous response and set it as the context property of your new request. Basically, that's it.
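
A minimal sketch of that flow (Python; the model name and local endpoint are just examples, not something specific to my setup):

```python
import requests

GENERATE_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint
MODEL = "llama3.1"  # example model name; use whatever you have pulled

def generate(prompt, context=None):
    """Run one /api/generate call, carrying the previous context if given."""
    payload = {"model": MODEL, "prompt": prompt, "stream": False}
    if context:
        payload["context"] = context  # the context array from the previous response
    resp = requests.post(GENERATE_URL, json=payload, timeout=300)
    resp.raise_for_status()
    data = resp.json()
    return data["response"], data.get("context", [])

# First turn: no prior context.
answer, ctx = generate("My name is Alice. Please remember that.")
# Next turn: pass the context returned by the previous response.
answer, ctx = generate("What is my name?", context=ctx)
print(answer)
```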

Another way is to use the /api/chat endpoint I mentioned here:

https://github.com/ollama/ollama/issues/5980#issuecomment-2261142340

@mdhuzaifapatel commented on GitHub (Nov 21, 2024):

Thanks!
