[GH-ISSUE #11885] /api/chat over-truncates history with large context (48k): trims almost everything instead of just overflow #33649

Closed
opened 2026-04-22 16:32:38 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @Tarmenale2 on GitHub (Aug 13, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11885

What is the issue?

With the official qwen3:instruct (48k context), /api/chat is over-truncating conversation history. Adding a single message can leave only a tiny fraction of the available context, even though there is plenty of budget left.


Example Data

Before client-side filtering (no trimming):

Prompt had 7460 tokens; Generated 136 tokens; Request length: 557406

After client-side filtering (keep newest messages until cumulative JSON length ≤ ~100k):

Prompt had 33628 tokens; Generated 136 tokens; Request length: 105601

This shows that without my filter, a larger request produces fewer prompt tokens (≈7.4k) than a smaller request after filtering (≈33.6k tokens), which clearly fits in the 48k context budget.


Filtering logic I used

Call site (C#):

request.Messages = RemoveOldestMessages(request.Messages, 100000);

Implementation:

// Requires: using System.Collections.Generic; using System.Linq;
private static List<OllamaChatMessage> RemoveOldestMessages(List<OllamaChatMessage> messages, int maxTextLength)
{
    return messages
        .AsEnumerable()
        .Reverse() // walk from newest to oldest
        .Aggregate(
            (list: new List<OllamaChatMessage>(), total: 0),
            (acc, msg) =>
            {
                int len = msg.ToJsonString().Length;
                if (acc.total + len > maxTextLength)
                    return acc; // over budget: don't add this message
                acc.list.Add(msg);
                acc.total += len;
                return acc;
            }
        ).list
        .AsEnumerable()
        .Reverse() // restore chronological order
        .ToList();
}

The method simply takes the newest messages first, accumulates their serialized JSON length until the limit is reached, then sends only that reduced list to /api/chat.
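
For completeness, a minimal sketch (not part of the original report) of how the reduced list ends up in an /api/chat request. It assumes the filter above is in scope and that OllamaChatMessage exposes Role and Content properties; the base URL, model tag, and num_ctx value are only examples:

using System.Linq;
using System.Net.Http;
using System.Net.Http.Json;

// messages: the full chat history (List<OllamaChatMessage>)
var trimmed = RemoveOldestMessages(messages, 100000);

var payload = new
{
    model = "qwen3:instruct",          // tag as named in this report
    messages = trimmed.Select(m => new { role = m.Role, content = m.Content }),
    stream = false,
    options = new { num_ctx = 49152 }  // 48k context window
};

using var http = new HttpClient();
var response = await http.PostAsJsonAsync("http://localhost:11434/api/chat", payload);
response.EnsureSuccessStatusCode();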


Expected

Truncate only as many oldest messages as necessary to fit within num_ctx (minus a reasonable reserve for tools/template/output).
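
For illustration, the budgeting this implies; the overhead and reserve numbers below are examples, not values Ollama actually uses:

// Illustrative arithmetic only; numbers are placeholders, not Ollama internals.
int numCtx           = 49152; // 48k context window (num_ctx)
int templateAndTools = 1500;  // estimated tokens for system prompt, template, tool schemas
int outputReserve    = 1024;  // room kept for the model's answer (num_predict)
int messageBudget    = numCtx - templateAndTools - outputReserve; // ≈ 46.6k tokens for history
// Only drop as many of the oldest messages as needed to fit messageBudget.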

Actual

Almost all history is removed, leaving far fewer tokens than the context limit allows.

Model used

Qwen3-30B-A3B-2507-instruct:Q4_K_M

Relevant log output


OS

Windows

GPU

Nvidia

CPU

No response

Ollama version

0.11.4

GiteaMirror added the bug label 2026-04-22 16:32:38 -05:00
Author
Owner

@rick-github commented on GitHub (Aug 13, 2025):

Ollama already does this style of message trimming (keeping the newest), with the difference that it preserves the system message and does the length calculations on tokens after template processing. If you could provide data, the actual effect of the message trimming could be examined to gain a better understanding.

Author
Owner

@Tarmenale2 commented on GitHub (Aug 13, 2025):

Meh, it seems that with simple question/answer chatting I cannot reproduce this issue (a ~600k Ollama request still works fine). I was able to reproduce it while using tools extensively (long output), but in that case I don't want to share the content, as it would expose code, build logs, etc.

If you can, please tell me which source files are responsible for 'trimming' content. I will try to analyze them and propose a fix. It may be easier to find small reproduction steps just by reviewing that piece of code.

Author
Owner

@rick-github commented on GitHub (Aug 13, 2025):

Prompt management is done in two places.

The first is in the server in chatPrompt (https://github.com/ollama/ollama/blob/bb71654ebe846d97df306b163c086167239431e5/server/prompt.go#L22), where a process similar to what you described takes place. The Ollama server tries to fit as many of the user/assistant messages into the context buffer as it can, while maintaining system messages and accounting for token space for images (if any).

The second is in the runner in NewSequence (https://github.com/ollama/ollama/blob/bb71654ebe846d97df306b163c086167239431e5/runner/llamarunner/runner.go#L100; or, for the new engine, https://github.com/ollama/ollama/blob/bb71654ebe846d97df306b163c086167239431e5/runner/ollamarunner/runner.go#L105). Since the first step may not have been able to remove enough from the message list (e.g., the final message itself is larger than the context buffer), the runner does brute surgery on the tokenized buffer to make it fit.
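
For illustration only, a rough sketch of those two stages in the same C# style as the filter above (this is not the actual Go implementation; countTokens and tokenize are hypothetical callbacks, OllamaChatMessage.Role is an assumed property, and dropping the oldest tokens in stage 2 is an assumption about the runner's strategy):

// Requires: using System; using System.Collections.Generic; using System.Linq;
static (List<OllamaChatMessage> kept, List<int> tokens) FitPrompt(
    List<OllamaChatMessage> messages,
    Func<OllamaChatMessage, int> countTokens,          // hypothetical per-message token cost
    Func<List<OllamaChatMessage>, List<int>> tokenize, // hypothetical template + tokenizer step
    int numCtx)
{
    // Stage 1 (server, chatPrompt-style): system messages are always kept,
    // then user/assistant messages are added newest-first while they still fit.
    var system = messages.Where(m => m.Role == "system").ToList();
    int used = system.Sum(countTokens);
    var kept = new List<OllamaChatMessage>();
    for (int i = messages.Count - 1; i >= 0; i--)
    {
        var msg = messages[i];
        if (msg.Role == "system") continue;
        int cost = countTokens(msg);
        if (used + cost > numCtx) break;   // this message and everything older is dropped
        kept.Insert(0, msg);               // keep chronological order
        used += cost;
    }
    kept.InsertRange(0, system);           // all system prompts stay in front

    // Stage 2 (runner, NewSequence-style): if the templated prompt still overflows,
    // the token buffer itself is cut down to fit the context window.
    var tokens = tokenize(kept);
    if (tokens.Count > numCtx)
        tokens = tokens.Skip(tokens.Count - numCtx).ToList(); // assumption: keep the newest tokens
    return (kept, tokens);
}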

Author
Owner

@Tarmenale2 commented on GitHub (Aug 13, 2025):

Image: https://github.com/user-attachments/assets/a1ec9338-ca7a-4bb9-8c00-40b3f232b823

Issue
Ollama’s current chatPrompt always keeps all system prompts before any user/assistant messages. In long chats, this can fill most of the context and cause almost the entire conversation to be dropped — especially if one system prompt is very large.

Side Effect
The model ends up “remembering” old system instructions but not the actual conversation. This is unintuitive, as users expect recent messages to be prioritized.

Result
By trimming the input from ~500 KB to ~100 KB — which dropped most old system prompts — the model processed more of the latest conversation, producing more correct behavior.

My thoughts
This effect can be even more severe for small context sizes (which most users run), making it highly noticeable. It’s worth considering whether this is a bug or a feature, and what approach is more future-proof for LLM chat handling. If there were a separate REST endpoint to measure prompt length before sending, users could intentionally trim inputs — instead of relying on Ollama to make opaque truncation decisions they may not even be aware of.
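
Until such an endpoint exists, a client can only approximate. A rough stopgap sketch (not from the thread), using the common ~4-characters-per-token rule of thumb; it ignores the chat template and is no substitute for the model's real tokenizer:

// Requires: using System.Collections.Generic; using System.Linq;
static int EstimatePromptTokens(IEnumerable<OllamaChatMessage> messages)
{
    // ~4 characters per token is only a rule of thumb for English text;
    // it ignores template, tool-schema, and JSON overhead.
    int chars = messages.Sum(m => m.ToJsonString().Length);
    return chars / 4;
}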

Final words
I guess the ticket can be closed if you don't see a need for improvements here, but maybe it is at least worth discussing this case with other team members to conclude which behavior is actually more correct.

Author
Owner

@rick-github commented on GitHub (Aug 18, 2025):

> If there were a separate REST endpoint to measure prompt length before sending, users could intentionally trim inputs

https://github.com/ollama/ollama/pull/8106
