[GH-ISSUE #4967] API Silently Truncates Conversation #28898

Open
opened 2026-04-22 07:27:01 -05:00 by GiteaMirror · 6 comments

Originally created by @flu0r1ne on GitHub (Jun 10, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4967

What is the issue?

Problem Description

The chat API currently truncates conversations without warning when the context limit is exceeded. This behavior can cause significant problems in downstream applications. For instance, if a document is provided for summarization, silently removing part of the document may lead to an incomplete or inaccurate summary. Similarly, for other tasks, critical instructions can be forgotten if they do not appear in the initial prompt.

Desired Behavior

Ideally, the API should reject requests that do not fit within the context window with a clear error message. For example, the (official) OpenAI API provides the following error if the context limit is exceeded:

1 validation error for request body → content ensure this value has at most 32768 characters

This enables downstream applications to notify users about the issue, allowing them to decide whether to extend the context, truncate the document, or accept a response based on the truncated prompt.

Current Behavior and Documentation

If this is the intended behavior for the API, it is currently undocumented and can be considered user-unfriendly. It appears this behavior might be inherited from llama.cpp.

Example

The issue can be demonstrated with the following example:

```bash
CONTENT=$(python -c 'print("In language modeling, the context window refers to a predefined number of tokens that the model takes into account while predicting the next token within a text sequence. " * 68)')
curl http://localhost:11434/api/chat -d "{ \"model\": \"gemma:2b\", \"messages\": [ { \"role\": \"user\", \"content\": \"$CONTENT\" } ]}"
```

With 67 repetitions of the sequence, the prompt contains 2027 tokens, as reported by prompt_eval_count. Adding a 68th repetition should push the count higher, yet prompt_eval_count drops to 1041 tokens: the prompt has been silently truncated to fit the default context window.
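One workaround (not a fix) is to request a non-streaming response and compare prompt_eval_count against a rough client-side estimate; a large gap suggests silent truncation. A minimal sketch, reusing the $CONTENT variable above and assuming jq is installed and that a crude ~4 characters-per-token estimate is close enough for a warning:

```bash
# Sketch: flag likely truncation by comparing the reported prompt token
# count against a rough character-based estimate. Assumes jq is available.
REPLY=$(curl -s http://localhost:11434/api/chat -d "{
  \"model\": \"gemma:2b\",
  \"stream\": false,
  \"messages\": [ { \"role\": \"user\", \"content\": \"$CONTENT\" } ]
}")

EVAL=$(echo "$REPLY" | jq '.prompt_eval_count')
EST=$(( ${#CONTENT} / 4 ))   # very rough: ~4 characters per token

if [ "$EVAL" -lt $(( EST / 2 )) ]; then
  echo "warning: prompt_eval_count=$EVAL is far below estimate $EST; prompt was likely truncated" >&2
fi
```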

Similar Issues

#299 - Adjusted the truncation behavior to spare the prompt formatting.
#2653 - Documents a similar issue with the CLI

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.1.41

GiteaMirror added the bug label 2026-04-22 07:27:01 -05:00

@BradKML commented on GitHub (Jul 15, 2024):

Since both llama.cpp and Ollama can handle context window adjustment, would the solution be to have dynamic context sizes that are significantly smaller (but more adjustable) than the "hard limit" based on VRAM?

- https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md#context-management
- https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-specify-the-context-window-size

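For reference, the context window can already be raised per request through the options field, as described in the FAQ linked above; this only changes how large the window is, it does not stop silent truncation once the (larger) limit is exceeded. A minimal sketch:

```bash
# Raise num_ctx for this request only; truncation still happens silently
# once the prompt exceeds the new limit.
curl http://localhost:11434/api/chat -d '{
  "model": "gemma:2b",
  "messages": [ { "role": "user", "content": "Hello" } ],
  "options": { "num_ctx": 4096 }
}'
```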

@flu0r1ne commented on GitHub (Jul 15, 2024):

There are explicit or implicit context limits for all LLMs. Some LLMs use positional embeddings that prevent extending the context beyond the training limit. Most recent LLMs employ flexible positional embeddings, but performance degrades with longer contexts. Constraints on context are always necessary.

The problem arises with llama.cpp, and by extension ollama, which truncates conversations when the context limit is exceeded. While I haven't delved deeply into this, it seems ollama addresses this by prepending the initial message to the latter half of the messages to form the new context (e.g., something like messages = [initial_message, ...messages[n/2:]]). This approach is suboptimal for applications requiring precise context management.

In chat contexts, the initial message often includes system details like date, time, and user information supplied by the client. Subsequent messages are supplied by the user. When the context limit is reached, messages are removed starting with the second. If these contain information critical to the integrity of the conversation, the LLM will malfunction.

The current API lacks sufficient control for intelligent context handling by clients. A straightforward fix would involve triggering an error when the context is exceeded. This is akin to OpenAI's approach, enabling clients to selectively remove messages and retry.

Enhancing the API further could involve adding routes for token counting. Various models use different tokenizers (e.g., Mistral uses SentencePiece, OpenAI uses tiktoken, etc.), and models also have distinct model files and prompt templates. It would be beneficial for ollama to manage token counting so that clients do not need to maintain a tokenizer for each model, which could become cumbersome. While this enhancement would demand more resources for ollama to implement, it would enable well-behaved clients to ensure requests fit within the context, eliminating the need for retries.
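To make the proposal concrete, a token-counting route might look something like the sketch below. No such route exists today; the path and response shape are purely hypothetical illustrations of the idea.

```bash
# Hypothetical only: ollama has no such route today. The idea is that a
# client could ask the server (which already ships the model's tokenizer)
# how many tokens a message consumes before sending the real chat request.
curl http://localhost:11434/api/tokenize -d '{
  "model": "gemma:2b",
  "content": "In language modeling, the context window refers to ..."
}'
# Hypothetical response: {"token_count": 31}
```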


@BradKML commented on GitHub (Jul 16, 2024):

@flu0r1ne here are two questions:
Q1: Are there ways to "hack" the token context to multiply in size (maybe RoPE and CoPE)?
Q2: What about PrunaAI and similar projects that have an exceptionally large context window and possibly avoid the "lost in the middle" problem?


@flu0r1ne commented on GitHub (Jul 16, 2024):

I believe there are ways to increase the size of the context for most ollama models at the expense of compute, although I don't think that is relevant to this issue. If you want to discuss this further, please email me. You can find my email on the website linked from my profile.


@dmatora commented on GitHub (Sep 19, 2024):

Is there still no way to throw an error when the context is exceeded?
Is there maybe a fork that does this?


@bioshazard commented on GitHub (Feb 5, 2026):

I only realized this was happening by accident. Are there any plans to add a simple "OLLAMA_TRUNCATE=0" setting to error out rather than just warn in the logs? Or, if I submit a PR for it, would it get accepted?

Looks like you can maybe set e.g. provider.ollama.models.$model.options in OpenCode at least:

```
"options": {
  "extraBody": {
    "truncate": false
  }
}
```
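If the server actually honors a top-level truncate field on chat requests (unverified; treat the field as an assumption forwarded by the extraBody above), the equivalent raw request would be something like:

```bash
# Assumes /api/chat honors a top-level "truncate" field, which is what the
# OpenCode extraBody above would forward; this is unverified.
curl http://localhost:11434/api/chat -d '{
  "model": "gemma:2b",
  "messages": [ { "role": "user", "content": "Hello" } ],
  "truncate": false
}'
```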
Reference: github-starred/ollama#28898