[GH-ISSUE #10974] num_ctx in practice is proven to be unstable and useless #53744

Closed
opened 2026-04-29 04:38:59 -05:00 by GiteaMirror · 21 comments

Originally created by @FieldMouse-AI on GitHub (Jun 4, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10974

What is the issue?

The Problem

I created a model based on llama3.2:3b using PARAMETER num_ctx 32768.

My environment is as follows:

  • OS: Ubuntu Linux 22.04 LTS
  • CPU: AMD Ryzen 7 8-core/16-thread
  • RAM: 32GB
  • GPU: None
  • Ollama version: v0.9.0 running inside of Docker (no memory/resource limits)

When I set PARAMETER num_ctx 32768, it is expected that 32768 is the context window.

But, in actuality, Ollama randomly truncates the actual num_ctx down to random small values from 3000 to 5000 tokens!

My actual payloads come in at around 18000 tokens, which is far below the PARAMETER num_ctx 32768 that I had set.

I have tried reducing the payload I send to Ollama, but discovering that, even across reboots, Ollama persists in issuing a random 3000 to 5000 token num_ctx has been problematic.

What is the result of this problem?

The result of this is that, in spite of my meticulous efforts at controlling the inputs to Ollama, I have no way of knowing how much of the input I feed to Ollama is actually used to create a response.

And this level of loss can be as high as 90%.

This renders Ollama unpredictable and unreliable.

I am sorry to have to say that, but when you read what I have posted and review the log, it is the only conclusion one can draw.

My Desire...

Make it so that setting PARAMETER num_ctx 32768 creates the 32768-token buffer as requested, OR fails solidly with an error when it cannot.

The algorithm for doing this should be straightforward and not overly complex, for the sake of maintainability in the Ollama system.

Relevant log output

time=2025-06-04T22:32:16.937Z level=DEBUG source=ggml.go:155 msg="key not found" key=general.alignment default=32
time=2025-06-04T22:32:16.937Z level=DEBUG source=sched.go:615 msg="evaluating already loaded" model=/app/ollama/models/blobs/sha256-74701a8c35f6c8d9a4b91f3f3497643001d63e0c7a84e085bed452548fa88d45
time=2025-06-04T22:32:16.960Z level=DEBUG source=ggml.go:155 msg="key not found" key=general.alignment default=32
time=2025-06-04T22:32:16.961Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=15 prompt=15 used=0 remaining=15
[GIN] 2025/06/04 - 22:32:17 | 200 |  151.168281ms |      172.18.0.4 | POST     "/api/embed"
time=2025-06-04T22:32:17.063Z level=DEBUG source=sched.go:434 msg="context for request finished" runner.name=registry.ollama.ai/library/llama3.2:1b runner.inference=cpu runner.devices=1 runner.size="1.7 GiB" runner.vram="0 B" runner.parallel=1 runner.pid=79 runner.model=/app/ollama/models/blobs/sha256-74701a8c35f6c8d9a4b91f3f3497643001d63e0c7a84e085bed452548fa88d45 runner.num_ctx=4096
time=2025-06-04T22:32:17.063Z level=DEBUG source=sched.go:343 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/llama3.2:1b runner.inference=cpu runner.devices=1 runner.size="1.7 GiB" runner.vram="0 B" runner.parallel=1 runner.pid=79 runner.model=/app/ollama/models/blobs/sha256-74701a8c35f6c8d9a4b91f3f3497643001d63e0c7a84e085bed452548fa88d45 runner.num_ctx=4096 duration=2562047h47m16.854775807s
time=2025-06-04T22:32:17.063Z level=DEBUG source=sched.go:361 msg="after processing request finished event" runner.name=registry.ollama.ai/library/llama3.2:1b runner.inference=cpu runner.devices=1 runner.size="1.7 GiB" runner.vram="0 B" runner.parallel=1 runner.pid=79 runner.model=/app/ollama/models/blobs/sha256-74701a8c35f6c8d9a4b91f3f3497643001d63e0c7a84e085bed452548fa88d45 runner.num_ctx=4096 refCount=0
time=2025-06-04T22:32:17.121Z level=DEBUG source=ggml.go:155 msg="key not found" key=general.alignment default=32
time=2025-06-04T22:32:17.121Z level=DEBUG source=sched.go:615 msg="evaluating already loaded" model=/app/ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff
time=2025-06-04T22:32:17.138Z level=DEBUG source=server.go:729 msg="completion request" images=0 prompt=18569 format=""
time=2025-06-04T22:32:17.156Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=3236 prompt=4014 used=885 remaining=3129
[GIN] 2025/06/04 - 22:33:28 | 200 |         1m11s |      172.18.0.4 | POST     "/api/chat"
time=2025-06-04T22:33:28.880Z level=DEBUG source=sched.go:434 msg="context for request finished" runner.name=registry.ollama.ai/library/emily_guardian:latest runner.inference=cpu runner.devices=1 runner.size="7.1 GiB" runner.vram="0 B" runner.parallel=1 runner.pid=99 runner.model=/app/ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff runner.num_ctx=32768
time=2025-06-04T22:33:28.880Z level=DEBUG source=sched.go:343 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/emily_guardian:latest runner.inference=cpu runner.devices=1 runner.size="7.1 GiB" runner.vram="0 B" runner.parallel=1 runner.pid=99 runner.model=/app/ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff runner.num_ctx=32768 duration=2562047h47m16.854775807s
time=2025-06-04T22:33:28.880Z level=DEBUG source=sched.go:361 msg="after processing request finished event" runner.name=registry.ollama.ai/library/emily_guardian:latest runner.inference=cpu runner.devices=1 runner.size="7.1 GiB" runner.vram="0 B" runner.parallel=1 runner.pid=99 runner.model=/app/ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff runner.num_ctx=32768 refCount=0

OS

Docker

GPU

No response

CPU

AMD

Ollama version

v0.9.0

GiteaMirror added the bug label 2026-04-29 04:38:59 -05:00

@rick-github commented on GitHub (Jun 4, 2025):

But, in actuality, Ollama randomly truncates the actual num_ctx down to random small values from 3000 to 5000 tokens!

It does not.

runner.model=/app/ollama/models/blobs/sha256-74701a8c35f6c8d9a4b91f3f3497643001d63e0c7a84e085bed452548fa88d45 runner.num_ctx=4096

This model, llama3.2:1b-instruct-q8_0, was loaded with a context of 4096 tokens.

runner.model=/app/ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff runner.num_ctx=32768

This model, llama3.2:3b-instruct-q4_K_M, was loaded with a context of 32768 tokens.

It could be that your client is setting num_ctx in the API call, overriding the value you have configured in the Modelfile. You can either configure your client not to do that, or have the client use the OpenAI compatibility endpoint which doesn't support setting the context length.

If you can provide Modelfiles and a full log (perhaps also increasing the log detail by setting OLLAMA_DEBUG=1 in the server environment) then the source of the context size changes may be determined.
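
As an illustration of the kind of override being described, here is a minimal sketch of two /api/chat requests (the endpoint and the options field are from the Ollama API documentation; the model name is simply the one used later in this thread):

# With no options in the request body, the num_ctx from the Modelfile applies
curl http://localhost:11434/api/chat -d '{
  "model": "emily_guardian",
  "messages": [{"role": "user", "content": "hello"}]
}'

# A client that sends options.num_ctx overrides the Modelfile value for that load
curl http://localhost:11434/api/chat -d '{
  "model": "emily_guardian",
  "messages": [{"role": "user", "content": "hello"}],
  "options": {"num_ctx": 4096}
}'

If the override comes from third-party tooling, pointing that tooling at the OpenAI-compatible /v1/chat/completions endpoint removes its ability to change the context size, as noted above.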


@FieldMouse-AI commented on GitHub (Jun 5, 2025):

But, in actuality, Ollama randomly truncates the actual num_ctx down to random small values from 3000 to 5000 tokens!

It does not.

runner.model=/app/ollama/models/blobs/sha256-74701a8c35f6c8d9a4b91f3f3497643001d63e0c7a84e085bed452548fa88d45 runner.num_ctx=4096

This model, llama3.2:1b-instruct-q8_0, was loaded with a context of 4096 tokens.

runner.model=/app/ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff runner.num_ctx=32768

This model, llama3.2:3b-instruct-q4_K_M, was loaded with a context of 32768 tokens.

It could be that your client is setting num_ctx in the API call, overriding the value you have configured in the Modelfile. You can either configure your client not to do that, or have the client use the OpenAI compatibility endpoint which doesn't support setting the context length.

If you can provide Modelfiles and a full log (perhaps also increasing the log detail by setting OLLAMA_DEBUG=1 in the server environment) then the source of the context size changes may be determined.

Thanks for your response.

However, I set no parameters at the API level.

I depend on my model for all parameter values.

The only thing that I pass to the API is the message.

It must be assumed that I am always sending only data and that all parameters are set in my model's Modelfile.

🤗 Please give me a moment! I will add more information here! 🤗

Modelfile:

FROM llama3.2:1b
PARAMETER temperature 0.2
PARAMETER repeat_penalty 1.1
PARAMETER top_p 0.5
PARAMETER num_ctx 32768
PARAMETER num_predict 10000

🤗 Also, I am running with OLLAMA_DEBUG=1... please wait for a bit as I gather the log, too! 🤗
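
For reference, a sketch of the corresponding commands, assuming the standard Docker setup from the Ollama docs (the volume and port mappings are the documented defaults; adjust for your environment):

# build the custom model from the Modelfile shown above
ollama create emily_guardian -f ./Modelfile

# restart the server with debug logging enabled
docker run -d -e OLLAMA_DEBUG=1 -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama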


@FieldMouse-AI commented on GitHub (Jun 5, 2025):

I depend on my model for all parameter values.
The only thing that I pass to the API is the message.
It must be assumed that I am always sending only data and that all parameters are set in my model's Modelfile.

@rick-github ,
🤗 Please give me a moment! I will add more information here! 🤗

Modelfile:

FROM llama3.2:1b
PARAMETER temperature 0.2
PARAMETER repeat_penalty 1.1
PARAMETER top_p 0.5
PARAMETER num_ctx 32768
PARAMETER num_predict 10000

🤗 Also, I am running with OLLAMA_DEBUG=1...
🤗 I attached the log!

Thanks! 🤗

ollama.log


@FieldMouse-AI commented on GitHub (Jun 5, 2025):

@rick-github , is there any other information you need from me that could help?

Just in case, I am adding the ollama show of the model being used:

ollama-user@cc885ab8c308:/mywork$ ollama show emily_guardian   
  Model
    architecture        llama     
    parameters          3.2B      
    context length      131072    
    embedding length    3072      
    quantization        Q4_K_M    

  Capabilities
    completion    
    tools         

  Parameters
    num_ctx           32768                    
    num_predict       10000                    
    repeat_penalty    1.1                      
    stop              "<|start_header_id|>"    
    stop              "<|end_header_id|>"      
    stop              "<|eot_id|>"             
    temperature       0.2                      
    top_p             0.5                      

Thanks! 🤗
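
A quick way to cross-check the Modelfile parameter against what the server actually allocates is to search the debug log for the runner's context size; a small sketch, assuming the server log has been saved as ollama.log as in the attachment above:

# count how many times each context size was used when a runner was loaded
grep -o 'runner.num_ctx=[0-9]*' ollama.log | sort | uniq -c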


@rick-github commented on GitHub (Jun 5, 2025):

Model llama3.2:1b-instruct-q8_0 was loaded at 19:46:51 with a context size of 4096 to do an embedding request.

Model emily_guardian (base llama3.2:3b-instruct-q4_K_M) was loaded at 19:46:53 with a context of 32768 to do a chat request.

The models alternated answering requests with their assigned context size until the log ends at 01:41:05. There are no log entries indicating the prompt was reduced or shifted due to reaching the limit of the size of the context.

Nothing here indicates buffer truncation.


@FieldMouse-AI commented on GitHub (Jun 5, 2025):

Model llama3.2:1b-instruct-q8_0 was loaded at 19:46:51 with a context size of 4096 to do an embedding request.

Model emily_guardian (base llama3.2:3b-instruct-q4_K_M) was loaded at 19:46:53 with a context of 32768 to do a chat request.

The models alternated answering requests with their assigned context size until the log ends at 01:41:05. There are no log entries indicating the prompt was reduced or shifted due to reaching the limit of the size of the context.

Nothing here indicates buffer truncation.

Wow! Can you give me a second? I want to get a copy of the log that made me feel the context was getting truncated. It is possible that I am misreading something important.


@FieldMouse-AI commented on GitHub (Jun 5, 2025):

Model llama3.2:1b-instruct-q8_0 was loaded at 19:46:51 with a context size of 4096 to do an embedding request.
Model emily_guardian (base llama3.2:3b-instruct-q4_K_M) was loaded at 19:46:53 with a context of 32768 to do a chat request.
The models alternated answering requests with their assigned context size until the log ends at 01:41:05. There are no log entries indicating the prompt was reduced or shifted due to reaching the limit of the size of the context.
Nothing here indicates buffer truncation.

Wow! Can you give me a second? I want to get a copy of the log that made me feel the context was getting truncated. It is possible that I am misreading something important.

OK, @rick-github , here is a clip from the end of the log that I sent you that made me feel that things were getting truncated:

time=2025-06-05T01:40:36.359Z level=DEBUG source=server.go:729 msg="completion request" images=0 prompt=8591 format=""
time=2025-06-05T01:40:36.367Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=4130 prompt=1836 used=885 remaining=951
[GIN] 2025/06/05 - 01:41:05 | 200 | 29.583273506s |      172.18.0.4 | POST     "/api/chat"

It is that second line where it says that cache=4130. But my input should have been something closer to 9000.

Right now I feel like I've been misreading the log.

For example, I sent about 9000 tokens, but the cache value is only cache=4130 which is less than the tokens that I sent.

Is it the case that I am properly fitting my prompt into the 32768 context and I have been misreading the log all along? If so, could you show me how, please? 🤔


@rick-github commented on GitHub (Jun 5, 2025):

time=2025-06-05T01:40:36.359Z level=DEBUG source=server.go:729 msg="completion request" images=0 prompt=8591 format=""

This line shows the length of the prompt, in bytes: 8591, i.e. close to 9000.

time=2025-06-05T01:40:36.367Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=4130 prompt=1836 used=885 remaining=951

This line shows the processing of the input in tokens. The 8591 bytes in the prompt are translated into 1836 tokens. The cache slot selected for this inference has 4130 tokens from the previous inference. ollama will use the first 885 tokens of that slot and the last 951 tokens from the prompt to generate new tokens.

Your prompt is fitting into the 32768 tokens allocated for the context window.
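
If it helps to confirm the bytes-versus-tokens distinction from the client side, the /api/chat response also reports token counts; a sketch using curl and jq (prompt_eval_count is the number of prompt tokens the server evaluated for this request, so tokens already in the cache may not be included):

curl -s http://localhost:11434/api/chat -d '{
  "model": "emily_guardian",
  "messages": [{"role": "user", "content": "hello"}],
  "stream": false
}' | jq '{prompt_tokens: .prompt_eval_count, generated_tokens: .eval_count}'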


@FieldMouse-AI commented on GitHub (Jun 5, 2025):

time=2025-06-05T01:40:36.359Z level=DEBUG source=server.go:729 msg="completion request" images=0 prompt=8591 format=""

This line shows the length of the prompt, in bytes: 8591, i.e. close to 9000.

time=2025-06-05T01:40:36.367Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=4130 prompt=1836 used=885 remaining=951

This line shows the processing of the input in tokens. The 8591 bytes in the prompt are translated into 1836 tokens. The cache slot selected for this inference has 4130 tokens from the previous inference. ollama will use the first 885 tokens of that slot and the last 951 tokens from the prompt to generate new tokens.

Your prompt is fitting into the 32768 tokens allocated for the context window.

So prompt=8591 is bytes not tokens.

And later, the 8591 bytes get converted into 1836 tokens.

So my sense that something got lost arose because I thought the 8591 characters were tokens, and I missed the conversion to 1836 tokens.

Then the next thing that seems to have confused me is that it appears that Ollama selects something called a cache slot, which apparently is a chunk of token space allocated from the total 32768 token space? Am I following correctly now?

So, there was never a shortage?


@rick-github commented on GitHub (Jun 5, 2025):

So, there was never a shortage?

Correct.


@FieldMouse-AI commented on GitHub (Jun 5, 2025):

So, there was never a shortage?

Correct.

But, when I go higher up in the logs, I run into the following:

time=2025-06-04T22:32:17.063Z level=DEBUG source=sched.go:361 msg="after processing request finished event" runner.name=registry.ollama.ai/library/llama3.2:1b runner.inference=cpu runner.devices=1 runner.size="1.7 GiB" runner.vram="0 B" runner.parallel=1 runner.pid=79 runner.model=/app/ollama/models/blobs/sha256-74701a8c35f6c8d9a4b91f3f3497643001d63e0c7a84e085bed452548fa88d45 runner.num_ctx=4096 refCount=0
time=2025-06-04T22:32:17.121Z level=DEBUG source=ggml.go:155 msg="key not found" key=general.alignment default=32
time=2025-06-04T22:32:17.121Z level=DEBUG source=sched.go:615 msg="evaluating already loaded" model=/app/ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff
time=2025-06-04T22:32:17.138Z level=DEBUG source=server.go:729 msg="completion request" images=0 prompt=18569 format=""
time=2025-06-04T22:32:17.156Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=3236 prompt=4014 used=885 remaining=3129
[GIN] 2025/06/04 - 22:33:28 | 200 |         1m11s |      172.18.0.4 | POST     "/api/chat"
  • In this case, I sent 18569 characters.
  • This gets converted into 4014 tokens (prompt=4014).
  • Ollama then allocated a total context buffer of 3236 tokens (cache=3236).

In this case the prompt (4014 tokens) was larger than the allocated buffer (3236 tokens). This means that my 4014 token prompt was truncated to fit into the smaller 3236 token buffer, doesn't it?


@rick-github commented on GitHub (Jun 5, 2025):

cache doesn't indicate an allocated buffer, it just means that the current cache slot has 3236 tokens in it from the previous inference. The cache slot is 32768 tokens long. The cache slot is truncated to 885 tokens, the length of the matching tokens in the prompt. The prompt has the first 885 tokens removed from its total of 4014 tokens, leaving the remaining 3129 tokens to be appended to the contents of the cache slot in the context buffer during inference. The context buffer starts with 4014 tokens in it; 28754 token positions are unused.
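
Put as plain arithmetic against that log line (cache=3236 prompt=4014 used=885 remaining=3129, with num_ctx=32768):

used + remaining = 885 + 3129 = 4014 (the full tokenized prompt)
num_ctx - prompt = 32768 - 4014 = 28754 (token positions left unused)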


@FieldMouse-AI commented on GitHub (Jun 5, 2025):

cache doesn't indicate an allocated buffer, it just means that the current cache slot has 3236 tokens in it from the previous inference. The cache slot is 32768 tokens long. The cache slot is truncated to 885 tokens, the length of the matching tokens in the prompt. The prompt has the first 885 tokens removed from its total of 4014 tokens, leaving the remaining 3129 tokens to be appended to the contents of the cache slot in the context buffer during inference. The context buffer starts with 4014 tokens in it; 28754 token positions are unused.

So, this means that all this time I never lost any data due to cache truncation?


@rick-github commented on GitHub (Jun 5, 2025):

So, this means that all this time I never lost any data due to cache truncation?

Correct.


@FieldMouse-AI commented on GitHub (Jun 5, 2025):

So, this means that all this time I never lost any data due to cache truncation?

Correct.

In terms of performance, having a large context like this is fine then, right?

I do expect to fill it up with more data.

Up to now, I was so sure that I was losing context.


@FieldMouse-AI commented on GitHub (Jun 5, 2025):

So, this means that all this time I never lost any data due to cache truncation?

Correct.

In terms of performance, having a large context like this is fine then, right?

I do expect to fill it up with more data.

Up to now, I was so sure that I was losing context.

Ah! So, num_ctx 32768 means that there is a token pool that can be drawn from dynamically!


@FieldMouse-AI commented on GitHub (Jun 5, 2025):

@rick-github ,

At this point, I must say that I am quite happy!

This means that I can return to debugging and pushing my app forward.

Thank you for your prompt assistance!
🤗🤗🤗🤗🤗


@rick-github commented on GitHub (Jun 5, 2025):

In terms of performance, having a large context like this is fine then, right?

Processing time will go up (i.e., the token generation rate will go down) as more token space is used, purely because there are more tokens to be processed. This is normally insignificant on a GPU, but with CPU only it might be noticeable. Other than that, there's no downside to having a large context buffer (besides paying for the RAM/VRAM). Indeed, it's preferable, because running out of token space during generation is a significant performance hit, as the contents of the context buffer need to be shifted to make room for new tokens. It can also lead to a model losing coherence and, as a result, starting to generate nonsense.

Ah! So, num_ctx 32768 means that there is a token pool that can be drawn from dynamically!

It's not dynamic: the cache and context buffer are allocated when the runner starts. But the contents will resize as the inference starts and progresses.
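
One way to see what that up-front allocation costs in practice is ollama ps, which lists each loaded model with its total memory footprint (weights plus context buffers; the runner.size="7.1 GiB" entries in the log above reflect roughly the same figure for the 32768-token load):

# show loaded models, their memory footprint, and how long they stay resident
ollama ps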


@FieldMouse-AI commented on GitHub (Jun 5, 2025):

So, @rick-github, do you think that we are good to close this issue? 🤔


@rick-github commented on GitHub (Jun 5, 2025):

If you are satisfied with the explanation, sure.


@FieldMouse-AI commented on GitHub (Jun 5, 2025):

🤗

Reference: github-starred/ollama#53744