[GH-ISSUE #11087] Loading time of mistral-small3.1 is too long #33074

Closed
opened 2026-04-22 15:18:18 -05:00 by GiteaMirror · 32 comments

Originally created by @JitaekJo on GitHub (Jun 16, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11087

Originally assigned to: @jessegross on GitHub.

What is the issue?

Dear team,

When I send a request, mistral-small3.1 takes far too long to load.
For example, gemma3:27b completes a task in 7 seconds,
but mistral-small3.1 takes 1.5 minutes for the same thing.
This symptom appeared a few days ago.

Please resolve this problem.

Relevant log output


OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.9.0

GiteaMirror added the bug label 2026-04-22 15:18:18 -05:00

@rick-github commented on GitHub (Jun 16, 2025):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will aid in debugging.

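(For reference: the troubleshooting doc linked above describes where the server log lives. On Windows it is under `%LOCALAPPDATA%\Ollama`; a minimal way to pull it up, assuming a default install:)

```shell
# Open the Ollama log directory in Explorer (cmd.exe syntax):
explorer %LOCALAPPDATA%\Ollama

# Or tail the server log from PowerShell:
Get-Content "$env:LOCALAPPDATA\Ollama\server.log" -Tail 100
```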

@JitaekJo commented on GitHub (Jun 16, 2025):

I hope this log helps you.


@JitaekJo commented on GitHub (Jun 16, 2025):

[log.txt](https://github.com/user-attachments/files/20758199/log.txt)


@rick-github commented on GitHub (Jun 16, 2025):

Memory estimation for gemma3:27b-it-q4_K_M:

```
time=2025-06-16T16:25:06.614+03:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1
layers.model=63 layers.offload=63 layers.split="" memory.available="[22.2 GiB]" memory.gpu_overhead="0 B"
memory.required.full="20.9 GiB" memory.required.partial="20.9 GiB" memory.required.kv="1.6 GiB"
memory.required.allocations="[20.9 GiB]" memory.weights.total="15.4 GiB" memory.weights.repeating="14.3 GiB"
memory.weights.nonrepeating="1.1 GiB" memory.graph.full="565.0 MiB" memory.graph.partial="1.6 GiB"
projector.weights="795.9 MiB" projector.graph="1.0 GiB"
```

Memory estimation for mistral-small3.1:24b-instruct-2503-q4_K_M:

```
time=2025-06-16T16:25:53.080+03:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1
layers.model=41 layers.offload=34 layers.split="" memory.available="[22.2 GiB]" memory.gpu_overhead="0 B"
memory.required.full="24.6 GiB" memory.required.partial="22.2 GiB" memory.required.kv="640.0 MiB"
memory.required.allocations="[22.2 GiB]" memory.weights.total="13.1 GiB" memory.weights.repeating="12.7 GiB"
memory.weights.nonrepeating="360.0 MiB" memory.graph.full="426.7 MiB" memory.graph.partial="426.7 MiB"
projector.weights="769.3 MiB" projector.graph="8.8 GiB"
```

mistral-small3.1 needs 24.6G, more than is free, so it offloads 34 of 41 layers to the GPU. gemma3 needs 20.9G, which fits in the available 22.2G, so all 63 layers are offloaded to the GPU.

mistral-small3.1 is slower than gemma3 because a few layers are loaded in system RAM, where the CPU does the inference. The CPU is slower and not optimized for matrix operations, so that slows down the whole inference.

You can try using [flash attention](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-enable-flash-attention) or [KV cache quantization](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-set-the-quantization-type-for-the-kv-cache) to reduce the memory footprint of the model.

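(A minimal sketch of the two settings mentioned above, using the environment variables from the linked FAQ entries; `q8_0` is one illustrative choice of cache quantization, and KV cache quantization requires flash attention to be enabled:)

```shell
# Enable flash attention and quantize the KV cache to shrink the
# model's VRAM footprint. Set these in the server's environment
# before starting Ollama:
OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 ollama serve
```

(On Windows, set them as user environment variables and restart the Ollama app instead of running `ollama serve` from a shell.)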

@JitaekJo commented on GitHub (Jun 16, 2025):

Thank you for the analysis, but I wouldn't be asking if the problem were just about memory.
mistral-small3.1 used to perform the same task as fast as gemma3.
Now it has slowed down.
What could be the problem?


@rick-github commented on GitHub (Jun 16, 2025):

> What could be the problem?

There are layers running in system RAM, where inference is slower.

Have you upgraded ollama recently? Downloaded a new version of the model? Upgraded CUDA drivers? Running something that takes up a bit of VRAM?


@JitaekJo commented on GitHub (Jun 16, 2025):

I've upgraded Ollama recently, yes.
I have not downloaded a new version of the model.
I have not upgraded the CUDA drivers.
There is no other load on the graphics card.


@rick-github commented on GitHub (Jun 16, 2025):

> I've upgraded Ollama recently, yes.

Mystery solved.

There have been recent changes to the estimation logic to reduce the chance of an OOM. You can force Ollama to load more layers onto the GPU by setting `num_gpu` as described [here](https://github.com/ollama/ollama/issues/6950#issuecomment-2373663650). This may increase OOMs or cause a [decrease in performance](https://github.com/ollama/ollama/issues/7584#issuecomment-2466715900).

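(A sketch of the `num_gpu` override described in the linked comment; the layer count of 41 comes from the memory-estimate log above:)

```shell
# Interactively, for a single session:
ollama run mistral-small3.1
# then inside the REPL:
# >>> /set parameter num_gpu 41

# Or per request via the API:
curl http://localhost:11434/api/generate -d '{
  "model": "mistral-small3.1",
  "prompt": "Hello",
  "options": { "num_gpu": 41 }
}'
```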

@pdevine commented on GitHub (Jun 16, 2025):

You can also use `ollama ps` to see if some of it is being loaded into system memory instead of onto the GPU. Unfortunately your GPU has just *barely* enough VRAM to hold the model, but older versions of Ollama were too liberal with how many layers of the model were loaded onto the GPU.

I'm going to go ahead and close this as answered (thank you @rick-github !).

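(For reference, the PROCESSOR column of `ollama ps` shows the CPU/GPU split while a model is loaded:)

```shell
# "100% GPU" means fully offloaded; something like "15%/85% CPU/GPU"
# means some layers are running from system RAM.
ollama ps
```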

@JitaekJo commented on GitHub (Jun 17, 2025):

![Image](https://github.com/user-attachments/assets/a7384a08-fe41-49ce-a78b-8d2ceb094536)

A related problem is that the model isn't loaded. As you can see, even though the memory is allocated and Ollama is serving, the load stays at "0%" for a long time after I provide a prompt.
Is this related to the memory issue? I doubt it.


@JitaekJo commented on GitHub (Jun 17, 2025):

> > I've upgraded Ollama recently, yes.
>
> Mystery solved.
>
> There have been recent changes to the estimation logic to reduce the chance of an OOM. You can force Ollama to load more layers onto the GPU by setting `num_gpu` as described [here](https://github.com/ollama/ollama/issues/6950#issuecomment-2373663650). This may increase OOMs or cause a [decrease in performance](https://github.com/ollama/ollama/issues/7584#issuecomment-2466715900).

And this doesn't seem like a resolution, because that guide loads the model into RAM and runs it on the CPU.
My problem is that the model isn't loaded.


@rick-github commented on GitHub (Jun 17, 2025):

> And this doesn't seem like a resolution, because that guide loads the model into RAM and runs it on the CPU.

It demonstrates how setting `num_gpu` can be used to control the number of layers loaded into the GPU. In that example, it was 0. In your case, it would be more than 34.

> My problem is that the model isn't loaded.

Your screenshot of `nvidia-smi` and the server logs show that the model is loaded into the GPU.


@JitaekJo commented on GitHub (Jun 17, 2025):

I found the cause of the problem. When I feed an IMAGE, MISTRAL isn't loaded.
When I send TEXT, it works OK.

Please fix this bug.


@rick-github commented on GitHub (Jun 17, 2025):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will aid in debugging.


@JitaekJo commented on GitHub (Jun 17, 2025):

[log3.txt](https://github.com/user-attachments/files/20773850/log3.txt)

This should be the proper version.


@rick-github commented on GitHub (Jun 17, 2025):

Nothing in the log indicates a problem: the CUDA backend was loaded, a portion of the layers were loaded into the GPU, and the server returned successful HTTP codes. Try setting `OLLAMA_DEBUG=1` in the server environment to increase logging.


@JitaekJo commented on GitHub (Jun 17, 2025):

Could you please explain how to set `OLLAMA_DEBUG=1`?

@rick-github commented on GitHub (Jun 17, 2025):

https://github.com/ollama/ollama/blob/main/docs/faq.md#setting-environment-variables-on-windows
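(A sketch of the two usual ways on Windows, per the FAQ link above; quit and restart the Ollama app afterwards so the variable takes effect:)

```shell
# PowerShell, for the current session only:
$env:OLLAMA_DEBUG = "1"
ollama serve

# Or persist it for the user account (affects newly started processes):
setx OLLAMA_DEBUG 1
```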

@JitaekJo commented on GitHub (Jun 17, 2025):

[logs with debug.txt](https://github.com/user-attachments/files/20777280/logs.with.debug.txt)

I tested a lot, and I can confirm that the problem occurs when feeding an image.

The model used to process one image in 7-10 seconds. Now it takes 1.5 minutes.


@JitaekJo commented on GitHub (Jun 17, 2025):

[logs with debug2.txt](https://github.com/user-attachments/files/20777558/logs.with.debug2.txt)

This may help you: there was a heavy GPU load for several minutes even though I didn't send any request.


@JitaekJo commented on GitHub (Jun 17, 2025):

> [logs with debug2.txt](https://github.com/user-attachments/files/20777558/logs.with.debug2.txt)
>
> This may help you: there was a heavy GPU load for several minutes even though I didn't send any request.

It happens repeatedly AFTER updating to 0.9.1.


@rick-github commented on GitHub (Jun 17, 2025):

Your issue is that responses now take longer after upgrading to 0.9.1? As [explained above](https://github.com/ollama/ollama/issues/11087#issuecomment-2978456204), the new version of Ollama is more conservative with memory allocations. You can [override](https://github.com/ollama/ollama/issues/11087#issuecomment-2977115053) `num_gpu` to force Ollama to offload more layers to the GPU.


@JitaekJo commented on GitHub (Jun 18, 2025):

> Your issue is that responses now take longer after upgrading to 0.9.1? As [explained above](https://github.com/ollama/ollama/issues/11087#issuecomment-2978456204), the new version of Ollama is more conservative with memory allocations. You can [override](https://github.com/ollama/ollama/issues/11087#issuecomment-2977115053) `num_gpu` to force Ollama to offload more layers to the GPU.

To be precise:

Current problem (before 0.9.1):

1. "Your issue is that responses now take longer after upgrading to 0.9.1?" -> No. The problem occurred even before upgrading to 0.9.1.
2. When I feed an image, there is no load on the GPU for a long time. Usually the load appears immediately after the start, but now it appears after a minute.

Additional problem (after 0.9.1):

- There is a heavy GPU load for several minutes even though I didn't send any request.

@JitaekJo commented on GitHub (Jun 18, 2025):

![Image](https://github.com/user-attachments/assets/0a4f9615-b7a6-4e2d-9c54-b411f67bb7f2)

Just look at this picture: for more than a minute there is no workload on the GPU when I feed an IMAGE.
With text, this issue doesn't happen.


@rick-github commented on GitHub (Jun 18, 2025):

Your context length is too small for the prompts you are sending:

```
time=2025-06-17T17:32:20.660+03:00 level=DEBUG source=prompt.go:66 msg="truncating input messages which exceed context length" truncated=2
time=2025-06-17T17:32:20.660+03:00 level=DEBUG source=server.go:729 msg="completion request" images=0 prompt=25447 format=""
time=2025-06-17T17:32:20.670+03:00 level=DEBUG source=vocabulary.go:52 msg="adding bos token to prompt" id=[1]
time=2025-06-17T17:32:20.670+03:00 level=WARN source=runner.go:157 msg="truncating input prompt" limit=4096 prompt=11581 keep=4 new=4096
time=2025-06-17T17:32:20.670+03:00 level=DEBUG source=cache.go:136 msg="loading cache slot" id=0 cache=3005 prompt=4096 used=4 remaining=4092
time=2025-06-17T17:32:35.458+03:00 level=DEBUG source=cache.go:272 msg="context limit hit - shifting" id=0 limit=4096 input=4096 keep=4 discard=2046
```

The current context length is 4096. The prompt was too long and the ollama server dropped the first two messages. The remaining message was 25447 characters, which translated to 11581 tokens, still too long to fit in the context window. ollama removed the first ~7000 tokens and then began the process of inference. During that, the output overran the context length and the runner shifted the buffer, discarding the first 2046 tokens.

See [here](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-specify-the-context-window-size) for how to set the length of the context buffer.

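(A sketch of raising the context length, following the linked FAQ; 16384 is an illustrative value, and note that a larger context also grows the KV cache, which matters on this nearly-full GPU:)

```shell
# Per request via the API:
curl http://localhost:11434/api/generate -d '{
  "model": "mistral-small3.1",
  "prompt": "...",
  "options": { "num_ctx": 16384 }
}'

# Or bake it into a derived model: save a file named "Modelfile" containing
#   FROM mistral-small3.1
#   PARAMETER num_ctx 16384
# and then run:
ollama create mistral-small3.1-16k -f Modelfile
```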

@JitaekJo commented on GitHub (Jun 18, 2025):

> Your context length is too small for the prompts you are sending:
>
> ```
> time=2025-06-17T17:32:20.660+03:00 level=DEBUG source=prompt.go:66 msg="truncating input messages which exceed context length" truncated=2
> time=2025-06-17T17:32:20.660+03:00 level=DEBUG source=server.go:729 msg="completion request" images=0 prompt=25447 format=""
> time=2025-06-17T17:32:20.670+03:00 level=DEBUG source=vocabulary.go:52 msg="adding bos token to prompt" id=[1]
> time=2025-06-17T17:32:20.670+03:00 level=WARN source=runner.go:157 msg="truncating input prompt" limit=4096 prompt=11581 keep=4 new=4096
> time=2025-06-17T17:32:20.670+03:00 level=DEBUG source=cache.go:136 msg="loading cache slot" id=0 cache=3005 prompt=4096 used=4 remaining=4092
> time=2025-06-17T17:32:35.458+03:00 level=DEBUG source=cache.go:272 msg="context limit hit - shifting" id=0 limit=4096 input=4096 keep=4 discard=2046
> ```
>
> The current context length is 4096. The prompt was too long and the ollama server dropped the first two messages. The remaining message was 25447 characters, which translated to 11581 tokens, still too long to fit in the context window. ollama removed the first ~7000 tokens and then began the process of inference. During that, the output overran the context length and the runner shifted the buffer, discarding the first 2046 tokens.
>
> See [here](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-specify-the-context-window-size) for how to set the length of the context buffer.

I still have a question. If the problem is the context length:

1. Why did the model previously work well with the SAME context length WITH an image, but now works badly?
2. Why does the model work well now with a LONGER context length WITHOUT an image?

I'm sure the problem is not the context length but the image feeding.
I tried changing models, but the symptom is the same: with an IMAGE there is a problem, with text it's OK, regardless of context length.


@rick-github commented on GitHub (Jun 18, 2025):

Server logs with the increased context length will aid in debugging.


@JitaekJo commented on GitHub (Jun 18, 2025):

[logs with shorter context.txt](https://github.com/user-attachments/files/20798779/logs.with.shorter.context.txt)

1. I tried deleting and re-installing ollama/mistral -> the result is the same.
2. I tried a very short context with an image -> the result is the same.

@rick-github commented on GitHub (Jun 18, 2025):

The logs indicate that ollama estimated only 40 of 41 layers will fit on the GPU, which means one layer is offloaded to system RAM. Due to the way [layer allocation](https://github.com/ollama/ollama/pull/10700#issue-3061458272) is done in the new ollama engine, this results in the entirety of the vision projector running in system RAM. The logs show that the runner is using 8 threads on a Haswell CPU (EOL 2019 Q3), so my guess is that the slow processing is due to limited processing power.


@JitaekJo commented on GitHub (Jun 19, 2025):

> The logs indicate that ollama estimated only 40 of 41 layers will fit on the GPU, which means one layer is offloaded to system RAM. Due to the way [layer allocation](https://github.com/ollama/ollama/pull/10700#issue-3061458272) is done in the new ollama engine, this results in the entirety of the vision projector running in system RAM. The logs show that the runner is using 8 threads on a Haswell CPU (EOL 2019 Q3), so my guess is that the slow processing is due to limited processing power.

Thanks for the analysis, but it still doesn't answer my questions:

1. Why did the model previously work well with the SAME context length WITH an image, but now works badly?
2. As this picture shows, there is no workload for more than a minute when I feed an image.
   ![Image](https://github.com/user-attachments/assets/a07634cb-49f7-499e-b83c-7fdc2f205020)
   Do you really believe this is related to limited processing power?
3. One more thing I found:
   - When I feed the SAME IMAGE repeatedly with the SAME PROMPT, the model processes it as fast as usual.
   - As I understand it, Ollama brings up the vision encoder on the CPU, and the CPU performs JPEG decoding -> patching -> embedding. This takes minutes.
   - The thing is, the vision encoder is brought up anew every time I feed a new image.
   - Meanwhile, the image is only 300 KB.
   - Can we optimize bringing up the vision encoder?

@JitaekJo commented on GitHub (Jun 19, 2025):

> > The logs indicate that ollama estimated only 40 of 41 layers will fit on the GPU, which means one layer is offloaded to system RAM. Due to the way [layer allocation](https://github.com/ollama/ollama/pull/10700#issue-3061458272) is done in the new ollama engine, this results in the entirety of the vision projector running in system RAM. The logs show that the runner is using 8 threads on a Haswell CPU (EOL 2019 Q3), so my guess is that the slow processing is due to limited processing power.
>
> Thanks for the analysis, but it still doesn't answer my questions:
>
> 1. Why did the model previously work well with the SAME context length WITH an image, but now works badly?
> 2. As this picture shows, there is no workload for more than a minute when I feed an image. Do you really believe this is related to limited processing power?
> 3. One more thing I found:
>    - When I feed the SAME IMAGE repeatedly with the SAME PROMPT, the model processes it as fast as usual.
>    - As I understand it, Ollama brings up the vision encoder on the CPU, and the CPU performs JPEG decoding -> patching -> embedding. This takes minutes.
>    - The thing is, the vision encoder is brought up anew every time I feed a new image.
>    - Meanwhile, the image is only 300 KB.
>    - Can we optimize bringing up the vision encoder?

Here I share the result:

I rolled back to 0.6.5 and everything was solved.
The problem is that Ollama brings up the vision encoder on the CPU, and that is where the inefficiency appears. It may save memory, but it dramatically decreases performance.
Please consider fixing it.


@jessegross commented on GitHub (Sep 24, 2025):

I'm going to go ahead and close this now that the new memory management logic is on by default. If you continue to see problems, please file a new issue.

Reference: github-starred/ollama#33074