[GH-ISSUE #10372] Additional precautions against Gemma3 memory leaks on Windows 10 and Ollama 0.6.6? #68873

Closed
opened 2026-05-04 15:26:54 -05:00 by GiteaMirror · 10 comments

Originally created by @SingularityMan on GitHub (Apr 22, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10372

What is the issue?

Ever since I added Google's updated Gemma3-QAT model (https://ollama.com/library/gemma3:27b-it-qat; no multimodal in Ollama) and deployed it in Ollama 0.6.6, I keep getting memory leaks that freeze my PC.

I know this is a well-documented issue and the Ollama team is hard at work on it. I also know it's related to memory leaks associated with the KV cache.

So far I have found a way to keep the leak from spreading throughout my PC and freezing it into a very expensive brick: setting CUDA_VISIBLE_DEVICES=0 (the AI GPU) so Ollama can only use that GPU instead of the display adapter GPU I use for gaming. I also attempted to contain it by disabling system memory fallback on Windows 10 for Ollama only, via the NVIDIA Control Panel, to prevent Ollama from automatically eating up system RAM after the inevitable happens with G3-QAT.
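A rough sketch of the GPU pinning part (not my exact setup script; it assumes ollama is on PATH, and the memory fallback setting still has to be toggled in the NVIDIA Control Panel UI):

```python
import os
import subprocess

# Pin Ollama to the 48GB Quadro (CUDA device 0) so the 1660 Super stays
# free for the display/gaming workload.
env = os.environ.copy()
env["CUDA_VISIBLE_DEVICES"] = "0"

# Start the Ollama server with the restricted GPU visibility.
subprocess.run(["ollama", "serve"], env=env)
```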

From what I've seen, this seems to have temporarily contained the memory leak plaguing my PC. Of course I'm still going to get occasional OOMs with G3 due to exceeded context length and failed defragmentation attempts, but this version of Ollama is much more stable than previous versions.

What I'm trying to find out is: what else can I do to minimize this issue? Over the last two days I have had no PC freezes since implementing this band-aid solution, and as I suspected, the occasional OOMs are restricted to that GPU when Ollama runs. The script I'm using immediately restarts Ollama and picks up where I left off when that happens.
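The restart logic is roughly this (a simplified sketch, not my actual script; the resume state is handled elsewhere):

```python
import os
import subprocess
import time

# Same GPU pinning as above.
env = os.environ.copy()
env["CUDA_VISIBLE_DEVICES"] = "0"

# Watchdog: if the runner panics or OOMs and the server process dies,
# bring it back up so the main loop can pick up where it left off.
while True:
    server = subprocess.Popen(["ollama", "serve"], env=env)
    exit_code = server.wait()
    print(f"ollama exited with code {exit_code}; restarting in 5s")
    time.sleep(5)
```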

For the record, here are my two GPUs:

  • Display Adapter/Gaming (GPU 1) - GeForce GTX 1660 Super - 6GB VRAM
  • AI inference - RTX 8000 Quadro (GPU 0) - 48GB VRAM

Here are some additional specs:

  • 7950x CPU
  • Asrock x670 Taichi
  • 128GB RAM
  • 1500W PSU
  • 6 Axial fans for cooling. 1 Axial fan for the GeForce, 1 blower fan for the Quadro.

Relevant log output


OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.6.6

GiteaMirror added the bug label 2026-05-04 15:26:54 -05:00

@rick-github commented on GitHub (Apr 22, 2025):

Can you provide some logs with OLLAMA_DEBUG=1? I did a quick test with gemma3:27b-it-qat on a Linux system and didn't detect any appreciable memory leakage. I'll let it run overnight to see what happens, but it would be quicker to diagnose if I can see the parameters your model is running with.

Also, you mention no multimodal. I'm assuming you mean that you updated to the ollama library version of gemma3 which supports images, unlike models imported from HF.


@SingularityMan commented on GitHub (Apr 22, 2025):

> Can you provide some logs with OLLAMA_DEBUG=1? I did a quick test with gemma3:27b-it-qat on a Linux system and didn't detect any appreciable memory leakage. I'll let it run overnight to see what happens, but it would be quicker to diagnose if I can see the parameters your model is running with.

So I would get something like this when I get an OOM right after introducing the prompt in /chat:

time=2025-04-22T13:22:38.039-04:00 level=DEBUG source=process_text_spm.go:184 msg="adding bos token to prompt" id=2
time=2025-04-22T13:22:38.162-04:00 level=WARN source=runner.go:154 msg="truncating input prompt" limit=4096 prompt=4239 keep=4 new=4096
time=2025-04-22T13:22:38.163-04:00 level=DEBUG source=cache.go:136 msg="loading cache slot" id=0 cache=505 prompt=4096 used=4 remaining=4092
time=2025-04-22T13:22:38.195-04:00 level=DEBUG source=causal.go:366 msg="defragmenting kv cache"
time=2025-04-22T13:22:39.295-04:00 level=DEBUG source=causal.go:366 msg="defragmenting kv cache"
panic: failed to decode batch: could not find a kv cache slot (length: 2560)

goroutine 53 [running]:
github.com/ollama/ollama/runner/ollamarunner.(*Server).run(0xc00017a6c0, {0x7ff68ad39850, 0xc0001683c0})
        C:/a/ollama/ollama/runner/ollamarunner/runner.go:366 +0x65
created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1
        C:/a/ollama/ollama/runner/ollamarunner/runner.go:906 +0xb37

> Also, you mention no multimodal. I'm assuming you mean that you updated to the ollama library version of gemma3 which supports images, unlike models imported from HF.

Yes, that's exactly what I mean. I switched to that one when I found out it was multimodal since previously the QAT versions were not multimodal on Ollama.


@rick-github commented on GitHub (Apr 22, 2025):

panic: failed to decode batch: could not find a kv cache slot (length: 2560)

OK, this is not an OOM in the sense of #9791 and #10040; this is ollama being unable to find a free slot for caching the incoming prompt. You can try mitigating this by reducing num_batch or increasing num_ctx. It's not clear to me why this would cause a memory leak, so further investigation is required. See #10127 for the cache slot tracking bug.
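For reference, both knobs can be passed per request through the options field of the API. A minimal sketch (assuming the default endpoint on localhost:11434):

```python
import requests

# Sketch: set num_ctx / num_batch per request via the options field.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gemma3:27b-it-qat",
        "prompt": "Describe the current screenshot.",
        "stream": False,
        "options": {
            "num_ctx": 8192,   # more KV cache space to place batches in
            "num_batch": 256,  # smaller batches need smaller free regions
        },
    },
    timeout=600,
)
print(resp.json()["response"])
```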


@SingularityMan commented on GitHub (Apr 22, 2025):

> panic: failed to decode batch: could not find a kv cache slot (length: 2560)
>
> OK, this is not an OOM in the sense of #9791 and #10040; this is ollama being unable to find a free slot for caching the incoming prompt. You can try mitigating this by reducing num_batch or increasing num_ctx. It's not clear to me why this would cause a memory leak, so further investigation is required. See #10127 for the cache slot tracking bug.

So one thing mentioned in that post is that the error may occur when more than 2 parallel queries happen at once, but I have hard-coded OLLAMA_NUM_PARALLEL to a maximum of 2 for the following reasons:

  • G3-27b-qat periodically views images and streams their contents to a text file in real time, but this is a /generate request.
  • After certain conditions are met, a chat request is placed, taking all the descriptions of all the screenshots and including them into the prompt, along with text information gathered from other sources.

Most of the time this doesn't actually exceed the context length, which is 4096 tokens, and about 26GB of VRAM is used when the model is loaded. Given that the /generate request is placed with streaming enabled, the stream ends as soon as a /chat request is placed, so there should only be one request at any given time, but two are allowed in case that does happen.
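Roughly, the two request types look like this (a hypothetical sketch; file names and prompts are made up, not my real pipeline):

```python
import base64
import json
import requests

BASE = "http://localhost:11434"
MODEL = "gemma3:27b-it-qat"

# 1) Periodic vision step: /api/generate with an image, streamed to a text file.
with open("screenshot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

with requests.post(
    f"{BASE}/api/generate",
    json={"model": MODEL, "prompt": "Describe this screenshot.",
          "images": [image_b64], "stream": True},
    stream=True, timeout=600,
) as r, open("descriptions.txt", "a") as out:
    for line in r.iter_lines():
        if line:
            out.write(json.loads(line).get("response", ""))

# 2) Once the trigger conditions are met: a single /api/chat request that folds
#    the collected descriptions (plus other gathered text) into one prompt.
history = open("descriptions.txt").read()
chat = requests.post(
    f"{BASE}/api/chat",
    json={"model": MODEL, "stream": False,
          "messages": [{"role": "user",
                        "content": f"Observations so far:\n{history}\nWhat should happen next?"}]},
    timeout=600,
)
print(chat.json()["message"]["content"])
```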

Maybe there's more to it than that. I do have KV Cache set to q8_0 but it happens less often with q4_0. However, there is a noticeable dip in quality and occasional hallucinations when I do this.


@rick-github commented on GitHub (Apr 22, 2025):

If you are only doing one generation at a time, you might as well set OLLAMA_NUM_PARALLEL to 1 and double your num_ctx. If two (or more) requests are sent at the same time, they will be queued on the ollama server and processed sequentially.
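On the server side that's just an environment variable before launch; a minimal sketch (the doubled num_ctx then goes in the request options, as in the earlier example):

```python
import os
import subprocess

# One slot instead of two, so the whole KV cache serves a single request;
# pair this with num_ctx raised from 4096 to 8192 in the request options.
env = os.environ.copy()
env["OLLAMA_NUM_PARALLEL"] = "1"
subprocess.run(["ollama", "serve"], env=env)
```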


@SingularityMan commented on GitHub (Apr 22, 2025):

> If you are only doing one generation at a time, you might as well set OLLAMA_NUM_PARALLEL to 1 and double your num_ctx. If two (or more) requests are sent at the same time, they will be queued on the ollama server and processed sequentially.

Ok, well, I set OLLAMA_NUM_PARALLEL to 1 and lowered num_batch to 256 like you said, so we'll see what happens. This is what I got:

time=2025-04-22T17:06:26.720-04:00 level=DEBUG source=process_text_spm.go:184 msg="adding bos token to prompt" id=2
time=2025-04-22T17:06:26.812-04:00 level=WARN source=runner.go:154 msg="truncating input prompt" limit=4096 prompt=5732 keep=4 new=4096
time=2025-04-22T17:06:26.868-04:00 level=DEBUG source=cache.go:136 msg="loading cache slot" id=1 cache=3769 prompt=4096 used=0 remaining=4096
time=2025-04-22T17:06:27.851-04:00 level=DEBUG source=causal.go:366 msg="defragmenting kv cache"
time=2025-04-22T17:06:29.273-04:00 level=DEBUG source=causal.go:366 msg="defragmenting kv cache"
time=2025-04-22T17:06:30.375-04:00 level=DEBUG source=causal.go:366 msg="defragmenting kv cache"
time=2025-04-22T17:06:31.488-04:00 level=DEBUG source=causal.go:366 msg="defragmenting kv cache"
time=2025-04-22T17:06:32.267-04:00 level=DEBUG source=causal.go:366 msg="defragmenting kv cache"
time=2025-04-22T17:06:32.795-04:00 level=DEBUG source=cache.go:272 msg="context limit hit - shifting" id=1 limit=4096 input=4096 keep=4 discard=2046
time=2025-04-22T17:06:35.762-04:00 level=DEBUG source=sched.go:468 msg="context for request finished"
[GIN] 2025/04/22 - 17:06:35 | 200 |   28.0838761s |       127.0.0.1 | POST     "/api/generate"
time=2025-04-22T17:06:35.762-04:00 level=DEBUG source=sched.go:359 msg="after processing request finished event" modelPath=H:\ai\ollama\models\blobs\sha256-ccc0cddac56136ef0969cf2e3e9ac051124c937be42503b47ec570dead85ff87 refCount=1
[GIN] 2025/04/22 - 17:06:35 | 200 |    9.2971567s |       127.0.0.1 | POST     "/api/chat"
time=2025-04-22T17:06:35.976-04:00 level=DEBUG source=sched.go:409 msg="context for request finished"
time=2025-04-22T17:06:35.977-04:00 level=DEBUG source=sched.go:341 msg="runner with non-zero duration has gone idle, adding timer" modelPath=H:\ai\ollama\models\blobs\sha256-ccc0cddac56136ef0969cf2e3e9ac051124c937be42503b47ec570dead85ff87 duration=2562047h47m16.854775807s

I don't know what might happen but I'll keep an eye out to see if things change.


@rick-github commented on GitHub (Apr 22, 2025):

time=2025-04-22T17:06:26.812-04:00 level=WARN source=runner.go:154 msg="truncating input prompt" limit=4096 prompt=5732 keep=4 new=4096
time=2025-04-22T17:06:32.795-04:00 level=DEBUG source=cache.go:272 msg="context limit hit - shifting" id=1 limit=4096 input=4096 keep=4 discard=2046

These may be reducing the quality of the response: the truncation will be removing part of the system message or older user/assistant messages, and the shift will be removing parts of the generated output, which may affect the probability distribution of subsequent tokens.


@SingularityMan commented on GitHub (Apr 22, 2025):

> time=2025-04-22T17:06:26.812-04:00 level=WARN source=runner.go:154 msg="truncating input prompt" limit=4096 prompt=5732 keep=4 new=4096
> time=2025-04-22T17:06:32.795-04:00 level=DEBUG source=cache.go:272 msg="context limit hit - shifting" id=1 limit=4096 input=4096 keep=4 discard=2046
>
> These may be reducing the quality of the response: the truncation will be removing part of the system message or older user/assistant messages, and the shift will be removing parts of the generated output, which may affect the probability distribution of subsequent tokens.

I'm not too worried about that. The quality of the output is still good, and the framework the model runs on prioritizes speed over memory; that's why I have such a low context length. It's basically something that is supposed to adapt in real time and pay attention to the immediate situation.

EDIT: NOPE, still getting the same error. Here's the log btw:


time=2025-04-22T17:24:08.127-04:00 level=DEBUG source=process_text_spm.go:184 msg="adding bos token to prompt" id=2
time=2025-04-22T17:24:08.219-04:00 level=WARN source=runner.go:154 msg="truncating input prompt" limit=4096 prompt=6555 keep=4 new=4096
time=2025-04-22T17:24:08.220-04:00 level=DEBUG source=cache.go:136 msg="loading cache slot" id=0 cache=560 prompt=4096 used=4 remaining=4092
time=2025-04-22T17:24:09.044-04:00 level=DEBUG source=causal.go:366 msg="defragmenting kv cache"
time=2025-04-22T17:24:09.441-04:00 level=DEBUG source=causal.go:366 msg="defragmenting kv cache"
panic: failed to decode batch: could not find a kv cache slot (length: 2304)

goroutine 30 [running]:
github.com/ollama/ollama/runner/ollamarunner.(*Server).run(0xc00014c6c0, {0x7ff68ad39850, 0xc000522be0})
        C:/a/ollama/ollama/runner/ollamarunner/runner.go:366 +0x65
created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1
        C:/a/ollama/ollama/runner/ollamarunner/runner.go:906 +0xb37

EDIT 2:

Ok, so I doubled the context length and this doesn't happen anymore, so it seems to be that for some reason. Very curious stuff.


@jessegross commented on GitHub (Apr 24, 2025):

@SingularityMan As Rick mentioned, this looks the same as #10127, which is not a memory leak in the sense that it causes Ollama to consume additional RAM or VRAM over time. I believe that all of the memory leak issues of the traditional type have been solved in 0.6.6. As a result, I'm not sure why the mitigations you mentioned would have an effect. Now that it seems to be working for you, can you confirm that it is independent of the mitigations?

As far as why doubling the context length helps: the context is stored in the KV cache, and Ollama is looking for a free spot to put the current batch. The factors here are the batch size, context length, num parallel, and past tokens, so increasing the context gives more working space and can help mitigate the problem.
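A very loose way to picture the quantities involved (this is not Ollama's actual allocator logic, just a back-of-envelope check using numbers from the logs above):

```python
# Rough illustration only; the real slot search is more involved (see #10127).
num_ctx     = 4096   # per-slot context length
past_tokens = 505    # tokens held in the slot from the previous request (cache=505 in the log)
batch_len   = 2560   # size of the batch the runner failed to place (from the panic)

headroom = num_ctx - past_tokens
print(f"headroom={headroom}, batch needs {batch_len}: fits? {headroom >= batch_len}")
# Even when the raw arithmetic looks fine, slot tracking issues and cache
# fragmentation can still make placement fail (hence the repeated
# "defragmenting kv cache" messages), so raising num_ctx or lowering
# num_batch widens the margin and makes the failure less likely.
```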


@SingularityMan commented on GitHub (Apr 24, 2025):

> @SingularityMan As Rick mentioned, this looks the same as #10127, which is not a memory leak in the sense that it causes Ollama to consume additional RAM or VRAM over time. I believe that all of the memory leak issues of the traditional type have been solved in 0.6.6. As a result, I'm not sure why the mitigations you mentioned would have an effect. Now that it seems to be working for you, can you confirm that it is independent of the mitigations?
>
> As far as why doubling the context length helps: the context is stored in the KV cache, and Ollama is looking for a free spot to put the current batch. The factors here are the batch size, context length, num parallel, and past tokens, so increasing the context gives more working space and can help mitigate the problem.

Ok, so I increased the context length further, to 3x the original (12K), and now it doesn't happen anymore. However, I'm going to keep the original precautions from the beginning of this post in place, because I'm not sure what might happen later down the road with Ollama and it's worked out pretty well so far. I did successfully contain the OOMs to that one GPU before increasing the context length, so even before the increase my PC stopped freezing once I restricted Ollama to the AI GPU and disabled system memory fallback for it via the NVIDIA Control Panel.

So yeah, it seems like a pretty good solution overall. Restricting num parallel and reducing batch size seemed to have no effect, so I reset num parallel to 2, since context length seems to have been the primary culprit, and now things are starting to revert back to normal.

Reference: github-starred/ollama#68873