[GH-ISSUE #13789] KV cache takes too much memory on glm-4.7-flash #55546

Closed
opened 2026-04-29 09:23:14 -05:00 by GiteaMirror · 11 comments

Originally created by @gemlincong-dotcom on GitHub (Jan 20, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/13789

What is the issue?

Normally, with an 8192 context, the KV cache should take less than 1 GB of memory, but this version has taken more than 6 GB. Please advise.

ollama version: 0.14.3
time=2026-01-20T16:34:21.526+08:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="800.0 MiB"
time=2026-01-20T16:34:21.526+08:00 level=INFO source=device.go:256 msg="kv cache" device=CPU size="6.6 GiB"

With qwen3:30b Q4K the total size is only 18 GB, but glm-4.7 takes 26 GB:
PS C:\Users\Waver> ollama ps
NAME                    ID              SIZE     PROCESSOR          CONTEXT    UNTIL
glm-4.7-flash:latest    ff14144f31df    26 GB    88%/12% CPU/GPU    8192       Forever
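
As a rough sanity check on the "under 1 GB" expectation above, the usual back-of-the-envelope formula for a plain f16 KV cache is 2 (K and V) x layers x KV heads x head dim x context x bytes per element. A minimal sketch, with illustrative hyperparameters (assumed for the example, not taken from the actual glm-4.7-flash configuration):

```python
# Back-of-the-envelope KV-cache size for a standard attention cache.
# NOTE: n_layers / n_kv_heads / head_dim are illustrative assumptions,
# not the real glm-4.7-flash configuration.

def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   n_ctx: int, bytes_per_elt: float = 2.0) -> float:
    # Factor 2 covers the separate K and V tensors; bytes_per_elt=2.0 is f16.
    return 2 * n_layers * n_kv_heads * head_dim * n_ctx * bytes_per_elt

gib = kv_cache_bytes(n_layers=48, n_kv_heads=4, head_dim=128, n_ctx=8192) / 2**30
print(f"~{gib:.2f} GiB")  # ~0.75 GiB, i.e. in the "under 1 GB" range
```

Against that kind of estimate, the logs above report roughly 800 MiB on CUDA0 plus another 6.6 GiB spilled to the CPU, which is the gap this issue describes.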

Relevant log output

ollama version: 0.14.3
time=2026-01-20T16:34:21.526+08:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="800.0 MiB"
time=2026-01-20T16:34:21.526+08:00 level=INFO source=device.go:256 msg="kv cache" device=CPU size="6.6 GiB"

with qwen3:30b Q4K, the total size is only 18G, but the glm-4.7 took 26G
PS C:\Users\Waver> ollama ps
NAME                    ID              SIZE     PROCESSOR          CONTEXT    UNTIL
glm-4.7-flash:latest    ff14144f31df    26 GB    88%/12% CPU/GPU    8192       Forever

OS

Windows

GPU

Nvidia

CPU

No response

Ollama version

0.14.3-rc2

GiteaMirror added the bug label 2026-04-29 09:23:14 -05:00

@youyuzzg commented on GitHub (Jan 21, 2026):

Also experiencing this issue with GLM-4.7-flash at 128K context. Memory usage is much higher than Qwen3-30B. Any official confirmation or solution?


@rushyrush commented on GitHub (Jan 22, 2026):

Experiencing the same issue with glm-4.7-flash:latest and Ollama 0.15.0-rc0 on Nvidia GB10.

model requires more system memory (170.3 GiB) than is available

Env Vars:

OLLAMA_CONTEXT_LENGTH="131072"
OLLAMA_FLASH_ATTENTION="1"
OLLAMA_KV_CACHE_TYPE="q8_0"
OLLAMA_NUM_PARALLEL=4
OLLAMA_NEW_ENGINE="true"

@Cyberschorsch commented on GitHub (Jan 22, 2026):

+1


@deep1305 commented on GitHub (Jan 22, 2026):

Error: 500 Internal Server Error: model requires more system memory (135.2 GiB) than is available (64.2 GiB)


@JasonOdinberg commented on GitHub (Jan 22, 2026):

> Experiencing the same issue with glm-4.7-flash:latest and Ollama 0.15.0-rc0 on Nvidia GB10.
>
> model requires more system memory (170.3 GiB) than is available
>
> Env Vars:
>
> OLLAMA_CONTEXT_LENGTH="131072"
> OLLAMA_FLASH_ATTENTION="1"
> OLLAMA_KV_CACHE_TYPE="q8_0"
> OLLAMA_NUM_PARALLEL=4
> OLLAMA_NEW_ENGINE="true"

This worked! Max context at 76.1gb VRAM on the Q4 Model with q8 KV


@bradfa commented on GitHub (Jan 22, 2026):

> This worked! Max context at 76.1gb VRAM on the Q4 Model with q8 KV

But 131k tokens of context should not require 76GB of memory for this model. It should be like 1/5th to 1/10th that much memory.


@JasonOdinberg commented on GitHub (Jan 23, 2026):

> > This worked! Max context at 76.1gb VRAM on the Q4 Model with q8 KV
>
> But 131k tokens of context should not require 76GB of memory for this model. It should be like 1/5th to 1/10th that much memory.

I meant the max context for the model, 198,000 tokens. But you're probably right, it's still too much.


@ParthSareen commented on GitHub (Jan 23, 2026):

We should have a fix coming for this soon - sorry about that folks


@battmanux commented on GitHub (Jan 23, 2026):

> We should have a fix coming for this soon - sorry about that folks

Is there a workaround to fit the model in 32 GB of VRAM?


@youyuzzg commented on GitHub (Jan 23, 2026):

> > We should have a fix coming for this soon - sorry about that folks
>
> Is there a workaround to fit the model in 32 GB of VRAM?

Once this issue is fixed, running the model in 32 GB of VRAM should be possible. You can take Qwen3-30B as a reference.
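
One knob that can be turned without waiting for a release is the per-request context size: the standard Ollama REST API accepts an options.num_ctx override. A hedged sketch of that mitigation (whether it is enough to fit glm-4.7-flash into 32 GB on the affected builds is not confirmed in this thread):

```python
# Hedged mitigation sketch: request a smaller context window per call via the
# Ollama REST API (options.num_ctx). Whether this fits glm-4.7-flash into
# 32 GB of VRAM on the affected builds is not confirmed in this thread.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "glm-4.7-flash:latest",
        "prompt": "Hello",
        "stream": False,
        "options": {"num_ctx": 8192},  # smaller context -> smaller KV cache
    },
    timeout=600,
)
print(resp.json()["response"])
```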


@jmorganca commented on GitHub (Jan 23, 2026):

Fixed by https://github.com/ollama/ollama/pull/13810, sorry all! Will be in the next release (0.15.0), and weights will need to be redownloaded (although please know, we are working on a new format that will avoid this in the future 😊 )
