[GH-ISSUE #5091] KV Cache Quantization #28972

Closed
opened 2026-04-22 07:33:32 -05:00 by GiteaMirror · 15 comments

Originally created by @sammcj on GitHub (Jun 17, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5091

It would be good if the KV Key cache type could be set in Ollama.

llama.cpp allows you to set the Key cache type, which can reduce memory usage as the KV store grows, especially when running models like Command-R(+) that don't have GQA.

If we were able to change the KV Key cache type from f16 to q8_0, we would get around a 50% reduction in the memory used by the KV cache/context. Some folks are saying that with IQ quants you can go as low as Q4_0 for the KV keys without issues, using ~75% less memory.

https://github.com/ggerganov/llama.cpp/blob/43b35e38ba371f9a7faa6dca4c5d1e8f698ffd87/examples/server/README.md?plain=1#L74C156-L76C1

From what I've read, changing the cache type for the Key from F16 to Q4_0 has little to no impact on quality.
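For a rough sense of where those percentages come from, here is a back-of-the-envelope sketch of KV cache sizing. The bytes-per-element figures for q8_0/q4_0 approximate llama.cpp's block formats (packed values plus a per-block scale), and the Llama-3.1-8B-like model shape is just an illustrative assumption, not something from this issue:

```go
package main

import "fmt"

// Approximate bytes per element for each KV cache type; the q8_0/q4_0 figures
// are estimates based on llama.cpp's block layouts (packed values + fp16 scale).
var bytesPerElem = map[string]float64{
	"f16":  2.0,
	"q8_0": 34.0 / 32.0, // ≈ 1.06 bytes/element
	"q4_0": 18.0 / 32.0, // ≈ 0.56 bytes/element
}

// kvCacheBytes estimates the K+V cache size for one sequence, ignoring
// runtime padding and overheads.
func kvCacheBytes(nLayers, nCtx, nKVHeads, headDim int, cacheType string) float64 {
	perToken := float64(2 * nLayers * nKVHeads * headDim) // K and V across all layers
	return perToken * float64(nCtx) * bytesPerElem[cacheType]
}

func main() {
	// Illustrative shape only: roughly Llama-3.1-8B (32 layers, 8 KV heads,
	// head dim 128, i.e. a GQA model) at a 128k context.
	for _, t := range []string{"f16", "q8_0", "q4_0"} {
		gib := kvCacheBytes(32, 128_000, 8, 128, t) / (1 << 30)
		fmt.Printf("%5s: %.1f GiB KV cache\n", t, gib)
	}
}
```

Under those assumptions, q8_0 roughly halves the f16 cache size and q4_0 cuts it by roughly 70-75%, which lines up with the figures quoted above.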

See also - https://www.reddit.com/r/LocalLLaMA/comments/1dalkm8/memory_tests_using_llamacpp_kv_cache_quantization/

PR is up: https://github.com/ollama/ollama/pull/6279

GiteaMirror added the feature request label 2026-04-22 07:33:32 -05:00

@AncientMystic commented on GitHub (Jun 17, 2024):

This would definitely be a positive to see, especially with the potential to lower VRAM usage, since it would allow more VRAM to go towards models on low-VRAM GPUs.

It would also be nice to see KV cache offloading to RAM for low-VRAM setups, like other runtimes support.


@Halflifefa commented on GitHub (Jun 17, 2024):

Definitely a huge boost for users with consumer-grade graphics cards


@chigkim commented on GitHub (Jun 17, 2024):

I'd love to see the feature as well!


@sammcj commented on GitHub (Jul 23, 2024):

With modern models such as Llama 3.1, Qwen2, Codestral, etc. all adopting larger context sizes, the benefits of this are more relevant than ever.

I'm hoping someone smarter than I could potentially figure out what I was missing on https://github.com/ollama/ollama/pull/5098/files and get this moving.


@sammcj commented on GitHub (Jul 23, 2024):

Oh my, I just got it working... and the vRAM savings are really good - PR incoming shortly!


@sammcj commented on GitHub (Jul 23, 2024):

PR is up! https://github.com/ollama/ollama/pull/5894

f16 (screenshot: kv_cache_f16)

q4_0 (screenshot: kv_cache_q4_0)

q8_0 (screenshot: kv_cache_q8_0)


@AncientMystic commented on GitHub (Jul 23, 2024):

Thank you for this update and for adding all the options including k32. I cannot wait for the next version of Ollama to be released with this included; it is such a massive improvement ❤️😁


@sammcj commented on GitHub (Aug 7, 2024):

The PR (https://github.com/ollama/ollama/pull/6279) is waiting on review; I assume the Ollama maintainers are either just really busy and haven't got to it yet, or aren't interested but haven't communicated publicly.


@jojje commented on GitHub (Oct 5, 2024):

Tested your PR, Sam, on an RTX 3090 on Linux. Compiled using the project's own Dockerfile, and it worked perfectly with fp16, q8 and q4 quants. Great job.

Initially I just forgot to enable FA and wondered why the quants weren't being applied. It's the same mistake I repeatedly make with llama.cpp though, so you've just ported the same "behavior" :)

A followup PR might auto-enable FA if kv-cache quant is enabled, to make it simpler for users, since it seems that's what this project is all about. No need to "fix" that unintuitive behavior in this PR, as it's orthogonal to the issue description.


@sammcj commented on GitHub (Oct 5, 2024):

Thanks jojje, I've been running with k/v cache at q8 since mid July without issue.

I think FA should always be enabled unless a model explicitly doesn't support it.


@jojje commented on GitHub (Oct 5, 2024):

Agreed. Saw that discussion in the PR. Gemma, IIRC, crashes with FA in llama.cpp, so when such exceptional circumstances occur, disabling FA and the cache quantization with a log message, as discussed in the PR, seems like the best option. Users will then notice slower-than-expected inference performance, look at the logs, and discover why. More "user friendly" than just crashing outright.
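A minimal sketch of that fallback behaviour, assuming a helper along these lines (the names resolveKVCacheType and flashAttnSupported are hypothetical, not Ollama's actual API):

```go
package main

import "log/slog"

// resolveKVCacheType is a hypothetical helper illustrating the behaviour
// discussed above: quantized KV cache needs flash attention, so if a model
// can't use FA we log a warning and fall back to f16 instead of crashing.
func resolveKVCacheType(requested string, flashAttnSupported bool) string {
	if requested == "" || requested == "f16" {
		return "f16"
	}
	if !flashAttnSupported {
		slog.Warn("flash attention unavailable for this model; using f16 KV cache",
			"requested", requested)
		return "f16"
	}
	return requested // e.g. "q8_0" or "q4_0"
}

func main() {
	// A Gemma-like case where FA is unsupported: the user asked for q8_0,
	// the runner falls back to f16 and logs why.
	slog.Info("kv cache type selected", "type", resolveKVCacheType("q8_0", false))
}
```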


@srossitto79 commented on GitHub (Nov 14, 2024):

I have implemented a version that looks at the OLLAMA_KV_CACHE_TYPE env var to apply the quantization to all models. It's working well for me, and I will keep using it until there is official support. You can have a look: https://github.com/srossitto79/ollama
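A minimal sketch of how such an env-var override might be wired, assuming the fork reads OLLAMA_KV_CACHE_TYPE at startup and falls back to f16 for anything unrecognised (the exact implementation in that fork may differ):

```go
package main

import (
	"fmt"
	"os"
)

// kvCacheTypeFromEnv sketches an OLLAMA_KV_CACHE_TYPE override that applies
// to all models: accept the known cache types, default to f16 otherwise.
func kvCacheTypeFromEnv() string {
	switch t := os.Getenv("OLLAMA_KV_CACHE_TYPE"); t {
	case "f16", "q8_0", "q4_0":
		return t
	case "":
		return "f16" // unset: keep the existing default
	default:
		fmt.Fprintf(os.Stderr, "unsupported OLLAMA_KV_CACHE_TYPE %q, using f16\n", t)
		return "f16"
	}
}

func main() {
	fmt.Println("kv cache type:", kvCacheTypeFromEnv())
}
```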


@emzaedu commented on GitHub (Nov 18, 2024):

> I have implemented a version that looks at OLLAMA_KV_CACHE_TYPE env var to apply the quantization to all models, its working good for me, i will keep using this until there is no official support. you can have a look https://github.com/srossitto79/ollama

Something seems to be working incorrectly here. Video memory usage is lower than expected: for LLaMA 3.1-8B with a 128k context, only 9 GB of VRAM is utilized out of 24 GB. However, the workload distribution is uneven — 34% on the CPU and 66% on the GPU, leaving a significant portion of VRAM unused.


@sammcj commented on GitHub (Nov 18, 2024):

@antonovkz a 128K context is really large; at fp16, even with just a q4_k_m quant, that would use around 29GB of vRAM.

It should be noted that while a model might support a maximum token size of 128k, it's unlikely it will perform well at it.

  • Have you tried at say 64K?

You'll need to provide more information on your build and configuration, e.g.

  • What GGUF quant are you running?
  • What K/V cache quantisation are you using? (Q8_0?)
  • Is anything else using any vRAM at all?
  • Are you using CUDA or Metal?
  • Does this occur with other models, such as Qwen 2.5 7b Instruct?

And can you share your Ollama logs, specifically the section that looks like this:

llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors:        CPU buffer size =   631.12 MiB
llm_load_tensors:      CUDA0 buffer size = 18582.29 MiB
llm_load_tensors:      CUDA1 buffer size = 18650.40 MiB
llama_new_context_with_model: n_ctx      = 8192
llama_new_context_with_model: n_batch    = 1024
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 1
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   697.00 MiB
llama_kv_cache_init:      CUDA1 KV buffer size =   663.00 MiB
llama_new_context_with_model: KV self size  = 1360.00 MiB, K (q8_0):  680.00 MiB, V (q8_0):  680.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.61 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model:      CUDA0 compute buffer size =   299.51 MiB
llama_new_context_with_model:      CUDA1 compute buffer size =   409.02 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    80.02 MiB
llama_new_context_with_model: graph nodes  = 2487
llama_new_context_with_model: graph splits = 3

I don't think I'm able to reproduce this with a 72B iq4_xs quant (screenshot).

Or a 7b q6_k quant at ~128k (screenshot).

Or Llama 3.1 8b q6_k at ~128k (screenshots).

I can however make it not use all available vRAM if I try to use a larger context than the model supports (e.g. 150K on a 128K max model).


@emzaedu commented on GitHub (Dec 6, 2024):

I mean it is incorrectly allocating resources.

For example:
/set parameter num_ctx 88000
Rombos-LLM-V2.6-Qwen-14b-Q4_K_M:latest 81d0d17e9f6a 21 GB 100% GPU 4 minutes from now

However, the actual VRAM usage amounts to 13,880,772K (roughly 13.2 GiB).

Reference: github-starred/ollama#28972