ggml: Always set cache padding to 256

We currently use a cache padding of 32 when not using flash attention
and 256 with flash attention, a split based on the historic alignment
requirements of those kernels. The restrictions have since been
loosened, but padding still brings performance benefits, such as
better CUDA graph reuse.
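
For illustration, here is a minimal, self-contained sketch (not the actual
Ollama code; padCacheLength is a hypothetical helper) of how a cache padding
value is typically applied: the cache length presented to the attention
kernels is rounded up to a multiple of the padding, so tensor shapes only
change when a padding boundary is crossed and a previously captured CUDA
graph can be reused in between.

package main

import "fmt"

// padCacheLength is a hypothetical helper: it rounds the number of cached
// tokens up to the nearest multiple of the configured cache padding.
func padCacheLength(nTokens, padding int) int {
	return ((nTokens + padding - 1) / padding) * padding
}

func main() {
	// With a padding of 256, the padded length only changes every 256 tokens,
	// so consecutive decode steps usually see identical tensor shapes.
	for _, n := range []int{1, 255, 256, 257, 1000} {
		fmt.Printf("tokens=%4d -> padded cache length=%4d\n", n, padCacheLength(n, 256))
	}
}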

Since the requirement is no longer kernel-specific, set the padding
uniformly to 256, matching llama.cpp.
Author:    Jesse Gross
Date:      2025-12-04 11:42:30 -08:00
Committer: Jesse Gross
Commit:    7837a5bc7e
Parent:    0a844f8e96

@@ -687,7 +687,7 @@ func (b *Backend) CacheConfig() ml.CacheConfig {
 	if b.flashAttention {
 		return ml.CacheConfig{CachePadding: 256, MaskDType: ml.DTypeF16, MaskBatchPadding: C.GGML_KQ_MASK_PAD}
 	} else {
-		return ml.CacheConfig{CachePadding: 32, PermutedV: true}
+		return ml.CacheConfig{CachePadding: 256, PermutedV: true}
 	}
 }
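
To make the CUDA graph reuse argument concrete, the following self-contained
sketch (same hypothetical padCacheLength helper as above, plus the simplifying
assumption that a CUDA graph must be re-captured whenever the padded cache
length changes) counts how many distinct padded lengths occur while decoding
4096 tokens one at a time with each padding value.

package main

import "fmt"

// padCacheLength rounds nTokens up to the nearest multiple of padding
// (same hypothetical helper as in the sketch above).
func padCacheLength(nTokens, padding int) int {
	return ((nTokens + padding - 1) / padding) * padding
}

// captures counts how many times the padded cache length changes while
// decoding one token at a time, which under the simplifying assumption
// above is how often a CUDA graph would need to be re-captured.
func captures(totalTokens, padding int) int {
	count, prev := 0, -1
	for n := 1; n <= totalTokens; n++ {
		if p := padCacheLength(n, padding); p != prev {
			count++
			prev = p
		}
	}
	return count
}

func main() {
	for _, padding := range []int{32, 256} {
		fmt.Printf("padding=%3d -> %3d distinct padded lengths over 4096 tokens\n",
			padding, captures(4096, padding))
	}
}

Under that assumption, a padding of 32 produces 128 distinct padded lengths
over the run while 256 produces 16, which is the kind of reuse improvement the
commit message refers to.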