[GH-ISSUE #10859] Investigate V100 Flash Attention Implementation #53646

Closed
opened 2026-04-29 04:22:17 -05:00 by GiteaMirror · 7 comments
Owner

Originally created by @sempervictus on GitHub (May 26, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10859

While digging around following up on a conversation with @rick-github about quantization of KV caches requiring flash attention, I stumbled across an initial implementation of flash attention for the V100: https://github.com/ZRayZzz/flash-attention-v100/ or the apparently updated fork at https://github.com/Coloured-glaze/flash-attention-v100. Machine translation renders the README roughly as:

Flash_Attention_V100

Flash Attention only supports GPUs with the Ampere architecture or newer. Since it does not support the Volta architecture (as used in the V100), I created this version of Flash Attention specifically for V100 out of personal interest, following the CUTLASS tutorials and the Flash Attention 2 paper. However, due to time constraints and limited hardware resources, thorough performance tuning was not possible. As a result, the performance of this repository does not match that of PyTorch's attention implementation. Currently, the forward pass is approximately 40% faster than PyTorch, but the backward pass is about 20% slower, offsetting the gains. Additionally, this implementation does not account for boundary conditions, so sequence lengths must be padded to multiples of 32 using right padding. This will not affect normal training; simply ignore the padded positions when computing the loss.
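To make the padding constraint concrete, here is a minimal illustrative sketch (not part of the original README) that right-pads the sequence dimension to a multiple of 32 and builds a mask for dropping the padded positions from the loss:

import torch
import torch.nn.functional as F

def pad_seq_to_multiple_of_32(x):
    # x: (batch, seq_len, heads, head_dim); right-pad seq_len up to the next multiple of 32
    seq_len = x.shape[1]
    pad = (-seq_len) % 32
    # F.pad takes (left, right) pairs starting from the last dimension,
    # so the third pair targets the seq_len dimension
    x_padded = F.pad(x, (0, 0, 0, 0, 0, pad))
    # True for real tokens, False for padding; use it to ignore padded positions in the loss
    valid = torch.arange(seq_len + pad) < seq_len
    return x_padded, valid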

Installation

Before installing, ensure you have:

  • PyTorch >= 2.0.1
  • CUDA >= 11.6
  • Linux OS
  • CUTLASS source code

Modify line 146 in setup.py to point to the location where you downloaded the CUTLASS source code:

include_dirs=[
    Path(this_dir) / "include",
    "/home/user/cutlass/include",
],

After making this change, install the package using:

python setup.py install --user

Usage

from flash_attn_v100 import flash_attn_func  
q = torch.empty((Z, N_CTX, H, D_HEAD), dtype=dtype, device="cuda").normal_(mean=0., std=1).requires_grad_()  
k = torch.empty((Z, N_CTX, H, D_HEAD), dtype=dtype, device="cuda").normal_(mean=0., std=1).requires_grad_()  
v = torch.empty((Z, N_CTX, H, D_HEAD), dtype=dtype, device="cuda").normal_(mean=0., std=1).requires_grad_()  
cuda_out = flash_attn_func(q, k, v, sm_scale, causal)  
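For reference, a self-contained variant of the snippet above with illustrative values filled in; the dimensions, dtype, sm_scale, and causal settings below are assumptions for demonstration, not taken from the original README:

import torch
from flash_attn_v100 import flash_attn_func

Z, N_CTX, H, D_HEAD = 2, 1024, 16, 64  # batch, seq len (a multiple of 32), heads, head dim
dtype = torch.float16
q = torch.empty((Z, N_CTX, H, D_HEAD), dtype=dtype, device="cuda").normal_(mean=0., std=1).requires_grad_()
k = torch.empty((Z, N_CTX, H, D_HEAD), dtype=dtype, device="cuda").normal_(mean=0., std=1).requires_grad_()
v = torch.empty((Z, N_CTX, H, D_HEAD), dtype=dtype, device="cuda").normal_(mean=0., std=1).requires_grad_()
sm_scale = 1.0 / (D_HEAD ** 0.5)  # softmax scale; 1/sqrt(head_dim) is the usual choice
causal = True                     # apply a causal (decoder-style) mask
cuda_out = flash_attn_func(q, k, v, sm_scale, causal)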

References

  • Flash-Attention: https://github.com/Dao-AILab/flash-attention
  • CUTLASS: https://github.com/NVIDIA/cutlass

If this does in fact provide the relevant interfaces to leverage flash attention, it may be worth following up on the author's work to complete the performance optimizations, given how much compute capability 7.x (Volta) hardware is still out there to this day.

GiteaMirror added the feature request label 2026-04-29 04:22:17 -05:00
Author
Owner

@teason2021 commented on GitHub (Sep 9, 2025):

In short: flash attention is not supported on the Volta generation (pre-Ampere).

This should be reflected here:
https://github.com/ollama/ollama/blob/main/discover/types.go#L177

Author
Owner

@bryank-cs commented on GitHub (Sep 17, 2025):

In short: flash attention is not supported on the Volta generation (pre-Ampere).

This should be reflected here: https://github.com/ollama/ollama/blob/main/discover/types.go#L177

(gpu.Library == "cuda" && gpu.DriverMajor >= 7)

What is that test looking for, the driver version or the compute capability? In either case, it looks like the V100 should be supported, according to deviceQuery:

Device 0: "Tesla V100-SXM2-32GB"
  CUDA Driver Version / Runtime Version          12.8 / 9.1
  CUDA Capability Major/Minor version number:    7.0
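As an illustrative aside (a hedged sketch using standard PyTorch APIs, not ollama's detection code), the compute capability and the CUDA runtime version can be inspected separately, which is the distinction in question here:

import torch

major, minor = torch.cuda.get_device_capability(0)  # e.g. (7, 0) for a V100 / sm_70
print(f"compute capability: {major}.{minor}")
print(f"CUDA runtime torch was built against: {torch.version.cuda}")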
Author
Owner

@teason2021 commented on GitHub (Sep 17, 2025):

I'm running with this patch at the moment; if it helps you, it's the same issue, and it only affects flash attention support:

index 1027aaac..490e0c05 100644
--- a/discover/types.go
+++ b/discover/types.go
@@ -174,7 +174,7 @@ func (l GpuInfoList) FlashAttentionSupported() bool {
        for _, gpu := range l {
                supportsFA := gpu.Library == "cpu" ||
                        gpu.Library == "metal" ||
-                       (gpu.Library == "cuda" && gpu.DriverMajor >= 7) ||
+                       (gpu.Library == "cuda" && gpu.DriverMajor >= 7 && gpu.Compute != "7.2") || //put here 7.0 for V100/sm_70
                        gpu.Library == "rocm"
 
                if !supportsFA {
Author
Owner

@jessegross commented on GitHub (Oct 7, 2025):

For those that are running into this issue, please test out one of the 0.12.4 RCs and let us know if it fixes the issue with V100s - we have updated the kernels.

Author
Owner

@jessegross commented on GitHub (Oct 8, 2025):

Fixed by #12245

Author
Owner

@teason2021 commented on GitHub (Oct 8, 2025):

CUDA compute capability 7.2, gpt-oss:20b, flash attention env variable NOT set

//ml/backend/ggml/ggml/src/ggml-cuda/fattn-wmma-f16.cu:483: ERROR: CUDA kernel flash_attn_ext_f16 has no device code compatible with CUDA arch 720. ggml-cuda.cu was compiled for: __CUDA_ARCH_LIST__
CUDA error: unspecified launch failure
current device: 0, in function ggml_cuda_mul_mat_id at //ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:2278
cudaMemcpyAsyncReserve(ids_host.data(), ids->data, ggml_nbytes(ids), cudaMemcpyDeviceToHost, stream)
//ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:88: CUDA error

same, but flash attention env variable set to 0:

time=2025-10-08T17:12:47.204Z level=INFO source=runner.go:1172 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:8 GPULayers:25[ID:GPU-e63d890b-9217-5ff6-a2e3-6de078f83b72 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"

and it works

Author
Owner

@jessegross commented on GitHub (Oct 8, 2025):

@teason2021 That is a separate issue, which is currently fixed in the main branch.


Reference: github-starred/ollama#53646