[GH-ISSUE #14909] Feature Request: Add Flash Attention support for EXAONE 4.0 architecture #35362

Open
opened 2026-04-22 19:48:16 -05:00 by GiteaMirror · 2 comments

Originally created by @archeon-p on GitHub (Mar 17, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14909

Summary

EXAONE 4.0 32B (by LG AI Research) is not included in Ollama's Flash Attention allowlist, causing the KV cache to fall back to FP16 instead of q8_0. This doubles KV cache memory usage and severely limits the usable context window on multi-GPU setups.

Problem

When running EXAONE 4.0 32B (GGUF from LGAI-EXAONE/EXAONE-4.0-32B-GGUF) with OLLAMA_FLASH_ATTENTION=1 and OLLAMA_KV_CACHE_TYPE=q8_0, the q8_0 KV cache is not applied. Instead, Ollama falls back to FP16 KV cache because the EXAONE architecture is not in the Flash Attention supported model list.

Impact (measured on RTX 3060 12GB + RTX 3090 24GB = 36GB VRAM)

| Quantization | Max Context (100% GPU) | Ollama Est. VRAM | Actual (nvidia-smi) |
|---|---|---|---|
| Q4_K_M | 45K (46,080) | 35 GB | ~27 GB |
| Q5_K_M | 37K (37,888) | 35 GB | ~29 GB |

If q8_0 KV cache were supported (1 byte per value instead of 2 bytes for FP16), the KV cache would be halved, allowing approximately 90K context for Q4 and 70K context for Q5 on the same hardware.

KV Cache calculation (FP16 fallback, current)

Per token: 2 (K+V) × 64 layers × 8 KV heads × 128 head_dim × 2 bytes (FP16) = 262,144 bytes/token

KV Cache calculation (q8_0, desired)

Per token: 2 (K+V) × 64 layers × 8 KV heads × 128 head_dim × 1 byte (q8_0) = 131,072 bytes/token
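
The arithmetic above can be checked with a few lines of Go. This is a minimal sketch, not Ollama's code: the layer, head, and head-dimension counts come from the Technical Details section below, the 46,080-token budget is an illustrative assumption taken from the Q4_K_M row of the table, and q8_0 is modeled at its exact ~1.06 bytes/value (34 bytes per 32-value block) rather than the rounded 1 byte used above, which is why the gain lands slightly under a clean 2x.

```go
package main

import "fmt"

// Sanity check of the per-token KV cache arithmetic from the issue body.
func main() {
	const (
		layers  = 64
		kvHeads = 8
		headDim = 128

		fp16BytesPerValue = 2.0
		q8_0BytesPerValue = 34.0 / 32 // 32 int8 values + one fp16 scale per block
	)

	perTokenFP16 := 2 * layers * kvHeads * headDim * fp16BytesPerValue // K and V
	perTokenQ8 := 2 * layers * kvHeads * headDim * q8_0BytesPerValue

	fmt.Printf("FP16 KV cache: %.0f bytes/token\n", perTokenFP16) // 262144
	fmt.Printf("q8_0 KV cache: %.0f bytes/token\n", perTokenQ8)   // 139264 (131072 if rounded to 1 B/value)

	// Same KV budget that held 46,080 tokens at FP16 (Q4_K_M row above).
	budget := perTokenFP16 * 46080
	fmt.Printf("q8_0 context in the same budget: ~%.0f tokens\n", budget/perTokenQ8) // ~86,740, in line with the ~90K estimate
}
```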

Technical Details

  • EXAONE 4.0 uses standard Grouped Query Attention (GQA) — structurally compatible with Flash Attention
  • Architecture: 64 layers, 40 attention heads, 8 KV heads, 128 head dimension
  • The model supports up to 32,768 native context (extendable with RoPE)
  • Ollama version: latest (as of March 2026)

Environment

  • OS: Ubuntu 22.04 (Linux)
  • GPUs: RTX 3060 (12GB) + RTX 3090 (24GB)
  • Ollama settings: OLLAMA_FLASH_ATTENTION=1, OLLAMA_KV_CACHE_TYPE=q8_0, OLLAMA_SCHED_SPREAD=1
  • Model source: LGAI-EXAONE/EXAONE-4.0-32B-GGUF (Q4_K_M and Q5_K_M)

Request

Please add EXAONE 4.0 architecture to the Flash Attention supported model list so that q8_0/q4_0 KV cache types work correctly with this model. This would significantly improve the usable context window for EXAONE users on consumer GPUs.
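
As an illustration of the scope of the change, a minimal sketch of an architecture allowlist follows. All names here are hypothetical and this is not Ollama's actual source; it only shows that the request amounts to recognising the exaone4 architecture string wherever flash-attention eligibility is decided, so the quantized KV cache path can be selected.

```go
package main

import "fmt"

// Hypothetical allowlist keyed by GGUF architecture name. Not Ollama's real
// implementation; it only illustrates the requested change: treat "exaone4"
// as flash-attention capable so a quantized KV cache (q8_0 / q4_0) can be used.
var flashAttentionArchitectures = map[string]bool{
	"llama":   true,
	"qwen2":   true,
	"gemma2":  true,
	"exaone4": true, // the requested addition
}

// kvCacheType mirrors the fallback behaviour described in the issue: without
// flash attention, the requested quantized cache type is ignored and FP16 is used.
func kvCacheType(arch, requested string) string {
	if flashAttentionArchitectures[arch] {
		return requested
	}
	return "f16"
}

func main() {
	fmt.Println(kvCacheType("exaone4", "q8_0"))         // "q8_0" once allowlisted
	fmt.Println(kvCacheType("not-allowlisted", "q8_0")) // "f16" fallback
}
```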


@archeon-p commented on GitHub (Mar 17, 2026):

Additional context: A related issue was previously reported and closed as fixed:

  • #9605 — "EXAONE fails to run with quantized KV cache" (EXAONE 3.5, closed Oct 2025 on v0.12.5)
  • ggml-org/llama.cpp#13121 — corresponding llama.cpp issue

The fix in v0.12.5 resolved the crash (a GGML_ASSERT failure caused by EXAONE 3.5's n_embd_head_k=80 not being divisible by the q8_0 block size of 32).

However, on the current v0.18.0, EXAONE 4.0 (n_embd_head_k=128, which IS divisible by 32) still does not use quantized KV cache — it silently falls back to FP16. The model runs without errors, but the KV cache is 2x larger than necessary, limiting the usable context window.

This suggests that while the crash was fixed, Flash Attention / quantized KV cache was never actually enabled for the EXAONE architecture in the allowlist.
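
For readers unfamiliar with the constraint, here is a small Go sketch of the block arithmetic behind that assert. The block layout (32 int8 values plus one fp16 scale) is standard ggml q8_0; the head sizes are the ones quoted above.

```go
package main

import "fmt"

// q8_0 quantizes a row in blocks of 32 values (34 bytes each: 32 int8 values
// plus one fp16 scale). A K/V row of head_dim values can only be quantized
// if head_dim is a whole number of blocks.
const q8_0BlockSize = 32

func q8_0RowFits(headDim int) bool { return headDim%q8_0BlockSize == 0 }

func main() {
	fmt.Println(q8_0RowFits(80))  // false: EXAONE 3.5 (2.5 blocks, the source of the old GGML_ASSERT)
	fmt.Println(q8_0RowFits(128)) // true:  EXAONE 4.0 (exactly 4 blocks), so only the allowlist is blocking it
}
```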


@archeon-p commented on GitHub (Mar 17, 2026):

Additional finding: Tool calling is also not supported for the EXAONE architecture

Tested on Ollama v0.18.0 — the EXAONE 4.0 model does not support tool/function calling:

$ ollama show exaone4-32b-q5:latest | head -10
  Model
    architecture        exaone4
    parameters          32.0B
    context length      131072
    embedding length    5120
    quantization        Q5_K_M

  Capabilities
    completion          ← only "completion", no "tools"

Attempting to use tools returns:

{"error":"...does not support tools"}

However, the GGUF file (LGAI-EXAONE/EXAONE-4.0-32B-GGUF) contains a full tool-calling chat template in its metadata (tokenizer.chat_template), with proper <tool>, <tool_call>, and <tool_result> tags — very similar to the format used by Qwen and other models.

For comparison, Qwen models that support tools have RENDERER and PARSER directives in their Modelfile:

RENDERER qwen3.5
PARSER qwen3.5

EXAONE has neither. This means Ollama needs to implement an EXAONE-specific RENDERER and PARSER (or a generic one that works with the GGUF's built-in Jinja template) to enable tool calling.

Summary of missing EXAONE architecture support in Ollama:

  1. Flash Attention allowlist → forces FP16 KV cache fallback (2x VRAM)
  2. RENDERER/PARSER for tool calling → "does not support tools" error
  3. The GGUF itself has all necessary metadata (chat template with tool support, compatible GQA attention) — Ollama just needs to recognize it.