[GH-ISSUE #15350] Gemma 4 31B Dense Specific Issue: Flash Attention hangs indefinitely on large prompt eval (>3-4K tokens) — CUDA/RTX 3090 #9819

Closed
opened 2026-04-12 22:41:15 -05:00 by GiteaMirror · 16 comments
Owner

Originally created by @ncb0606 on GitHub (Apr 5, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15350

What is the issue?

Flash Attention causes Gemma 4 31B Dense to hang indefinitely during prompt evaluation when the prompt exceeds ~3-4K tokens. Short prompts work perfectly at full speed. The 26B MoE variant handles the same large prompts without issue — the bug is specific to the Dense model.

This blocks all agentic use cases (OpenClaw, coding agents, any tool with a system prompt) since those tools inject 10-20K+ tokens of system prompt, tools, memory, and context before the user's message.

Environment

  • OS: Ubuntu 24.04
  • GPU: NVIDIA RTX 3090 (24GB)
  • Ollama: v0.20.2
  • Model: gemma4:31b (Q4_K_M, ~20GB) and gemma4:26b for comparison
  • CUDA: 12.x
  • Settings: OLLAMA_FLASH_ATTENTION=1, OLLAMA_KV_CACHE_TYPE=q4_0

Key finding: Dense hangs, MoE doesn't

Same server, same FA settings, same KV cache, same prompt (~8K tokens of system prompt):

| Model | Architecture | Result | Prompt Eval | Time |
|-------|--------------|--------|-------------|------|
| gemma4:26b | MoE (4B active) | ✅ Works | 8,021 tokens | 88s |
| gemma4:31b | Dense (31B all active) | ❌ HANG | 0 tokens processed | >120s, 0% GPU |

The MoE model processes the same large prompt successfully. The Dense model hangs with 0% GPU utilization — it's not slow processing, it's a complete stall.

Systematic test results (Dense model only)

All tests on same hardware, one variable changed at a time:

| Test | FA | KV Cache | Prompt Size | Result | Notes |
|------|----|----------|-------------|--------|-------|
| 1 | ON | q4_0 | ~13K tokens (system prompt) | ❌ HANG | GPU 0% utilization, indefinite |
| 2 | ON | f16 | ~13K tokens (system prompt) | ❌ HANG | Same — KV type doesn't matter |
| 3 | OFF | f16 | ~13K tokens (system prompt) | ✅ Works | ~40s, CPU offload, ~6 tok/s |
| 4 | OFF | q4_0 | ~13K tokens (system prompt) | ✅ Works | Falls back to f16 silently |
| 5 | ON | q4_0 | ~26 tokens (short prompt) | ✅ Works | 30 tok/s, instant |
| 6 | ON | q4_0 | ~2,479 tokens | ✅ Works | 134 tok/s prompt eval |
| 7 | ON | q4_0 | ~3,541 tokens | ✅ Works | 74 tok/s prompt eval |
| 8 | ON | q4_0 | ~8K+ tokens (agent payload) | ❌ HANG | 3+ min, 0% GPU, aborted |

The pattern: FA + Dense model works under ~3-4K tokens, hangs above that threshold. FA + MoE works at all sizes. FA off + Dense works at all sizes (slowly, with CPU offload).
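
To pin the threshold down further, one option is a small probe script that walks the prompt size upward and times each request. This is only a sketch — the step sizes, the 60-second timeout, and the rough tokens-per-repeat estimate are assumptions, not measurements from this report; it uses the same /api/generate endpoint and options as the repro below.

```python
# Sketch: probe increasing prompt sizes against a local Ollama server to find
# where FA + Dense starts to hang. Assumes Ollama is on :11434 with
# OLLAMA_FLASH_ATTENTION=1; step sizes and timeout are arbitrary choices.
import json
import time
import urllib.request

MODEL = "gemma4:31b"
PHRASE = "You are a helpful AI assistant with extensive knowledge. "  # roughly 10 tokens

def probe(repeats, timeout=60):
    """Send a prompt of PHRASE * repeats; return elapsed seconds, or None on timeout."""
    body = json.dumps({
        "model": MODEL,
        "prompt": PHRASE * repeats,
        "stream": False,
        "options": {"num_predict": 5},
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    start = time.time()
    try:
        urllib.request.urlopen(req, timeout=timeout).read()
        return time.time() - start
    except Exception:
        return None  # timed out or connection dropped -> likely the hang

for repeats in (50, 150, 250, 350, 450, 800):
    elapsed = probe(repeats)
    approx_tokens = repeats * 10  # rough estimate only
    print(f"~{approx_tokens:5d} tokens:",
          f"{elapsed:.1f}s" if elapsed is not None else "TIMEOUT (possible hang)")
```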

Steps to reproduce

```bash
# Start Ollama with FA enabled
OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q4_0 ollama serve

# This works instantly (short prompt):
curl http://localhost:11434/api/generate \
  -d '{"model":"gemma4:31b","prompt":"Say hello","stream":false,"options":{"num_predict":5}}'
# Returns in <1 second

# This hangs forever (large prompt, Dense model):
python3 -c "
import json, sys
large_prompt = 'You are a helpful AI assistant with extensive knowledge. ' * 800
payload = json.dumps({'model':'gemma4:31b','messages':[{'role':'system','content':large_prompt},{'role':'user','content':'Say hello in one sentence.'}],'stream':False,'options':{'num_predict':20}})
sys.stdout.write(payload)
" | curl -s -m 120 -X POST http://localhost:11434/api/chat \
  -H 'Content-Type: application/json' -d @-
# Hangs indefinitely. GPU shows 0% utilization via nvidia-smi.
# Returns empty after timeout.

# Same payload works fine with MoE model:
# Change gemma4:31b → gemma4:26b in the above command
# Completes in ~88 seconds with 8,021 tokens processed

# Same payload works fine with FA disabled (Dense model):
# Set OLLAMA_FLASH_ATTENTION=0, restart, run the 31b command
# Completes in ~40s with CPU offload at ~6 tok/s
```

Why this matters — blocks all agentic use cases

This blocks every agentic use case for the Gemma 4 Dense model on Ollama:

  • OpenClaw injects ~27K chars (~8-10K tokens) of bootstrap, tools, memory, and system prompt. Multiple open issues trace back to this root cause: openclaw/openclaw#59916 (Gemma 4 hangs, filed 3 days ago), openclaw/openclaw#41871, openclaw/openclaw#31399, openclaw/openclaw#24756 — all reporting "local Ollama hangs, direct curl works fine." The community is filing these against OpenClaw, but the root cause is here in Ollama's FA implementation.
  • OpenCode, Continue, and other coding agents send large system prompts with tool definitions
  • Any application using Ollama's /api/chat with a system prompt + conversation history exceeding ~3-4K tokens

Gemma 4 31B Dense is the #1 ranked dense model in its class right now. OpenClaw is the #1 open-source agent platform. The intersection of these two is completely broken for anyone running locally with FA enabled on NVIDIA GPUs.

Analysis

The bug is in how Ollama's FA implementation handles the Dense model's attention during large batched prefill:

  • Gemma 4 uses a hybrid attention architecture: 50 sliding window layers (512-1024 token window) + 10 global attention layers (a rough mask sketch follows this list)
  • The Dense model processes all 31B parameters on every token
  • The MoE model only activates 4B parameters per token via expert routing — this appears to change how FA processes the batched prefill, explaining why MoE succeeds
  • Token-by-token generation works fine with FA (short prompts succeed because the prefill batch is small)
  • The hang is specifically in FA processing a large batch through the Dense model's full-width hybrid attention layers simultaneously
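
For reference, here is a rough, engine-agnostic sketch of the two mask shapes involved. This is illustrative only — it is not Ollama's or llama.cpp's FA code; the 512-token window is one of the values mentioned above.

```python
# Illustrative only: the two causal mask shapes a hybrid-attention model mixes.
# Not Ollama's implementation — just what "sliding window vs global" means for
# which positions each token can attend to during prefill.
def global_mask(n):
    """Full causal mask: token i may attend to every earlier token j <= i."""
    return [[j <= i for j in range(n)] for i in range(n)]

def sliding_window_mask(n, window=512):
    """Causal mask limited to the last `window` positions: i - window < j <= i."""
    return [[j <= i and i - j < window for j in range(n)] for i in range(n)]

# Tiny example (real prefill batches here are thousands of tokens wide).
# A dense model runs every layer at full width for every token in the batch,
# so any FA inefficiency on these masks is paid across the whole 31B stack;
# the MoE variant only routes ~4B active parameters per token.
g = global_mask(8)
s = sliding_window_mask(8, window=3)
print(sum(g[7]), sum(s[7]))  # 8 attendable positions vs 3
```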

Gemma 3 precedent

Gemma 3 had the same architecture (sliding window + global attention) and required specific FA fixes in earlier Ollama releases:

  • #9683, #8158 — KV cache + FA speed issues with Gemma 3
  • #9857 — Gemma 3 27B on RTX 3090 becoming unresponsive (same GPU, same arch family)
  • Ollama changelog notes prior fixes: "Fixed handling of long contexts with Gemma 3 models" and "Flash attention is now enabled by default for Gemma 3"

PR #15296 enabled FA for Gemma 4 but may not have included the equivalent large-batch prefill handling that was eventually added for Gemma 3.

Related issues

  • #15258 — Gemma 4 hanging on M4 Macs (fixed by PR #15296, but didn't address large prompt eval)
  • #15237 — Gemma 4 on 5090 showing GPU→CPU jump with FA
  • #15286 — Gemma 4 31B performance issues on M1 Max

@rick-github commented on GitHub (Apr 5, 2026):

Was the context length configured?


@ncb0606 commented on GitHub (Apr 5, 2026):

> Was the context length configured?

Yes, context was explicitly configured. Ollama 0.20.2, OLLAMA_FLASH_ATTENTION=1, KV cache q4_0, RTX 3090 24GB.
The key finding is that both models were tested on the same server with identical settings. The MoE variant (gemma4:26b, 4B active params) completed an ~8K token payload in 88 seconds. The Dense variant (gemma4:31b, all 31B params active) hung indefinitely on the same payload — GPU utilization at 0%, not slow processing but a complete stall during prompt eval. Ollama log showed POST /api/chat | 500 | 3m0s.
Direct curl with a short prompt returns instantly on the Dense model — the model works. It's specifically large batched prefill that triggers the hang, and only on the Dense architecture.


@rick-github commented on GitHub (Apr 5, 2026):

[Server logs](https://docs.ollama.com/troubleshooting) will aid in debugging.


@ncb0606 commented on GitHub (Apr 5, 2026):

> [Server logs](https://docs.ollama.com/troubleshooting) will aid in debugging.

See attached.

[ollama-gemma4-dense-logs.txt](https://github.com/user-attachments/files/26491338/ollama-gemma4-dense-logs.txt)


@rick-github commented on GitHub (Apr 5, 2026):

Set OLLAMA_DEBUG=1 in the server environment, repeat the test, post the full log.


@ncb0606 commented on GitHub (Apr 5, 2026):

> Set OLLAMA_DEBUG=1 in the server environment, repeat the test, post the full log.

Setup:

  • Ollama v0.20.2
  • OLLAMA_FLASH_ATTENTION=1
  • OLLAMA_KV_CACHE_TYPE=q4_0
  • OLLAMA_DEBUG=1
  • GPU: NVIDIA GeForce RTX 3090, compute 8.6, 24 GiB VRAM
  • CUDA v13 library selected
  • Model: gemma4:31b (Q4_K_M, 1189 tensors, Dense — all 31B params active)

Test: ~30K char system prompt + short user message via curl to /api/chat (stream=false). Timeout set to 90s client-side.

What happened:

  1. Model loaded successfully — all 61/61 layers offloaded to GPU in 3.54s
  2. Weights: 18.4 GiB on CUDA0, 1.2 GiB on CPU
  3. KV cache: 1.7 GiB on CUDA0
  4. Completion request received: prompt=30101 tokens
  5. Cache slot begins loading: loading cache slot id=0 cache=0 prompt=5022 used=0 remaining=5022
  6. Then: no further output for 86 seconds — GPU hangs during prompt eval
  7. Client timeout fires → context cancelled → 500

The last line before the hang is the cache slot load. No progress, no errors, no further debug output until the request is cancelled.

[ollama-debug-full.log](https://github.com/user-attachments/files/26491466/ollama-debug-full.log)


@rick-github commented on GitHub (Apr 5, 2026):

> 6. Then: no further output for 86 seconds — GPU hangs during prompt eval

More output will be generated if OLLAMA_DEBUG=2. It can be quite voluminous which is why I chose 1 earlier, but it may give insight into what the model is doing for those 86 seconds.


@ncb0606 commented on GitHub (Apr 5, 2026):

Voluminous indeed, though it looks worth it. See below and attached:

Set OLLAMA_DEBUG=2 and repeated the test. 10,693 lines of trace output — very illuminating.

It's not a deadlock. It's a progressive per-batch slowdown during prompt eval. Each batch takes ~2-3s longer than the previous:

| Batch | Timestamp | Duration |
|-------|-----------|----------|
| 0 | 22:38:17 → 22:38:19 | ~2s |
| 1 | 22:38:19 → 22:38:24 | ~5s |
| 2 | 22:38:24 → 22:38:31 | ~7s |
| 3 | 22:38:31 → 22:38:41 | ~10s |
| 4 | 22:38:41 → 22:38:54 | ~13s |
| 5 | 22:38:54 → 22:39:09 | ~15s |
| 6 | 22:39:09 → 22:39:27 | ~18s |
| 7 | 22:39:27 → 22:39:50 | ~23s |
| 8 | 22:39:50 → never (client timeout) | — |

The pipeline is working — batches complete, logits are produced, the next batch starts. But each successive batch takes longer, and by batch 8 the cumulative time exceeds the client timeout.

The last two lines before the stall:

"forwardBatch waiting for compute to start" pendingBatch.id=9
"computeBatch: waiting for inputs to be ready" batchID=9

Batch 9 is assembled (414 inputs) and waiting for batch 8 to produce logits. Batch 8 never finishes before the 90s client timeout.

Key detail: Each batch processes 512 inputs (batch size from load config: BatchSize:512), except the last, which processes the remainder. With ~5,022 prompt tokens to eval, that's ~10 batches. The compounding per-batch slowdown means the total time grows superlinearly — a prompt that would take ~15s if every batch ran at batch-0 speed instead takes 90s+ and never finishes.
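
A quick sanity check on those numbers, using only the per-batch durations from the table above:

```python
# Back-of-the-envelope check using the per-batch durations observed in the
# DEBUG=2 log above (~10 batches needed for ~5,022 prompt tokens at BatchSize=512).
observed = [2, 5, 7, 10, 13, 15, 18, 23]  # seconds, batches 0-7

elapsed_before_batch_8 = sum(observed)
print(elapsed_before_batch_8)  # 93 -> already past the 90s client timeout

# If the ~+3s/batch trend continued, batches 8 and 9 would add roughly 26s + 29s,
# putting the full prefill near 150s — versus roughly 15-20s if every batch ran
# near batch-0 speed.
```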

[ollama-debug2-full.log](https://github.com/user-attachments/files/26491595/ollama-debug2-full.log)


@rick-github commented on GitHub (Apr 5, 2026):

Flash Attention is known to make this model run slowly, and the PR that enabled it by default was [rolled back](https://github.com/ollama/ollama/pull/15311) due to performance impact. So this issue is really that the model doesn't respond before the client times out. Since the team are in favour of enabling FA where a model supports it, I would say disable it in your server for the moment and wait for the ollama team to implement a performant version for this model.


@ncb0606 commented on GitHub (Apr 5, 2026):

Ah, disabling it makes it kinda pointless to run on the 3090. But I hear what you're saying. I made this issue to draw attention to the problem and to show that a fix is wanted. What an incredible model, just limited by the fact that it can't do the quants.

Appreciate you looking into it.


@directorboint-arch commented on GitHub (Apr 6, 2026):

> Flash Attention is known to make this model run slowly, and the PR that enabled it by default was [rolled back](https://github.com/ollama/ollama/pull/15311) due to performance impact. So this issue is really that the model doesn't respond before the client times out. Since the team are in favour of enabling FA where a model supports it, I would say disable it in your server for the moment and wait for the ollama team to implement a performant version for this model.

So the model will need to be fixed, not the engine?

I'm asking because if it's a model thing, I'll need to find new uncensored versions as well. But if it's an engine thing, I can just wait... patiently...


@rick-github commented on GitHub (Apr 6, 2026):

Engine thing. But a modified ("uncensored") version of gemma4 is unlikely to run on ollama, at least until the next vendor sync: https://github.com/ollama/ollama/issues/14575#issuecomment-3989918451


@directorboint-arch commented on GitHub (Apr 6, 2026):

> Engine thing. But a modified ("uncensored") version of gemma4 is unlikely to run on ollama, at least until the next vendor sync: [#14575 (comment)](https://github.com/ollama/ollama/issues/14575#issuecomment-3989918451)

Thank you! And, you're correct, most don't run (26B and 31B), but I managed to find a 31B that does. =) Though it is VERY slow.....


@Bestig commented on GitHub (Apr 6, 2026):

> Flash Attention is known to make this model run slowly, and the PR that enabled it by default was [rolled back](https://github.com/ollama/ollama/pull/15311) due to performance impact. So this issue is really that the model doesn't respond before the client times out. Since the team are in favour of enabling FA where a model supports it, I would say disable it in your server for the moment and wait for the ollama team to implement a performant version for this model.

Sorry, is there any news/plans/dates on adding FA support for gemma4 without the model running slowly?
I mean an ETA (like day/week/month, etc.)


@directorboint-arch commented on GitHub (Apr 6, 2026):

> > Flash Attention is known to make this model run slowly, and the PR that enabled it by default was [rolled back](https://github.com/ollama/ollama/pull/15311) due to performance impact. So this issue is really that the model doesn't respond before the client times out. Since the team are in favour of enabling FA where a model supports it, I would say disable it in your server for the moment and wait for the ollama team to implement a performant version for this model.
>
> Sorry, is there any news/plans/dates on adding FA support for gemma4 without the model running slowly? I mean an ETA (like day/week/month, etc.)

We all like ETAs, I get it. But, now that I've been debugging things for a while, I see that we have no idea how long it will take. What we think will fix it might not work, might break other things, or might lead to a realization that things are even worse than we thought and we have more problems to solve. It's not like sitting down to do the dishes and knowing it will take 10 minutes. It's more like remodeling your bathroom: you rip up the tiles and find mold, then you see the mold has eaten through your walls, then you see you've got termites now that the walls are open, and then you notice some rats were chewing on the electric wires... And so what might have been a job that took a few days is now gonna take weeks...


@rick-github commented on GitHub (Apr 6, 2026):

There is [prior art](https://github.com/ggml-org/llama.cpp/pull/20998) so it may be more like pulling up the old carpet to find some weathered but serviceable hardwood floor underneath, just needs a clean and a polish and a check for powderpost beetles. How long that takes is known only to the maintainers.

Reference: github-starred/ollama#9819