[GH-ISSUE #15395] gemma-4-26b-a4b-it-q8_0 runs slower with OLLAMA_FLASH_ATTENTION enabled after update to 0.20.3. #71905

Closed
opened 2026-05-05 02:55:18 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @SingularityMan on GitHub (Apr 7, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15395

What is the issue?

This is a follow-up to issue #15237, which was fixed.

While the model now loads 100% on GPU with flash_attention enabled, it runs ~20 t/s slower than with it disabled after the most recent merge. The difference in generation speed is visible in the log outputs below: the ~40 t/s runs are with OLLAMA_FLASH_ATTENTION=1, and the ~60 t/s runs are with flash attention set to 0.

It seems like this still needs a bit of work. Can anyone else confirm?
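For anyone trying to confirm, a minimal A/B sketch on Windows (the model tag is the one from this report; this assumes the server picks up the variable at startup, so restart `ollama serve` between runs):

```shell
:: Run 1: flash attention enabled
set OLLAMA_FLASH_ATTENTION=1
ollama serve
:: in a second terminal:
ollama run gemma4:26b-a4b-it-q8_0 --verbose

:: Run 2: flash attention disabled
set OLLAMA_FLASH_ATTENTION=0
ollama serve
ollama run gemma4:26b-a4b-it-q8_0 --verbose

:: Compare the "eval rate" line printed after each reply.
```

Using the same short prompt in both runs keeps the eval counts comparable.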

Relevant log output

ollama ps
NAME                      ID              SIZE     PROCESSOR    CONTEXT    UNTIL
gemma4:26b-a4b-it-q8_0    6bfaf9a8cb37    31 GB    100% GPU     128000     Forever

C:\Users\carlo>ollama run gemma4:26b-a4b-it-q8_0 --verbose
>>> hello
Thinking...
The user said "hello".
This is a standard greeting.

    *   Acknowledge the greeting.
    *   Offer assistance.
    *   Keep it friendly and professional.
"Hello! How can I help you today?" or "Hi there! Is there anything I can assist you with?"
...done thinking.

Hello! How can I help you today?

total duration:       10.6211288s
load duration:        143.4942ms
prompt eval count:    16 token(s)
prompt eval duration: 8.5065501s
prompt eval rate:     1.88 tokens/s
eval count:           79 token(s)
eval duration:        1.8325572s
eval rate:            43.11 tokens/s
>>> set nothink
Thinking...
"set nothink"
The user wants me to adopt a specific persona or behavioral mode called "nothink".
"Nothink" isn't a standard or widely known system command or a common internet slang term for a specific AI persona (like "DAN" or "Sycophant"). It sounds like it could mean "don't think," "no thought," or perhaps a
minimalist/robotic mode.

    *   *Hypothesis 1: Minimalist/Zero output.* The user wants me to respond with as little as possible or nothing at all.

>>> /set nothink
Set 'nothink' mode.
>>> ok can you read this?
Yes, I can! I am ready to read whatever you paste or type here.

Please go ahead and share the text, document, or link you would like me to look at.

total duration:       1.5509014s
load duration:        134.9002ms
prompt eval count:    49 token(s)
prompt eval duration: 356.973ms
prompt eval rate:     137.27 tokens/s
eval count:           44 token(s)
eval duration:        1.0314488s
eval rate:            42.66 tokens/s
>>> /set nothink
Set 'nothink' mode.
>>> hello
Hello! How can I help you today?

total duration:       699.8407ms
load duration:        140.1261ms
prompt eval count:    10 token(s)
prompt eval duration: 259.903ms
prompt eval rate:     38.48 tokens/s
eval count:           14 token(s)
eval duration:        213.2984ms
eval rate:            65.64 tokens/s
>>> generate a paragraph
Since you didn't specify a topic, I have written a descriptive paragraph about a peaceful morning in a forest:

The forest awoke slowly under a soft blanket of morning mist, as the first golden rays of sunlight began to pierce through the dense canopy of ancient oaks. A gentle breeze whispered through the leaves, carrying the sweet, earthy
scent of damp moss and pine needles. Somewhere in the distance, a lone songbird began its melodic ritual, its bright notes echoing through the quiet stillness of the woods. Dewdrops clung to the edges of ferns like tiny, scattered
diamonds, shimmering momentarily before vanishing into the warming air. It was a moment of profound stillness, where time seemed to pause, allowing the natural world to breathe in perfect, rhythmic harmony.

**If you would like a paragraph about something specific (like technology, a spooky story, a business email, or a historical event), just let me know!**

total duration:       3.4735661s
load duration:        252.6652ms
prompt eval count:    32 token(s)
prompt eval duration: 294.0344ms
prompt eval rate:     108.83 tokens/s
eval count:           189 token(s)
eval duration:        2.8577232s
eval rate:            66.14 tokens/s
>>>

C:\Users\carlo>ollama run gemma4:26b-a4b-it-q8_0 --verbose
>>> hello
Thinking...
The user said "hello".
The user is initiating a conversation.

    *   Acknowledge the greeting.
    *   Be friendly and helpful.
    *   Offer assistance.
"Hello! How can I help you today?" or "Hi there! Is there anything I can assist you with?"
...done thinking.

Hello! How can I help you today?

total duration:       1.7863134s
load duration:        142.0114ms
prompt eval count:    16 token(s)
prompt eval duration: 324.5737ms
prompt eval rate:     49.30 tokens/s
eval count:           79 token(s)
eval duration:        1.1945745s
eval rate:            66.13 tokens/s
>>> /set nothink
Set 'nothink' mode.
>>> hello
Hello again! Is there anything specific you'd like to talk about or anything I can help you with?

total duration:       836.1454ms
load duration:        138.938ms
prompt eval count:    30 token(s)
prompt eval duration: 274.4779ms
prompt eval rate:     109.30 tokens/s
eval count:           27 token(s)
eval duration:        409.6199ms
eval rate:            65.91 tokens/s
>>>

C:\Users\carlo>ollama run gemma4:26b-a4b-it-q8_0 --verbose
>>> /set nothink
Set 'nothink' mode.
>>> hello
Hello! How can I help you today?

total duration:       911.1178ms
load duration:        146.1256ms
prompt eval count:    10 token(s)
prompt eval duration: 350.6307ms
prompt eval rate:     28.52 tokens/s
eval count:           14 token(s)
eval duration:        320.0643ms
eval rate:            43.74 tokens/s
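For reference, the `eval rate` that ollama prints is simply `eval count` divided by `eval duration`. A quick sketch using figures copied from the transcripts above reproduces the reported gap between the two "hello" runs (which reply with the same 14 tokens):

```python
# Sanity-check ollama's --verbose stats:
# eval rate (tokens/s) = eval count / eval duration (seconds).
def eval_rate(tokens: int, seconds: float) -> float:
    """Tokens generated per second, as ollama reports it."""
    return tokens / seconds

# Figures copied from the log output above.
slow = eval_rate(14, 0.3200643)  # last session's "hello" reply
fast = eval_rate(14, 0.2132984)  # earlier "hello" reply, same token count

print(f"{slow:.2f} t/s vs {fast:.2f} t/s")  # matches 43.74 vs 65.64
```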

OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.20.3

GiteaMirror added the bug label 2026-05-05 02:55:18 -05:00

@rick-github commented on GitHub (Apr 7, 2026):

Fixed by #15378 which will be in the 0.20.4 release.

<!-- gh-comment-id:4200541959 -->

Reference: github-starred/ollama#71905