[GH-ISSUE #15428] gemma4:26b (MoE) returns completely empty response (no content, no reasoning) and stops early on long system prompts #35621

Open
opened 2026-04-22 20:16:10 -05:00 by GiteaMirror · 9 comments
Owner

Originally created by @cenovioj-lifeline on GitHub (Apr 8, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15428

gemma4:26b (MoE) returns completely empty response (no content, no reasoning) and stops early on long system prompts

What is the issue?

When using the Gemma 4 MoE model (gemma4:26b) via the native /api/chat endpoint, the model returns a completely empty response (empty content, missing reasoning, and done_reason: "stop") if the system prompt exceeds roughly 500 characters.

This is not the known issue with the OpenAI /v1/chat/completions endpoint where the output is moved to the reasoning field (as reported in #15288). In this case, the model evaluates ~49 tokens and stops immediately, producing absolutely no output.

If the system prompt is short (e.g., < 200 chars), the model works correctly and produces output. The same long prompt works perfectly on Dense models like gemma4:31b and gemma-d-cc:latest.

Steps to reproduce

Create a long system prompt (e.g., 2000+ characters). You can use any filler text or a complex system prompt.

# Create a 2000-character dummy system prompt
SYS_PROMPT=$(printf "System instruction. %.0s" {1..100})

# Send request to native API
curl -s http://localhost:11434/api/chat -d '{
  "model": "gemma4:26b",
  "messages": [
    {"role": "system", "content": "'"$SYS_PROMPT"'"},
    {"role": "user", "content": "Who won the 2025 NCAA mens basketball championship?"}
  ],
  "stream": false
}' | jq .

Result:

{
  "model": "gemma4:26b",
  "created_at": "2026-04-08T19:59:20.718104Z",
  "message": {
    "role": "assistant",
    "content": ""
  },
  "done": true,
  "done_reason": "stop",
  "total_duration": 1265782584,
  "load_duration": 137602292,
  "prompt_eval_count": 1423,
  "prompt_eval_duration": 192706834,
  "eval_count": 49,
  "eval_duration": 911844667
}

Notice eval_count is 49 and content is "".
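
For anyone trying to pin down the threshold, a small sweep along these lines (a sketch, assuming the same local endpoint and model tag as above) prints eval_count and content length as the system prompt grows:

# Hypothetical sweep: repeat the 20-character filler phrase n times and report
# how the response changes. seq is used because brace expansion can't take a variable.
for n in 10 20 30 40 50 75 100; do
  SYS_PROMPT=$(printf "System instruction. %.0s" $(seq 1 "$n"))
  curl -s http://localhost:11434/api/chat -d '{
    "model": "gemma4:26b",
    "messages": [
      {"role": "system", "content": "'"$SYS_PROMPT"'"},
      {"role": "user", "content": "Who won the 2025 NCAA mens basketball championship?"}
    ],
    "stream": false
  }' | jq -r --arg chars "$((n * 20))" \
    '"\($chars) sys chars -> eval_count=\(.eval_count), content_len=\(.message.content | length)"'
done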

What I have ruled out

  • Modelfile/Parameters: I tested identical blobs with varied Modelfiles (removing RENDERER, PARSER, tweaking num_ctx, temperature, top_k, etc.). The bug persists across all variants.
  • Ollama Version: Reproduced on 0.20.0 and 0.20.3.
  • Prompt Content: The bug triggers even with plain ASCII filler (Lorem Ipsum) once the prompt reaches the length threshold.
  • Dense Models: gemma4:31b and gemma4:e4b handle the exact same prompt correctly. This bug is isolated to the MoE architecture or its specific quantization in the library.

Environment

  • Ollama version: 0.20.3
  • Model: gemma4:26b (SHA: 7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df)
  • OS: macOS 15 (Apple Silicon Mac Studio)
  • Configuration: flash_attention: false, num_parallel: 1
GiteaMirror added the needs more info label 2026-04-22 20:16:10 -05:00
Author
Owner

@rick-github commented on GitHub (Apr 8, 2026):

$ ollama -v
ollama version is 0.20.3
$ SYS_PROMPT=$(printf "System instruction. %.0s" {1..100})
$ curl -s http://localhost:11434/api/chat -d '{
>   "model": "gemma4:26b",
>   "messages": [
>     {"role": "system", "content": "'"$SYS_PROMPT"'"},
>     {"role": "user", "content": "Who won the 2025 NCAA mens basketball championship?"}
>   ],
>   "stream": false
> }' | jq .
{
  "model": "gemma4:26b",
  "created_at": "2026-04-08T21:05:07.69990294Z",
  "message": {
    "role": "assistant",
    "content": "The 2025 NCAA Men's Basketball Championship **has not happened yet**. The tournament is scheduled to take place in March and April 2025, with the national championship game expected to be played in early April 2025.",
    "thinking": "*   Question: \"Who won the 2025 NCAA mens basketball championship?\"\n    *   Context: The user is asking about a future event (relative to the training data/current time, or at least an event that hasn't happened yet/completed if we are in early 2025).\n\n    *   Today's Date: May 2024 (based on my internal clock/knowledge cutoff context).\n    *   NCAA Men's Basketball Tournament timing: Typically takes place in March and April.\n    *   Status of 2025 Championship: The 2025 tournament hasn't occurred yet.\n\n    *   The 2025 NCAA Men's Basketball Championship has not taken place yet.\n    *   Therefore, there is no winner.\n\n    *   Directly state that the 2025 championship hasn't happened.\n    *   (Optional) Mention when it is scheduled to happen.\n\n    *   \"The 2025 NCAA Men's Basketball Championship has not happened yet. The tournament is scheduled to take place in March and April 2025, with the championship game typically held in early April.\""
  },
  "done": true,
  "done_reason": "stop",
  "total_duration": 87121977227,
  "load_duration": 84811826593,
  "prompt_eval_count": 328,
  "prompt_eval_duration": 186029406,
  "eval_count": 314,
  "eval_duration": 1944731598
}

Server logs with OLLAMA_DEBUG=2 will aid in debugging. Note this will be a lot of output.

What's the output of

ollama list gemma4:26b
ollama show --modelfile gemma4:26b
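
For reference, one way to capture those logs on macOS (a sketch; quit the menu-bar Ollama app first so this foreground instance owns port 11434):

# Run a foreground server with verbose logging and keep a copy for attaching here;
# replay the failing curl request from a second terminal while this is running.
OLLAMA_DEBUG=2 ollama serve 2>&1 | tee ollama_debug.log
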
Author
Owner

@wiltongorske commented on GitHub (Apr 9, 2026):

Glad to see I'm not the only one experiencing this same bug! Crazy that gpt-oss:20b is still the most useful.

Author
Owner

@semidark commented on GitHub (Apr 12, 2026):

Yeah, same problem here... :-/

Author
Owner

@rick-github commented on GitHub (Apr 12, 2026):

Server logs with OLLAMA_DEBUG=2 will aid in debugging.

Author
Owner

@cymise commented on GitHub (Apr 14, 2026):

Same issue with tool calling — gemma4:26b returns empty response after tool result

Note: This analysis is based on raw packet inspection between OpenClaw and Ollama using mitmproxy (reverse proxy on Ollama port). The results were organized with the help of Claude, but all findings have been manually verified by the reporter.

Environment

  • macOS (Apple Silicon M4 Pro, 64GB)
  • Ollama 0.20.6
  • gemma4:26b (MoE, 26B-A4B)
  • Accessed via OpenClaw gateway (system prompt ~42K tokens, 10 tools defined)

Reproduction

  1. Model successfully generates a tool_call (eval_count: 1205)
  2. Tool result is sent back in the next request
  3. 🔴 Model returns empty response — eval_count: 2, eval_duration: 27ms, done_reason: "stop"
  4. After user manually sends "계속" (continue), model recovers (eval_count: 44)

Key evidence from raw Ollama response

Successful tool call (Response 2):

{"model":"gemma4:26b","message":{"role":"assistant","content":"","tool_calls":[{"id":"call_ixh3e56h","function":{"name":"exec","arguments":{"command":"mcporter call IrisEye.search_passages query=\"저전력\" search_mode=\"hybrid\" top_k=3"}}}]},"done":false}

🔴 Empty response after tool result (Response 3):

{"model":"gemma4:26b","message":{"role":"assistant","content":""},"done":true,"done_reason":"stop","total_duration":4683442791,"prompt_eval_count":44640,"prompt_eval_duration":4390705916,"eval_count":2,"eval_duration":27088083}
  • eval_count: 2 — only 2 tokens generated
  • eval_duration: 27ms — stopped almost instantly
  • prompt_eval_count: 44640 — full prompt was processed (4.4s)
  • Model chose to stop (done_reason: "stop"), was not truncated

Parameters

{"think": true, "stream": true, "options": {"num_ctx": 128000, "temperature": 0.55}}

Notes

  • This does not happen with Qwen3.5:27B (dense) under the same conditions
  • Qwen3.5:35B (MoE) also exhibits this exact same pattern — suggesting MoE architecture + long system prompt + tool calling might be the trigger. (But it’s not certain.)
  • Captured via mitmproxy reverse proxy between client and Ollama
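
One way to set up the reverse-proxy capture mentioned in the last note (a sketch; port 11435 and the flow file name are arbitrary choices):

# Put mitmproxy in front of the local Ollama server and record every request/response;
# the client (OpenClaw here) is then pointed at http://localhost:11435 instead of 11434.
mitmdump --mode reverse:http://127.0.0.1:11434 --listen-port 11435 -w ollama_flows.mitm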

Result of ollama show --modelfile gemma4:26b

# Modelfile generated by "ollama show"
# To build a new Modelfile based on this, replace FROM with:
# FROM gemma4:26b

FROM /Users/USER/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df
TEMPLATE {{ .Prompt }}
RENDERER gemma4
PARSER gemma4
PARAMETER temperature 1
PARAMETER top_k 64
PARAMETER top_p 0.95
LICENSE """                                Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License."""

Attached files

Raw requests (system prompt redacted, tool results truncated):

  • request_2_redacted.json — Request that produced successful tool_call response (36 messages, ends with [..., "assistant", "user"]). Model generated 1205 tokens and called exec to run a search tool. This is the "working" baseline.
  • request_3_redacted.json — 🔴 Request that triggered the empty response (38 messages, ends with [..., "assistant", "tool"]). Contains the tool result from request 2's tool_call. Model processed 44,640 prompt tokens but generated only 2 tokens before stopping.
  • request_4_redacted.json, request_5_redacted.json — Request after user manually sent "계속" (continue) to recover (40 messages, ends with [..., "assistant", "user"]). Model recovered with eval_count: 44.

Analysis:

  • empty_response_raw.md — raw Ollama response sequence showing the bug
  • failed_tool_2_redacted.md — 7 responses, 1 of them empty

OLLAMA_DEBUG=2 log:

  • ollama_debug_redacted_v2.log — Full OLLAMA_DEBUG=2 server log captured while reproducing the bug. The bug was reproduced deterministically by replaying the captured request via curl. Personal information (usernames, file paths, system prompt content) has been redacted; all technical debug output (model loading, GPU memory, batch processing, token decoding, EOS detection) is preserved intact.
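
As a concrete example of the replay mentioned above (a sketch, assuming the redacted request file from the attachments), something like this re-triggers the failure and prints only the fields that matter:

# The captured request uses "stream": true, so only the final chunk (done == true)
# carries eval_count and done_reason; select(.done) picks it out in either mode.
curl -s http://localhost:11434/api/chat -d @request_3_redacted.json \
  | jq 'select(.done) | {eval_count, done_reason, content: .message.content}'
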
Author
Owner

@maxbanton commented on GitHub (Apr 14, 2026):

Additional repro data: tool calling fails ~66% with longer prompts

Environment

  • Ollama 0.20.7, macOS, Apple M5 Pro
  • gemma4:26b

Reproduction

Tool with 4 required fields, nested arrays. Short user message (~300 prompt tokens): 3/3 success. Longer user message (~1000 prompt tokens): 1/3 success.

Failure behavior

  • 6000-8000 thinking tokens (reasoning is correct)
  • Model emits stop without ever generating <|tool_call>
  • content: "", tool_calls: null, done_reason: "stop"

Things that DON'T help

  • num_ctx: 32768 — worse (0/3 vs 1/3 at 16384)
  • think: false — 0/3 (matches #15260)
  • num_predict: 4096 — thinking consumes entire budget
  • temperature: 0.3 — no change

Key observation

Same prompt WITHOUT tools array (JSON schema in system prompt instead): 3/3 success every time. Model reasons correctly and produces valid JSON — it just fails to transition from thinking to <|tool_call>.

Minimal repro

curl -s http://localhost:11434/api/chat -d '{
  "model": "gemma4:26b",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant. Call the create_report tool."},
    {"role": "user", "content": "Topic: quarterly review\nConstraints: must cover sales, marketing, engineering, support\nExclude: do not mention the following topics: restructuring, layoffs, budget cuts, salary freezes, hiring freezes, office closures, vendor cancellations, contract renegotiations, travel bans, equipment freezes, subscription cancellations, training cuts, internship program, holiday party, team offsites, conference sponsorships, charitable donations, stock buyback\nPlease generate a report."}
  ],
  "tools": [{"type":"function","function":{"name":"create_report","description":"Create a structured report","parameters":{"type":"object","required":["title","sections","summary"],"properties":{"title":{"type":"string"},"sections":{"type":"array","items":{"type":"object","properties":{"heading":{"type":"string"},"body":{"type":"string"}}}},"summary":{"type":"string"}}}}}],
  "stream": false,
  "options": {"num_ctx": 16384}
}'

Run 3-5 times. Expect ~33% success rate. Remove the "Exclude" line and success rate jumps to ~100%.
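
To put a number on that without eyeballing each run, a small harness like this (a sketch; it assumes the JSON body above has been saved to req.json) counts how often a tool call actually comes back:

# Hypothetical harness: replay the same request N times and count responses in
# which .message.tool_calls is present and non-empty (jq maps null to length 0).
ok=0; runs=5
for i in $(seq 1 "$runs"); do
  calls=$(curl -s http://localhost:11434/api/chat -d @req.json \
            | jq '.message.tool_calls | length')
  [ "${calls:-0}" -gt 0 ] && ok=$((ok + 1))
done
echo "tool_call produced in $ok/$runs runs"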

Author
Owner

@rparo20 commented on GitHub (Apr 18, 2026):

Tried to reproduce this on batiai/gemma4-26b:q4 (imatrix GGUF quantized directly from Google's BF16, running on Ollama 0.20.x) — couldn't trigger the
empty-content failure across five configurations:

| system prompt                     | user prompt          | eval_count | content chars | done_reason |
|-----------------------------------|----------------------|------------|---------------|-------------|
| 1.8 KB ASCII filler               | NCAA 2025 question   | 378        | 247           | stop        |
| 4.7 KB ASCII filler               | NCAA 2025 question   | 279        | 180           | stop        |
| 10 KB ASCII filler                | NCAA 2025 question   | 272        | 183           | stop        |
| 5 KB realistic engineering prompt | "explain IQ3 vs Q3"  | 893        | 796           | stop        |
| 4.7 KB ASCII filler               | "2 + 2 = ?"          | 78         | 9             | stop        |

All returned sensible replies — no eval_count ≈ 49 + empty content pattern at any prompt length (tested up to 10 KB).

Since the architecture itself seems fine, this smells more like a specific-GGUF-metadata issue than a gemma4:26b architecture bug. Might be worth asking OP to:

  • Share which tag/sha they pulled from (unsloth vs Ollama official vs other)
  • Rerun against a second gemma4:26b quant source for comparison
  • Include ollama show <model> output so we can see the Modelfile + template

Reference tag used above: ollama pull batiai/gemma4-26b:q4 · https://huggingface.co/batiai/Gemma-4-26B-A4B-it-GGUF

(Disclosure: we publish batiai/ — flagging since we haven't seen this failure mode ourselves, might help narrow down whether this is source-specific.)
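
For that comparison, a quick side-by-side (a sketch; it assumes both tags are already pulled and reuses the filler prompt from the original report) could look like:

# Send the identical request to both quant sources and compare the two fields
# that distinguish the bug: generated token count and content length.
SYS_PROMPT=$(printf "System instruction. %.0s" {1..100})
for model in gemma4:26b batiai/gemma4-26b:q4; do
  curl -s http://localhost:11434/api/chat -d '{
    "model": "'"$model"'",
    "messages": [
      {"role": "system", "content": "'"$SYS_PROMPT"'"},
      {"role": "user", "content": "Who won the 2025 NCAA mens basketball championship?"}
    ],
    "stream": false
  }' | jq -r --arg m "$model" \
    '"\($m): eval_count=\(.eval_count), content_len=\(.message.content | length)"'
done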

Author
Owner

@PureBlissAK commented on GitHub (Apr 18, 2026):

🤖 Automated Triage & Analysis Report

Issue: #15428
Analyzed: 2026-04-18T18:21:32.621736

Analysis

  • Type: unknown
  • Severity: medium
  • Components: unknown

Implementation Plan

  • Effort: medium
  • Steps:

This issue has been triaged and marked for implementation.

Author
Owner

@nvcnvn commented on GitHub (Apr 20, 2026):

Just want to mention one similar issue here https://github.com/ml-explore/mlx-lm/issues/1125

Reference: github-starred/ollama#35621