[GH-ISSUE #14567] Bug: gemini-3-flash-preview:cloud - Function Calling returns 400 (missing thought_signature) #71506

Open
opened 2026-05-05 01:57:29 -05:00 by GiteaMirror · 10 comments

Originally created by @rovanni on GitHub (Mar 3, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14567

What is the issue?

Description

When using gemini-3-flash-preview:cloud via Ollama Cloud with function calling / tool calling enabled, the API returns a 400 Bad Request error whenever the model triggers a tool call:

```
Ollama API error 400: {"StatusCode":400,"Status":"400 Bad Request","error":"Function call is missing a thought_signature in functionCall parts."}
```

This indicates that Ollama's function-calling integration for Gemini 3 does not yet handle the thought_signature field required by Google's Gemini 3 API for tool calls.


Steps to Reproduce

  1. Configure Ollama Cloud with the gemini-3-flash-preview:cloud model.
  2. Define one or more tools/functions in the Ollama request payload.
  3. Send a prompt that causes the model to invoke a tool.
  4. Observe the HTTP 400 error with the message above (a minimal request sketch of these steps follows this list).
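
For concreteness, here is a minimal reproduction sketch of the steps above. It assumes a local Ollama install that is signed in to Ollama Cloud at the default http://localhost:11434 endpoint; the get_weather tool and the fake weather result are illustrative and not part of the original report.

```python
import requests

OLLAMA_CHAT = "http://localhost:11434/api/chat"  # assumed default local endpoint

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What is the weather in Chicago right now?"}]

# Turn 1: the model answers with a tool call instead of text.
first = requests.post(OLLAMA_CHAT, json={
    "model": "gemini-3-flash-preview:cloud",
    "messages": messages,
    "tools": tools,
    "stream": False,
}).json()
messages.append(first["message"])

# Turn 2: replay the history plus a (fake) tool result. With this model the
# request is rejected with the 400 "missing thought_signature" error quoted above.
messages.append({"role": "tool", "content": '{"temp_c": 3, "conditions": "cloudy"}'})
second = requests.post(OLLAMA_CHAT, json={
    "model": "gemini-3-flash-preview:cloud",
    "messages": messages,
    "tools": tools,
    "stream": False,
})
print(second.status_code, second.text)
```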

Expected Behavior

Tool / function calling with gemini-3-flash-preview:cloud should work the same way it does with other Ollama Cloud models (e.g. glm-5:cloud, kimi-k2.5:cloud) — without requiring the client to manually manage or inject a thought_signature field.

Ollama should transparently handle any Gemini 3–specific protocol details (such as thought signatures) when forwarding tool calls and tool results.
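
For comparison, this is the entire client-visible conversation state for such a round trip in Ollama's /api/chat format (values are illustrative). Nothing in it carries a thought signature, which is why the field has to be tracked inside Ollama rather than supplied by the caller.

```python
# Client-visible history for a tool-calling round trip in Ollama's /api/chat format
# (illustrative values). Note there is no field anywhere for a thought signature,
# so the client cannot be the one to preserve it.
messages = [
    {"role": "user", "content": "What is the weather in Chicago right now?"},
    {
        # returned by the model on the first turn
        "role": "assistant",
        "content": "",
        "tool_calls": [
            {"function": {"name": "get_weather", "arguments": {"city": "Chicago"}}}
        ],
    },
    {
        # appended by the client before the second turn
        "role": "tool",
        "content": '{"temp_c": 3, "conditions": "cloudy"}',
    },
]
```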


Actual Behavior

When the model emits a function/tool call, Ollama forwards the call to the Gemini 3 API, but the follow-up request fails with:

```
Function call is missing a thought_signature in functionCall parts.
```

According to Google's Gemini 3 documentation, function calls include a thought_signature that must be preserved and echoed back with tool results. Ollama does not currently handle this field, causing all tool-calling flows to fail for this model.
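
For context, the sketch below shows roughly what the underlying Gemini generateContent history needs to contain on that follow-up turn. The shape is paraphrased from Google's public function-calling documentation; the signature value is a placeholder.

```python
# Sketch of the Gemini-side generateContent history for the follow-up turn, paraphrased
# from Google's public function-calling docs; the signature value is a placeholder.
# The key point: the functionCall part the model produced must be echoed back together
# with its thoughtSignature, otherwise Gemini 3 rejects the request with the 400 above.
contents = [
    {"role": "user", "parts": [{"text": "What is the weather in Chicago right now?"}]},
    {
        "role": "model",
        "parts": [{
            "functionCall": {"name": "get_weather", "args": {"city": "Chicago"}},
            "thoughtSignature": "<opaque value returned alongside the function call>",
        }],
    },
    {
        "role": "user",
        "parts": [{
            "functionResponse": {
                "name": "get_weather",
                "response": {"temp_c": 3, "conditions": "cloudy"},
            },
        }],
    },
]
```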


Environment

| Field | Value |
|-------|-------|
| Ollama Version | 0.17.4 |
| Model (failing) | gemini-3-flash-preview:cloud |
| Models (working) | glm-5:cloud, kimi-k2.5:cloud |
| Error | Function call is missing a thought_signature in functionCall parts |

Additional Context

  • Google's Gemini 3 API enforces a thought_signature mechanism: every function call response includes a signature that must be sent back alongside the tool result; omitting it causes 400 errors.
  • Multiple SDKs integrating Gemini 3 have had to add explicit support for thought_signature to make tool calling work.
  • Ollama Cloud currently does not propagate or manage thought_signature for Gemini 3, causing tool integrations that work with other models to fail specifically for gemini-3-flash-preview:cloud (a rough sketch of what such handling could look like follows this list).
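
A rough sketch of one possible handling approach, purely illustrative (the function names and the signature store are hypothetical, not Ollama internals): remember the signature that arrived with each functionCall part, and re-attach it when the equivalent tool call from the client's history is converted back into a Gemini part.

```python
import json

# Hypothetical translation-layer sketch (names and storage are illustrative, not
# Ollama internals). Keying by (function name, serialized args) is a simplification;
# a real fix would track signatures per tool call within a conversation.
_signatures: dict[tuple[str, str], str] = {}


def remember_signature(gemini_part: dict) -> None:
    """Record the signature while translating a Gemini response part into a tool call."""
    call = gemini_part.get("functionCall")
    sig = gemini_part.get("thoughtSignature")
    if call and sig:
        key = (call["name"], json.dumps(call.get("args", {}), sort_keys=True))
        _signatures[key] = sig


def to_gemini_part(tool_call: dict) -> dict:
    """Re-attach the signature when converting an Ollama-format tool call back to a Gemini part."""
    fn = tool_call["function"]
    args = fn.get("arguments", {})
    part = {"functionCall": {"name": fn["name"], "args": args}}
    sig = _signatures.get((fn["name"], json.dumps(args, sort_keys=True)))
    if sig:
        part["thoughtSignature"] = sig  # Gemini 3 rejects the turn with a 400 if this is missing
    return part
```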

Questions:

  • Is support for Gemini 3 thought_signature in tool/function calling already planned?
  • Is there a recommended workaround in the meantime (e.g. disable tools for Gemini 3, or use a different Gemini model)?

Relevant log output

```
Ollama API error 400: {"StatusCode":400,"Status":"400 Bad Request","error":"Function call is missing a thought_signature in functionCall parts."}
```

OS

Linux

GPU

No response

CPU

No response

Ollama version

0.17.4

GiteaMirror added the bug label 2026-05-05 01:57:29 -05:00

@willificent commented on GitHub (Mar 4, 2026):

Additional Finding: Tool Call Arguments Format Mismatch

I've been debugging this same issue and discovered a related problem that may help isolate the bug.

The Arguments Format Issue

When testing gemini-3-flash-preview:cloud directly via /api/chat, I found that:

  1. Gemini returns tool call arguments as an object, not a string:

```json
{
  "index": 0,
  "name": "get_weather",
  "arguments": {
    "city": "Chicago"
  }
}
```

  2. Sending tool results with arguments as a JSON string fails with a different error:

```
{"error":"Value looks like object, but can't find closing '}' symbol"}
```

  3. Sending tool results with arguments as an object works — the round-trip succeeds when the arguments are passed as a proper object rather than stringified JSON.

Test Cases

Fails (arguments as JSON string — OpenAI-compatible format):

```json
{"id": "tc1", "function": {"name": "get_weather", "arguments": "{\"city\": \"Chicago\"}"}}
```

Works (arguments as object):

```json
{"id": "tc1", "function": {"name": "get_weather", "arguments": {"city": "Chicago"}}}
```

Hypothesis

It appears there are two separate issues affecting Gemini 3 tool calling:

  1. Arguments format mismatch — Ollama's OpenAI-compatible layer may be expecting stringified arguments, but Gemini Cloud returns objects. This causes the parsing error above.

  2. Missing thought_signature — The original issue reported. Even when the format is correct, Gemini 3 requires a thought_signature field that Ollama doesn't currently preserve/forward.

The first issue can be worked around by ensuring arguments are passed as objects, but the second is a protocol-level gap that requires Ollama to track and inject the signature from the initial tool call response.
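
As a client-side stopgap for the first issue, a helper along these lines can normalize stringified arguments (the OpenAI-compatible shape) into objects before the conversation history is replayed to /api/chat. This is only a sketch and does nothing about the missing thought_signature.

```python
import json


def normalize_tool_call_arguments(messages: list[dict]) -> list[dict]:
    """Client-side stopgap: turn stringified tool-call arguments into objects
    before replaying the conversation to Ollama's /api/chat. Does not address
    the missing thought_signature."""
    for msg in messages:
        for call in msg.get("tool_calls") or []:
            args = call.get("function", {}).get("arguments")
            if isinstance(args, str):
                try:
                    call["function"]["arguments"] = json.loads(args)
                except json.JSONDecodeError:
                    pass  # leave malformed argument strings untouched
    return messages
```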

Environment

| Field | Value |
|-------|-------|
| Ollama Version | 0.17.4 |
| Model | gemini-3-flash-preview:cloud |
| Endpoint | /api/chat (native Ollama API) |

Hope this helps narrow down the fix!


@chereekana-droid commented on GitHub (Mar 6, 2026):

Hopefully, this issue will be taken seriously; it's just a simple bug.


@willificent commented on GitHub (Mar 6, 2026):

Any progress on this perhaps? It is a super simple bug and, compared to GLM-5, Gemini 3 Flash is almost exactly as smart and nearly 3x as fast. I'll see if I can do some work on a fix and submit a PR.


@Tattooed-Geek commented on GitHub (Mar 6, 2026):

I am also waiting for a fix! 💪😜


@willificent commented on GitHub (Mar 6, 2026):

Okay, I vibe coded a fix, fingers crossed this checks out at validation LOL https://github.com/ollama/ollama/pull/14676


@reflexmrl commented on GitHub (Mar 11, 2026):

Confirming this issue with OpenClaw + Gemini 3 via Ollama Cloud. Exact same error.
PR #14676 looks like the fix — looking forward to the merge! 🦞


@chereekana-droid commented on GitHub (Mar 14, 2026):

What's the latest development, my friends?


@sakuradonut commented on GitHub (Apr 14, 2026):

Still no new progress, it seems. The gema4 fixes have all been done, but Ollama has still left this Gemini 400 problem sitting.


@willificent commented on GitHub (Apr 16, 2026):

Come to think of it, Ollama even shipped a dedicated new release to support Gemma4, but it seems like nobody cares much about Gemini 3 Flash?


@krauserene89 commented on GitHub (Apr 28, 2026):

Hi team, any ETA on when this might be fixed? Would love to use Gemini 3 Flash with tool calling — it's a great model for my use case. Thanks for the awesome work! 🙏

Reference: github-starred/ollama#71506