[GH-ISSUE #14784] Ollama Cloud: Gemini 3 tool-calling fails (missing thought_signature) #35313

Closed
opened 2026-04-22 19:44:18 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @reflexmrl on GitHub (Mar 11, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14784

Description:
Gemini 3 models served via Ollama Cloud (ollama.com) fail during tool-calling loops. The first tool call is generated correctly, but when the tool result is sent back to continue the conversation, the API returns a 400 Bad Request.

Error:
"Function call is missing a thought_signature in functionCall parts."

Context:
This error stems from Gemini 3's safety mechanism, which requires a thought_signature to be preserved across tool-calling turns. The Ollama native API (/api/chat) apparently neither exposes nor persists this signature when proxying Gemini 3 models, making tool calling impossible for Gemini 3 on the cloud platform.

Tested with:

  • Endpoint: https://ollama.com
  • API: Native Ollama (non-streaming)
  • Model: gemini-3-flash-preview:cloud (and others)
  • Client: OpenClaw
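
Reproduction sketch (Python). This builds the two-turn /api/chat conversation a client like OpenClaw would send; the request shapes follow the documented Ollama chat API, while the model name and tool definition are illustrative. Nothing here is sent over the network, but POSTing `second_request` to https://ollama.com/api/chat triggers the 400 above:

```python
import json

MODEL = "gemini-3-flash-preview:cloud"  # model from this report; any Gemini 3 cloud model reproduces it

# Turn 1: user message plus a tool definition; the model answers with a tool call.
first_request = {
    "model": MODEL,
    "stream": False,
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # illustrative tool
            "description": "Look up current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}

# Assistant message as returned by /api/chat. Note: the response carries the
# tool call, but no field holding Gemini 3's thought_signature, so the client
# has nothing to echo back on the next turn.
assistant_turn = {
    "role": "assistant",
    "content": "",
    "tool_calls": [{"function": {"name": "get_weather",
                                 "arguments": {"city": "Paris"}}}],
}

# Turn 2: replay the assistant turn plus the tool result. This is the request
# that fails with:
#   "Function call is missing a thought_signature in functionCall parts."
second_request = {
    "model": MODEL,
    "stream": False,
    "messages": first_request["messages"] + [
        assistant_turn,
        {"role": "tool", "content": json.dumps({"temp_c": 14})},
    ],
}
print(json.dumps(second_request, indent=2))
```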

Expected behavior:
The thought_signature should be persisted server-side by Ollama Cloud or exposed in the API response so it can be returned in the subsequent turn.
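
For the "expose it in the API response" option, one possible shape (purely illustrative; thought_signature is not an existing Ollama API field) would be an opaque token attached to each tool call, which the client replays verbatim on the next turn:

```python
# Hypothetical API shape -- "thought_signature" is NOT a current Ollama field.
# The server would return it alongside each tool call in turn 1, and the
# client would echo it back unchanged in turn 2 so the upstream Gemini 3
# backend can validate the functionCall part.
assistant_turn_with_sig = {
    "role": "assistant",
    "content": "",
    "tool_calls": [{
        "function": {"name": "get_weather", "arguments": {"city": "Paris"}},
        "thought_signature": "<opaque-token-from-turn-1>",  # placeholder value
    }],
}
print(assistant_turn_with_sig["tool_calls"][0]["thought_signature"])
```

The alternative is for Ollama Cloud to persist the signature server-side, keyed to the conversation, so existing clients need no changes.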


Reference: github-starred/ollama#35313