[PR #14182] openai: normalize empty content on assistant messages with tool_calls #61252

Open
opened 2026-04-29 16:20:11 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/14182
Author: @lbijeau
Created: 2/10/2026
Status: 🔄 Open

Base: main ← Head: fix/v1-normalize-empty-content


📝 Commits (1)

  • cb8cbc8 openai: normalize empty content on assistant messages with tool_calls

📊 Changes

2 files changed (+151 additions, -1 deletions)


📝 openai/openai.go (+11 -1)
📝 openai/openai_test.go (+140 -0)

📄 Description

Summary

  • When /v1/chat/completions receives assistant messages with content: "" (empty string) alongside tool_calls, normalize it to omit content — matching the existing behavior for content: null
  • This is a bug fix, not a new feature — it makes the two equivalent representations of "no text content" behave identically

Motivation

The OpenAI spec treats content as nullable for assistant messages with tool_calls. Some clients (e.g. the Vercel AI SDK) send content: "" instead of null. The current code passes "" through to api.Message{Content: ""}, while null takes a different path that omits Content entirely.

This causes template rendering differences. For example, with qwen3-coder (which uses RENDERER qwen3-coder / PARSER qwen3-coder), an empty string in prior assistant messages causes the model to switch from structured tool_calls to text-based markup (<function=name>) on subsequent turns.

Verified via curl:

  • content: "" in prior assistant messages → finish_reason: "stop", text markup in content
  • content: "Let me check." in prior assistant messages → finish_reason: "tool_calls", clean structured array
  • content: null in prior assistant messages → finish_reason: "tool_calls", clean structured array

The fix normalizes "" to match the null path when tool calls are present, so templates render consistently.

Also filed upstream: https://github.com/vercel/ai/issues/12389 / https://github.com/vercel/ai/pull/12390

Test plan

  • Added test: content: "" with tool_calls produces message without content (matching null behavior)
  • Added test: content: null with tool_calls still works correctly
  • Added test: non-empty content with tool_calls is preserved
  • All existing tests in openai/ package pass (ran via go test ./openai/ -v)

Fixes #14181


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-29 16:20:11 -05:00
Reference: github-starred/ollama#61252