[GH-ISSUE #15497] OpenAI-compatible streaming: Function.Index still 0 for models without a registered parser (follow-up to #15457 / #15467) #56417

Open
opened 2026-04-29 10:47:30 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @CPIDLE on GitHub (Apr 11, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15497

Originally assigned to: @drifkin on GitHub.

Thanks for the fast turnaround on #15467! It fixes the 8 listed parsers, but the underlying issue still affects models that fall through the legacy tool-parsing path.

## What PR #15467 fixed

PR #15467 added `Function.Index = p.callIndex / p.callIndex++` to `cogito`, `deepseek3`, `functiongemma`, `gemma4`, `lfm2`, `ministral`, `olmo3`, and `qwen3vl`. `qwen3-coder` and `qwen3` already got this in #14484 (2026-02-27).

## What it didn't fix

Not every model uses a dedicated parser. Looking at `model/parsers/parsers.go:ParserForName`, models without an entry in that switch fall through to the legacy tool-parsing path at `server/routes.go:2381` (the branch guarded by `len(req.Tools) > 0 && (builtinParser == nil || !builtinParser.HasToolSupport())`). That path never sets `Function.Index`, so every tool call in a streaming response emerges with `index: 0` via `ToToolCalls` in `openai/openai.go`.

## Reproduction (qwen2.5-coder:7b, which has no registered parser)

```bash
curl -s http://localhost:11434/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "qwen2.5-coder:7b",
    "stream": true,
    "messages": [
      {"role": "system", "content": "Use the provided tools."},
      {"role": "user", "content": "Create hello.py with print(\"hello\") and world.py with print(\"world\")."}
    ],
    "tools": [{
      "type": "function",
      "function": {
        "name": "file_write",
        "parameters": {
          "type": "object",
          "properties": {
            "filePath": {"type":"string"},
            "content":  {"type":"string"}
          },
          "required": ["filePath","content"]
        }
      }
    }]
  }'
```

Both `tool_calls` chunks come back with `"index": 0`.

## Suggested fix

Either:

(a) Set `Function.Index` in the legacy fallback tool-parsing path in `server/routes.go` (probably cleanest), or

(b) Post-normalize indices in `openai/openai.go:ToToolCalls` / `toChunk` when multiple tool calls arrive without distinct indices — essentially the "post-parse hook" already suggested in #15467's description.

I'd lean toward (b) as a safety net: it becomes a single source of truth regardless of which parser path fed it, and protects against future parsers forgetting the same boilerplate.

## Side question

Does the current `qwen3-coder:<tag>` Modelfile on ollama.com actually declare `PARSER qwen3-coder`? I've been seeing `index: 0` with `qwen3-coder:30b` on v0.20.4 despite `Qwen3CoderParser` setting `callIndex` correctly since #14484 — which would suggest the Modelfile isn't routing to that parser and is hitting the same legacy path. Happy to re-verify on v0.20.6 once released.

## Environment

- Ollama 0.20.2 / 0.20.4 (pre-#15467)
- Models tested: `qwen2.5-coder:7b`, `qwen3-coder:30b`
@PureBlissAK commented on GitHub (Apr 18, 2026):

## 🤖 Automated Triage & Analysis Report

**Issue**: #15497
**Analyzed**: 2026-04-18T18:21:26.588080

### Analysis

- **Type**: unknown
- **Severity**: medium
- **Components**: unknown

### Implementation Plan

- **Effort**: medium
- **Steps**:

*This issue has been triaged and marked for implementation.*

@CPIDLE commented on GitHub (Apr 19, 2026):

Adding a concrete streaming-chunk comparison to make the impact easier to see.

## Actual (streamed from `qwen2.5-coder:7b` on v0.20.6)

Two consecutive SSE chunks from the same stream — both carry `index: 0`:

```json
// chunk N
{
  "choices": [{
    "index": 0,
    "delta": {
      "tool_calls": [{
        "index": 0,
        "id": "call_abc123",
        "type": "function",
        "function": {
          "name": "file_write",
          "arguments": "{\"filePath\":\"hello.py\",\"content\":\"print(\\\"hello\\\")\"}"
        }
      }]
    }
  }]
}

// chunk N+1  (note: same index: 0)
{
  "choices": [{
    "index": 0,
    "delta": {
      "tool_calls": [{
        "index": 0,
        "id": "call_def456",
        "type": "function",
        "function": {
          "name": "file_write",
          "arguments": "{\"filePath\":\"world.py\",\"content\":\"print(\\\"world\\\")\"}"
        }
      }]
    }
  }]
}
```

## Expected (OpenAI spec)

Per the [OpenAI streaming spec](https://platform.openai.com/docs/api-reference/chat/streaming), `tool_calls[].index` must be unique per call so clients can assemble them positionally:

```json
// chunk N
{ "choices": [{ "delta": { "tool_calls": [{ "index": 0, "id": "call_abc123", ... }] } }] }

// chunk N+1
{ "choices": [{ "delta": { "tool_calls": [{ "index": 1, "id": "call_def456", ... }] } }] }
```

## Why it breaks clients

`@ai-sdk/openai-compatible` (and any spec-compliant OpenAI client) uses `index` as the array key when reconstructing the tool-call list from deltas. With two `index: 0` entries:

- **Best case:** the second call overwrites/merges into the first → one malformed tool call.
- **Worst case:** the first call is marked `hasFinished=true` and the second is silently dropped.

End-user symptom: the agent says "I'll create both files" but only one (or zero) actually gets created. Single-tool calls always work; multi-tool turns are the failure mode.

## Verification

This is easy to confirm without a client: pipe the curl from the issue body through `jq` and check whether any streamed chunk shows `index: 1`:

```bash
curl -sN http://localhost:11434/v1/chat/completions -H 'Content-Type: application/json' -d @repro.json \
  | grep '^data: ' | sed 's/^data: //' | jq -c '.choices[0].delta.tool_calls[]? | {index, id, name: .function.name}'
```

Expected: `{"index":0,...}` then `{"index":1,...}`.
Actual on the current path: `{"index":0,...}` then `{"index":0,...}`.

The distinct `id` values prove Ollama knows these are separate calls — only the `index` field is lost on the legacy path.

Reference: github-starred/ollama#56417