[PR #13238] [MERGED] routes: fix missing logprobs in tool calls #19398

opened 2026-04-16 07:06:09 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/13238
Author: @Eason023
Created: 11/25/2025
Status: Merged
Merged: 12/11/2025
Merged by: @ParthSareen

Base: main ← Head: fix/calltool-logprobs


📝 Commits (1)

  • bc68f3d server: fix logprobs in tool calls with refactor and tests

📊 Changes

2 files changed (+99 additions, -2 deletions)


📝 server/routes.go (+10 -2)
📝 server/routes_generate_test.go (+89 -0)

📄 Description

Summary

Fixes #13092.

This PR ensures that logprobs are correctly returned when the model generates tool calls.

Problem

Previously, when toolParser or builtinParser buffered content to parse JSON tool calls, the intermediate chunks containing logprobs were dropped because the content was temporarily empty.

Solution

Modified server/routes.go (ChatHandler) to explicitly forward chunks containing only logprobs when the parser is active and buffering content.

Test

Tested with llama3.1 (toolParser) on Windows:

$body = @{
    model = "llama3.1"
    messages = @(@{ role = "user"; content = "What is the weather in Taipei?" })
    tools = @(@{
        type = "function"
        function = @{
            name = "get_current_weather"
            description = "Get the current weather"
            parameters = @{
                type = "object"
                properties = @{ location = @{ type = "string" } }
                required = @("location")
            }
        }
    })
    options = @{ temperature = 0 }
    logprobs = $true
    stream = $false
} | ConvertTo-Json -Depth 10

$response = Invoke-RestMethod -Uri "http://localhost:11434/api/chat" -Method Post -Body $body -ContentType "application/json"
Write-Host "------------------------------------------------"
Write-Host "Model generated Token num (Eval Count): " $response.eval_count
Write-Host "Received Logprobs num: " $response.logprobs.Count
Write-Host "------------------------------------------------"

Before:

------------------------------------------------
Model generated Token num (Eval Count):  19
Received Logprobs num: 1
------------------------------------------------

After:

------------------------------------------------
Model generated Token num (Eval Count):  19
Received Logprobs num: 18
------------------------------------------------

Note to assignees: I noticed this issue was assigned to @ParthSareen and @jmorganca, but since I encountered this problem and found a fix, I went ahead and submitted this PR. Hope this helps!


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-16 07:06:09 -05:00

Reference: github-starred/ollama#19398