[PR #15683] server: preserve thinking in /api/generate and populate parameter_size in /api/tags for safetensors #61957

Open
opened 2026-04-29 16:55:42 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/15683
Author: @serenposh
Created: 4/18/2026
Status: 🔄 Open

Base: main ← Head: claude/exciting-gould-08a274


📝 Commits (1)

  • 5ef5c1c server: preserve thinking in /api/generate and enrich /api/tags for safetensors

📊 Changes

1 file changed (+40 additions, -16 deletions)


📝 server/routes.go (+40 -16)

📄 Description

Summary

Fixes two independent bugs surfaced on gemma4:26b-mxfp8 / gemma4:26b-nvfp4.

1. /api/generate silently drops thinking for models that think by default (#15681)

GenerateHandler initialized the builtin parser before the capability-gated default for req.Think was applied. Parsers that gate thinking output on the value passed to Init — notably Gemma4Parser, which has an explicit // When thinking is disabled, silently discard channel content branch — therefore saw thinkValue == nil and dropped the reasoning, even though the model was emitting it (visible via a large eval_count but short response).

The fix moves the capability check + default above the parser Init call, so the parser sees the resolved req.Think value. This matches ChatHandler, which already performs the two steps in the correct order — and is why /api/chat / /v1/chat/completions return reasoning correctly on the same model.

Callers that explicitly set think: false are unaffected — the default only kicks in when req.Think == nil.
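A minimal, self-contained sketch of the ordering issue (the types here are hypothetical stand-ins, not the real GenerateHandler/Gemma4Parser code): a parser initialized before the capability-gated default resolves sees a nil think value and discards the reasoning, while initializing it after the default preserves it.

```go
package main

import "fmt"

// parser mimics a builtin parser that, like Gemma4Parser, silently
// discards reasoning content when thinking is nil/false.
type parser struct{ think *bool }

func (p *parser) Init(think *bool) { p.think = think }

func (p *parser) Parse(reasoning, text string) (string, string) {
	if p.think == nil || !*p.think {
		// When thinking is disabled, silently discard channel content.
		return "", text
	}
	return reasoning, text
}

// applyThinkDefault mimics the capability-gated default: a model that
// thinks by default gets think=true only when the caller left it unset.
func applyThinkDefault(reqThink *bool, thinksByDefault bool) *bool {
	if reqThink == nil && thinksByDefault {
		t := true
		return &t
	}
	return reqThink
}

func main() {
	var reqThink *bool // caller did not set "think"

	// Buggy order: Init before the default is applied → reasoning dropped.
	buggy := &parser{}
	buggy.Init(reqThink)
	thinking, _ := buggy.Parse("chain of thought", "answer")
	fmt.Printf("buggy: %q\n", thinking) // buggy: ""

	// Fixed order: resolve the default first, then Init.
	fixed := &parser{}
	fixed.Init(applyThinkDefault(reqThink, true))
	thinking, _ = fixed.Parse("chain of thought", "answer")
	fmt.Printf("fixed: %q\n", thinking) // fixed: "chain of thought"
}
```

Note that applyThinkDefault returns the caller's value unchanged when it is non-nil, which is why an explicit think: false still opts out.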

2. /api/tags returns empty parameter_size for safetensors models (#15679)

ListHandler populated Details purely from the manifest's ConfigV2, whose ModelType / FileType are not written for safetensors models during create. /api/show already works around this by reading the safetensors headers via xserver.GetSafetensorsLLMInfo / GetSafetensorsDtype; mirror the same enrichment in ListHandler so the two endpoints stay consistent.

Note: the separate observation in #15679 that the reported count (e.g. 8.7B) is the active-parameter count for MoE variants rather than a "26B-A4B"-style total is a deeper metadata question — out of scope for this PR. This change at minimum stops /api/tags from returning an empty string and makes it match the existing /api/show value.
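The fallback shape can be sketched as follows (hypothetical, simplified code — the real change calls xserver.GetSafetensorsLLMInfo / GetSafetensorsDtype inside ListHandler; the formatting helper here only illustrates how a raw count becomes a "8.7B"-style string):

```go
package main

import "fmt"

// formatParams renders a raw parameter count in the short human-readable
// form used by parameter_size (e.g. 8_700_000_000 → "8.7B").
func formatParams(n uint64) string {
	switch {
	case n >= 1e12:
		return fmt.Sprintf("%.1fT", float64(n)/1e12)
	case n >= 1e9:
		return fmt.Sprintf("%.1fB", float64(n)/1e9)
	case n >= 1e6:
		return fmt.Sprintf("%.1fM", float64(n)/1e6)
	default:
		return fmt.Sprintf("%d", n)
	}
}

// parameterSize prefers the manifest config value (present for GGUF
// models) and falls back to a count read from safetensors headers only
// when the manifest value is empty.
func parameterSize(fromManifest string, safetensorsCount uint64) string {
	if fromManifest != "" {
		return fromManifest // GGUF path: manifest config still used
	}
	return formatParams(safetensorsCount)
}

func main() {
	fmt.Println(parameterSize("", 8_700_000_000)) // safetensors fallback: 8.7B
	fmt.Println(parameterSize("26.0B", 0))        // manifest value wins: 26.0B
}
```

Because the manifest value takes precedence, GGUF-only installs see no behavioural change.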

Verified locally

  • go vet ./server/ — clean
  • go build ./server/ — clean
  • go test ./server/ — all pass (2.5s)
  • go test ./model/parsers/ — all pass

Needs manual verification by reviewer

The author does not have a machine with gemma4:26b-mxfp8 or a safetensors model available, so these runtime checks weren't performed:

  • curl /api/generate -d '{"model":"gemma4:26b-mxfp8","prompt":"Moin","stream":false}' → response now includes populated thinking field.
  • Same request with "think": false → thinking empty (no regression for explicit opt-out).
  • curl /api/generate -d '{"model":"llama3.2","prompt":"hi"}' (non-thinking model) → unchanged behaviour.
  • curl /api/tags on a machine with a safetensors model → details.parameter_size is populated (matches /api/show).
  • curl /api/tags on a machine with GGUF-only models → unchanged (manifest config still used).

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-29 16:55:42 -05:00

Reference: github-starred/ollama#61957