[PR #14140] Reduce per-call tokenizer overhead by 3-5× #40406

opened 2026-04-23 01:18:23 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/14140
Author: @BigBIueWhale
Created: 2/7/2026
Status: 🔄 Open

Base: main ← Head: main


📝 Commits (10+)

  • 15c396c Reduce per-call tokenizer overhead by 3-5x
  • 8df9722 Merge tag 'v0.17.4' of https://github.com/ollama/ollama
  • fbae697 Fix four Qwen 3.5 27B bugs: penalty sampling, tool call format, unclosed </think>, missing generation prompt
  • ab23495 Use ring buffer for penalty sampler's recent token window
  • 7b9d86f Strip historical thinking traces across rounds via lastQueryIndex
  • bebddcc Fix head_count_kv for third-party GGUFs with uniform scalar values
  • 86bb9c4 Support ssm_dt.bias tensor name from third-party GGUFs
  • 9ec17fc Fix three third-party GGUF compatibility bugs in Qwen 3.5 / Qwen3Next
  • 19622f9 Guard repeatPenalty <= 0 in NewSampler to prevent division by zero
  • d474d36 Replace SetInplace with balanced concat tree in deltaNetChunked

📊 Changes

47 files changed (+6743 additions, -271 deletions)


📝 api/types.go (+7 -6)
📝 api/types_test.go (+2 -2)
📝 convert/convert_qwen3next.go (+2 -6)
📝 convert/convert_qwen3next_test.go (+5 -2)
➕ internal/jsonutil/json.go (+19 -0)
📝 internal/orderedmap/orderedmap.go (+29 -1)
📝 llama/llama.go (+57 -1)
📝 llama/sampling_ext.cpp (+470 -0)
📝 llama/sampling_ext.h (+78 -0)
📝 llm/server.go (+22 -3)
📝 model/models/qwen3next/deltanet.go (+34 -9)
📝 model/models/qwen3next/model.go (+48 -14)
📝 model/parsers/parsers.go (+1 -1)
➕ model/parsers/qwen35.go (+238 -0)
➕ model/parsers/qwen35_test.go (+382 -0)
📝 model/parsers/qwen3coder.go (+131 -12)
📝 model/parsers/qwen3coder_test.go (+137 -0)
📝 model/renderers/cogito_test.go (+8 -8)
📝 model/renderers/deepseek3_test.go (+10 -10)
📝 model/renderers/glm46_test.go (+2 -2)

...and 27 more files

📄 Description

| Metric                   | v0.15.5 | This PR |
|--------------------------|--------:|--------:|
| Time to first token      | 0.56 s  | 0.13 s  |
| CPU samples (60 s pprof) | 920 ms  | 120 ms  |
| GC time                  | 580 ms  | 10 ms   |
| Avg CPU %                | 164%    | 65%     |

Profiled with Devstral 2 Small (1,000 special tokens), 151 messages, 65 KB payload, warm cache.
pprof profiles: cpu-v0.15.5.prof, cpu-fork.prof
Reproduction scripts: measure_cpu.py, profile_pprof.py
Full breakdown: v0.15.5_profile.md (all in https://github.com/BigBIueWhale/ollama_perf_bug_report)

Changes

  1. strings.Contains pre-check on special tokens — skips ~997 of 1,000 special tokens that never appear in the input (first sketch after this list)
  2. slices.Replace instead of append(s[:i], append(mid, s[i+1:]...)...) — the single hottest line in v0.15.5, responsible for 63% of GC pressure
  3. Stack buffer in Merge() — avoids a heap allocation per merge-rank lookup (second sketch after this list)
  4. Binary-search truncation in chatPrompt() — O(log N) tokenize calls when truncation is needed (third sketch after this list)
  5. Deduplicated the special-token loop from bytepairencoding.go and sentencepiece.go into shared tokenizer/special.go
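
A minimal sketch of how changes 1 and 2 compose in the deduplicated special-token loop (change 5). The fragment type and the splitSpecial signature are illustrative assumptions, not the PR's actual code:

```go
package tokenizer

import (
	"slices"
	"strings"
)

// fragment is an illustrative stand-in for the tokenizer's working type: a
// run of text that is either still raw or already matched to a special token.
type fragment struct {
	text    string
	special bool
}

// splitSpecial splits the input on every special token that actually occurs
// in it. Hypothetical name and signature for the shared tokenizer/special.go.
func splitSpecial(input string, specials []string) []fragment {
	frags := []fragment{{text: input}}
	for _, sp := range specials {
		// Pre-check (change 1): one strings.Contains over the raw input skips
		// ~997 of 1,000 special tokens before any per-fragment work happens.
		if !strings.Contains(input, sp) {
			continue
		}
		for i := 0; i < len(frags); i++ {
			if frags[i].special {
				continue
			}
			before, after, found := strings.Cut(frags[i].text, sp)
			if !found {
				continue
			}
			mid := make([]fragment, 0, 3)
			if before != "" {
				mid = append(mid, fragment{text: before})
			}
			mid = append(mid, fragment{text: sp, special: true})
			if after != "" {
				mid = append(mid, fragment{text: after})
			}
			// slices.Replace (change 2): one splice with at most one grow,
			// instead of the nested append(s[:i], append(mid, s[i+1:]...)...)
			// that caused the bulk of GC pressure in v0.15.5.
			frags = slices.Replace(frags, i, i+1, mid...)
		}
	}
	return frags
}
```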
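
For change 3, a generic sketch of the stack-buffer pattern; the "left right" key format, the 64-byte size, and mergeRank itself are assumptions about what Merge() looks up, not the PR's code:

```go
// mergeRank builds the pair-lookup key without heap allocation on the common
// path: keys that fit in the fixed-size buffer can stay on the stack, and the
// string(key) conversion inside a map index is a well-known Go compiler
// optimization that avoids allocating the string.
func mergeRank(ranks map[string]int, left, right string) (int, bool) {
	var buf [64]byte // assumed size; typical BPE pairs are far shorter
	key := append(buf[:0], left...)
	key = append(key, ' ')
	key = append(key, right...)
	rank, ok := ranks[string(key)] // oversized pairs fall back to a heap append
	return rank, ok
}
```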
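
For change 4, a sketch of binary-search truncation under assumed names; the real chatPrompt presumably also pins the system prompt and the most recent message, which this ignores:

```go
package server

import "sort"

// Message is an illustrative stand-in for ollama's api.Message.
type Message struct {
	Role, Content string
}

// truncateToFit drops the fewest leading messages needed for the rendered
// prompt to fit within numCtx tokens. countTokens is an assumed helper that
// renders and tokenizes a message slice; sort.Search calls it O(log N) times
// instead of once per candidate truncation point.
func truncateToFit(msgs []Message, numCtx int, countTokens func([]Message) int) []Message {
	// The predicate is monotonic: dropping more leading messages never grows
	// the prompt, so sort.Search finds the smallest sufficient drop count.
	drop := sort.Search(len(msgs), func(d int) bool {
		return countTokens(msgs[d:]) <= numCtx
	})
	return msgs[drop:]
}
```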

All existing tests pass. New tests for splitSpecialTokens and truncation call counting. go vet clean.


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.
