[PR #15684] model/models/nomicbert: fix tokenizer for t5 models (crash + embedding quality) #25750

Open
opened 2026-04-19 18:25:41 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/15684
Author: @mverrilli
Created: 4/18/2026
Status: 🔄 Open

Base: main ← Head: fix/issue-15174-embed-assert-validate-length


📝 Commits (4)

  • 8679b06 model/models/nomicbert: use sentencepiece tokenizer for t5 models
  • 0817013 tokenizer: use Viterbi algorithm for SentencePiece and support add_space_prefix
  • 00bd3b8 tokenizer: fix SentencePiece short-circuit and small-vocab panic
  • bc30f7f model/models/bert: support t5 (SentencePiece) tokenizer for bge-m3

📊 Changes

9 files changed (+756 additions, -133 deletions)


📝 fs/ggml/ggml.go (+1 -0)
📝 model/models/bert/embed.go (+3 -0)
➕ model/models/bert/embed_tokenizer_test.go (+53 -0)
📝 model/models/nomicbert/model.go (+33 -23)
📝 tokenizer/sentencepiece.go (+74 -110)
➕ tokenizer/sentencepiece_viterbi_test.go (+569 -0)
📝 tokenizer/vocabulary.go (+1 -0)
📝 tokenizer/wordpiece.go (+3 -0)
📝 tokenizer/wordpiece_test.go (+19 -0)

📄 Description

Problem

nomic-embed-text-v2-moe (and any nomic-bert-moe model using tokenizer.ggml.model = "t5") had two separate issues:

Issue 1 — Crash: The runner aborted with GGML_ASSERT(i01 >= 0 && i01 < ne01) failed in ggml_get_rows on any /api/embed request containing a character the tokenizer couldn't resolve (e.g. . or ,). Reported in #15174; reproducible on Windows + Vulkan + CUDA and via OLLAMA_NEW_ENGINE=1 on Linux.

Issue 2 — Embedding quality: Even after crash prevention, embeddings produced by the native engine diverged severely from llama.cpp (cosine similarity as low as 0.357). Two tokenizer bugs caused this: the greedy segmentation algorithm was not globally optimal, and the add_space_prefix GGUF flag was not being applied.

Root Causes

Crash root cause: model/models/nomicbert/model.go unconditionally used WordPiece for all nomic-bert models, but nomic-embed-text-v2-moe's GGUF sets tokenizer.ggml.model = "t5" (SentencePiece). WordPiece fell back to [UNK] for unrecognized words; [UNK] is absent from this vocab, so vocab.Encode("[UNK]") returned -1 and that invalid ID flowed into TokenEmbedding.Weight.Rows, tripping the assertion.
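A minimal sketch of the failure mode and the defensive check described above. The `vocab` type and `encodeChecked` helper are hypothetical names for illustration, not the actual ollama types:

```go
package main

import "fmt"

// Hypothetical minimal vocabulary; Encode mirrors the failure mode:
// a lookup miss returns -1, which is not a valid embedding row index.
type vocab struct{ tokens map[string]int32 }

func (v vocab) Encode(s string) int32 {
	if id, ok := v.tokens[s]; ok {
		return id
	}
	return -1 // sentinel: token not in vocab
}

// encodeChecked is the defensive variant: surface an error instead of
// letting -1 flow into the embedding lookup and trip the GGML assertion.
func encodeChecked(v vocab, s string) (int32, error) {
	id := v.Encode(s)
	if id < 0 {
		return 0, fmt.Errorf("token %q not in vocabulary and no [UNK] fallback", s)
	}
	return id, nil
}

func main() {
	v := vocab{tokens: map[string]int32{"hello": 7}}
	if _, err := encodeChecked(v, "[UNK]"); err != nil {
		fmt.Println("error:", err)
	}
	id, _ := encodeChecked(v, "hello")
	fmt.Println("hello ->", id)
}
```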

Embedding quality root cause 1 — wrong algorithm: The SentencePiece implementation used a greedy bottom-up merge that made locally optimal pair decisions without regard for the full segmentation. For example, "▁world" (score -11.35) gets split into suboptimal pieces [▁w, or, ld] because "or" (-9.17) scores better as an isolated pair. The correct approach (used by llama.cpp / sentencepiece library) is the Viterbi DP algorithm, which finds the globally optimal (highest total log-probability) segmentation.
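The Viterbi DP can be sketched as follows. The piece scores below are toy log-probabilities loosely modeled on the "▁world" example (only the -11.35 and -9.17 values come from the description); the function itself shows the standard recurrence best[i] = max over j < i of best[j] + score(word[j:i]):

```go
package main

import (
	"fmt"
	"math"
)

// Toy piece scores (log-probabilities); hypothetical values for illustration.
var scores = map[string]float64{
	"▁world": -11.35,
	"▁w":     -9.0,
	"or":     -9.17,
	"ld":     -10.0,
	"▁":      -4.0,
	"w":      -12.0, "o": -12.0, "r": -12.0, "l": -12.0, "d": -12.0,
}

// viterbiSegment finds the segmentation of word into vocabulary pieces with
// the highest total log-probability, via dynamic programming over rune
// boundaries with back-pointers to recover the optimal path.
func viterbiSegment(word string) []string {
	runes := []rune(word)
	n := len(runes)
	best := make([]float64, n+1) // best[i]: best score for runes[:i]
	back := make([]int, n+1)     // back[i]: start of the piece ending at i
	for i := 1; i <= n; i++ {
		best[i] = math.Inf(-1)
		for j := 0; j < i; j++ {
			s, ok := scores[string(runes[j:i])]
			if !ok {
				continue
			}
			if cand := best[j] + s; cand > best[i] {
				best[i], back[i] = cand, j
			}
		}
	}
	var pieces []string
	for i := n; i > 0; i = back[i] {
		pieces = append([]string{string(runes[back[i]:i])}, pieces...)
	}
	return pieces
}

func main() {
	fmt.Println(viterbiSegment("▁world")) // [▁world]
}
```

Here the whole-word path (-11.35) beats [▁w, or, ld] (-9.0 + -9.17 + -10.0 = -28.17), whereas a greedy pairwise merge can get trapped in the latter because "or" looks locally attractive.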

Embedding quality root cause 2 — add_space_prefix ignored: The GGUF sets tokenizer.ggml.add_space_prefix = true, meaning a leading space must be prepended before tokenizing so the first word gets the ▁ prefix matching SentencePiece's word-boundary semantics. The Go code never read or applied this flag, so "hello world" was tokenized without the leading ▁, producing completely different IDs from llama.cpp.
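A sketch of the pre-tokenization step in question (the `normalize` name is hypothetical; the real code applies this inside `Encode()`):

```go
package main

import (
	"fmt"
	"strings"
)

// normalize sketches SentencePiece pre-tokenization: when addSpacePrefix is
// set, a leading space is prepended so the first word also carries the ▁
// word-boundary marker once spaces are rewritten.
func normalize(text string, addSpacePrefix bool) string {
	if addSpacePrefix && !strings.HasPrefix(text, " ") {
		text = " " + text
	}
	return strings.ReplaceAll(text, " ", "▁")
}

func main() {
	fmt.Println(normalize("hello world", true))  // ▁hello▁world
	fmt.Println(normalize("hello world", false)) // hello▁world
}
```

Without the flag applied, the first word lacks its ▁ prefix and maps to entirely different vocabulary IDs.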

Changes

  1. model/models/nomicbert/model.go — select tokenizer based on tokenizer.ggml.model: bert → WordPiece, t5 → SentencePiece, otherwise ErrUnsupportedTokenizer. Read tokenizer.ggml.add_space_prefix from GGUF into vocab.
  2. tokenizer/vocabulary.go — add AddSpacePrefix bool field to carry the GGUF add_space_prefix flag.
  3. tokenizer/sentencepiece.go — replace greedy merge with Viterbi DP algorithm for globally optimal segmentation; apply AddSpacePrefix at the start of Encode().
  4. tokenizer/wordpiece.go — defense in depth: return an error instead of emitting -1 when [UNK] is absent and a word can't be tokenized.
  5. fs/ggml/ggml.go — add nomic-bert-moe to OllamaEngineRequired() so the model routes through the Go-native runner that has the tokenizer fix.
  6. tokenizer/wordpiece_test.go — regression test: TestWordPieceEncodeReturnsErrorWhenUnkMissing.

Test plan

  • go test ./tokenizer/... ./model/... ./fs/ggml/... ./runner/... — all 1274 tests pass.
  • go vet on changed packages — clean.
  • Crash fix verified: inputs with . / , / hello world this is a test. no longer abort the runner.
  • Embedding quality verified: cosine similarity vs llama.cpp is ≥0.9999 for all tested inputs (was as low as 0.357 before the Viterbi + add_space_prefix fixes).
    • ASCII text: cos-sim = 1.000000
    • Unicode text ("café résumé naïve"): cos-sim = 0.999933 (float32 accumulation noise, not a bug)
    • Japanese text: cos-sim = 1.000000
  • nomic-embed-text (v1, nomic-bert arch, WordPiece) and all-minilm (bert arch) still work unchanged.
  • Multilingual smoke test: "hola mundo! привет мир! 你好世界! 🌟" embeds correctly.

Fixes #15174

Also fixes: bge-m3 / #15582

bge-m3 has general.architecture = bert and tokenizer.ggml.model = t5. With bert in OllamaEngineRequired(), it routes to the Go native engine, but bert/embed.go returned ErrUnsupportedTokenizer for any non-bert tokenizer model — so bge-m3 failed to load entirely.

Added t5 → SentencePiece routing to bert/embed.go (same pattern as nomicbert/model.go) and read tokenizer.ggml.add_space_prefix from GGUF. bge-m3 now loads and tokenizes correctly in the native engine, eliminating the code path that produced NaN embeddings.

Fixes #15582


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.


Reference: github-starred/ollama#25750