[PR #12030] Add /api/tokenize and /api/detokenize endpoints for model-aligned tokenization #44940

Open
opened 2026-04-25 00:37:15 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/12030
Author: @icedmoca
Created: 8/22/2025
Status: 🔄 Open

Base: main ← Head: main


📝 Commits (10+)

  • aa855f2 Add /api/tokenize and /api/detokenize HTTP endpoints with full model support
  • 552f0b5 server: expose /api/tokenize and /api/detokenize (text-only) with vocab-only cache + fallback
  • 60fc2da Delete PR_DESCRIPTION.md
  • a7c8853 Implement vocab-only tokenization using llama.cpp bindings
  • 1c6b273 Update PR description: vocab-only implementation completed
  • 0ce99df Add test hooks and cache reset plumbing for tokenizerloader
  • 48806d3 Tok/Detok: Vocab-only endpoints with cache + Fallback
  • a059516 Merge branch 'main' of https://github.com/icedmoca/ollama
  • e243ae3 Merge branch 'ollama:main' into main
  • 96e2944 Revise README for ollama-vocab-tokenizer project

📊 Changes

13 files changed (+1293 additions, -6 deletions)


📝 README.md (+209 -6)
api/examples/tokenize/bench.md (+59 -0)
llama.png (+0 -0)
📝 server/routes.go (+145 -0)
server/routes_tokenize_handler_test.go (+125 -0)
server/tokenizerloader/loader.go (+324 -0)
server/tokenizerloader/loader_test.go (+118 -0)
server/tokenizerloader/loader_vocabbasic_test.go (+130 -0)
server/tokenizerloader/testonly_inject.go (+12 -0)
server/tokenizerloader/testonly_reset.go (+11 -0)
server/tokenizerloader/testutil.go (+27 -0)
upstream-links/tokenize-js.md (+68 -0)
upstream-links/tokenize-python.md (+65 -0)

📄 Description

This PR adds two new HTTP API endpoints that expose model-aligned tokenization and detokenization, making it possible to interact with a model's tokenizer directly over the API.

New Endpoints

POST /api/tokenize → Converts input text into model-specific token IDs

POST /api/detokenize → Converts token IDs back into text
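A minimal Python sketch of calling the two endpoints. The request fields (`model`, `text`, `tokens`) and response fields (`tokens`, `text`) are assumed from this description; the exact schema in the final PR may differ.

```python
import json
import urllib.request

OLLAMA = "http://localhost:11434"  # default Ollama server address

def tokenize_payload(model: str, text: str) -> dict:
    """Request body for POST /api/tokenize (field names assumed)."""
    return {"model": model, "text": text}

def detokenize_payload(model: str, tokens: list) -> dict:
    """Request body for POST /api/detokenize (field names assumed)."""
    return {"model": model, "tokens": tokens}

def post(path: str, payload: dict) -> dict:
    """POST a JSON payload to the local Ollama server and decode the reply."""
    req = urllib.request.Request(
        OLLAMA + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Round trip (requires a running server with the model installed):
# ids = post("/api/tokenize", tokenize_payload("mistral:latest", "fam"))["tokens"]
# text = post("/api/detokenize", detokenize_payload("mistral:latest", ids))["text"]
```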

Why This Is Useful

  • Accurate token budgeting: Enables safe prompt chunking and reliable token counts

  • Logit-level control: Supports workflows that rely on raw token IDs

  • Debugging & analysis: Makes tokenizer behavior visible and testable via API

  • Future-ready: Includes media_type field (default "text") for multimodal support
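To illustrate the token-budgeting point, here is a sketch of greedy prompt chunking driven by a token-count callback. The `count_tokens` callable is hypothetical; in practice it would POST to `/api/tokenize` and count the returned IDs.

```python
from typing import Callable, List

def chunk_by_budget(text: str, max_tokens: int,
                    count_tokens: Callable[[str], int]) -> List[str]:
    """Greedily pack whitespace-separated words into chunks whose token
    count, per the model's own tokenizer, stays within max_tokens."""
    chunks: List[str] = []
    current: List[str] = []
    for word in text.split():
        candidate = " ".join(current + [word])
        if current and count_tokens(candidate) > max_tokens:
            # Adding this word would blow the budget: seal the chunk.
            chunks.append(" ".join(current))
            current = [word]
        else:
            current.append(word)
    if current:
        chunks.append(" ".join(current))
    return chunks
```

Because the count comes from the model's own tokenizer rather than a heuristic like `len(text) / 4`, the chunks stay within the real context budget.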

Features

Works with any installed model (mistral, tinyllama, etc.)

Supports keep_alive to reduce reload latency

Provides timing metrics (total_duration, load_duration)

Consistent error handling with other API routes
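A sketch of a request body exercising the optional fields named above (`keep_alive`, `media_type`). The default values here are assumptions for illustration, not confirmed API defaults.

```python
def tokenize_request(model: str, text: str,
                     keep_alive: str = "5m",
                     media_type: str = "text") -> dict:
    """Request body for POST /api/tokenize with the optional fields
    described above: keep_alive keeps the model resident between calls,
    media_type defaults to "text" pending multimodal support."""
    return {
        "model": model,
        "text": text,
        "keep_alive": keep_alive,
        "media_type": media_type,
    }

# Per the description, the response carries timing metrics alongside
# the token IDs, e.g.:
# {"tokens": [...], "total_duration": ..., "load_duration": ...}
```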

Validation

Round-trip tested with mistral:latest and tinyllama:latest

Verified model-specific vocabulary behavior (e.g., "fam" → [2050] → " fam")

Integrated cleanly with existing scheduleRunner infrastructure
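The round-trip property validated above can be expressed as a small check. The `tokenize` and `detokenize` arguments are injected callables standing in for calls to the two endpoints; the normalization accounts for decode artifacts like the leading space in `"fam" → [2050] → " fam"`.

```python
from typing import Callable, List

def round_trip_ok(text: str,
                  tokenize: Callable[[str], List[int]],
                  detokenize: Callable[[List[int]], str]) -> bool:
    """True if text survives tokenize → detokenize, ignoring the
    leading-whitespace artifact some vocabularies add on decode."""
    return detokenize(tokenize(text)).lstrip() == text.lstrip()
```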


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.


Reference: github-starred/ollama#44940