TinyTorch/modules/11_tokenization/module.yaml
Vijay Janapa Reddi a9fed98b66 Clean up repository: remove temp files, organize modules, prepare for PyPI publication
- Removed temporary test files and audit reports
- Deleted backup and temp_holding directories
- Reorganized module structure (07->09 spatial, 09->07 dataloader)
- Added new modules: 11-14 (tokenization, embeddings, attention, transformers)
- Updated examples with historical ML milestones
- Cleaned up documentation structure
2025-09-24 10:13:37 -04:00

name: "Tokenization"
number: 11
description: "Text processing systems that convert raw text into numerical sequences for language models"
learning_objectives:
- "Implement character-level tokenization with special token handling"
- "Build BPE (Byte Pair Encoding) tokenizer for subword units"
- "Understand tokenization trade-offs: vocabulary size vs sequence length"
- "Optimize tokenization performance for production systems"
- "Analyze how tokenization affects model memory and training efficiency"
prerequisites:
- "02_tensor"
exports:
- "CharTokenizer"
- "BPETokenizer"
- "TokenizationProfiler"
- "OptimizedTokenizer"
systems_concepts:
- "Memory efficiency of token representations"
- "Vocabulary size vs model size trade-offs"
- "Tokenization throughput optimization"
- "String processing performance"
- "Cache-friendly text processing patterns"
ml_systems_focus: "Text processing pipelines, tokenization throughput, memory-efficient vocabulary management"
estimated_time: "4-5 hours"
next_modules:
- "12_embeddings"
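
The `CharTokenizer` exported above can be illustrated with a minimal sketch: character-level tokenization keeps the vocabulary tiny (one id per character plus special tokens) at the cost of long token sequences, which is the vocabulary-size vs sequence-length trade-off named in the learning objectives. The implementation below is an assumption for illustration only; the module's actual `CharTokenizer` API may differ.

```python
# Minimal sketch of a character-level tokenizer with special-token
# handling. The class name matches the module's exports; the method
# names and internals here are illustrative assumptions.

class CharTokenizer:
    def __init__(self, corpus, specials=("<pad>", "<unk>")):
        # Special tokens take the lowest ids so they stay stable
        # regardless of which characters appear in the corpus.
        self.vocab = {tok: i for i, tok in enumerate(specials)}
        for ch in sorted(set(corpus)):
            self.vocab[ch] = len(self.vocab)
        self.inverse = {i: tok for tok, i in self.vocab.items()}
        self.unk_id = self.vocab["<unk>"]

    def encode(self, text):
        # Characters outside the training corpus map to <unk>.
        return [self.vocab.get(ch, self.unk_id) for ch in text]

    def decode(self, ids):
        return "".join(self.inverse[i] for i in ids)


# Usage: a one-line corpus yields a vocabulary of ~10 ids, but every
# character of input costs one token of sequence length.
tok = CharTokenizer("hello world")
ids = tok.encode("hello")
print(ids, "->", tok.decode(ids))
```

A BPE tokenizer shrinks sequence length by merging frequent character pairs into subword units, at the price of a larger vocabulary; profiling that trade-off is what the `TokenizationProfiler` export suggests the module covers.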