- Added typing imports (List, Dict, Tuple, Optional, Set) to export section
- Fixed NameError: name 'List' is not defined
- Fixed milestone copilot references from SimpleTokenizer to CharTokenizer
- Verified transformer learning: 99.1% loss decrease in 500 steps
Training results:
- Initial loss: 3.555
- Final loss: 0.031
- Training time: 52.1s for 500 steps
- Gradient flow: All 21 parameters receiving gradients (check sketched below)
- Model: 1-layer GPT with 32d embeddings, 4 heads
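The gradient-flow check can be sketched roughly as follows; this is an illustrative snippet, not the project's actual verification code, and it assumes a PyTorch-style API where model.parameters() yields tensors whose .grad is populated by loss.backward().

```python
# Hypothetical gradient-flow check (assumed API, see note above).
def check_gradient_flow(model, loss):
    loss.backward()
    params = list(model.parameters())
    with_grad = [p for p in params if p.grad is not None]
    print(f"{len(with_grad)}/{len(params)} parameters receiving gradients")
```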
Issue: CharTokenizer was failing with NameError: name 'List' is not defined
Root cause: typing imports were not marked with #| export
Fix:
✅ Added #| export directive to import block in tokenization_dev.py
✅ Re-exported module using 'tito export 10_tokenization'
✅ typing.List, Dict, Tuple, Optional, Set now properly exported
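For reference, the exported import cell now looks roughly like this (a sketch based on the fix above; the exact cell contents are assumed):

```python
#| export
from typing import List, Dict, Tuple, Optional, Set
```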
Verification (sketch below):
- CharTokenizer.build_vocab() works ✅
- encode() and decode() work ✅
- Tested on Shakespeare sample text ✅
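A rough sketch of that round-trip verification, assuming the method names listed above (build_vocab, encode, decode); the import path, constructor call, and sample text are illustrative assumptions:

```python
# Hypothetical verification sketch; import path and sample text are assumptions.
from tinytorch.core.tokenization import CharTokenizer

sample = "To be, or not to be, that is the question."
tok = CharTokenizer()
tok.build_vocab(sample)           # build the character vocabulary
ids = tok.encode(sample)          # text -> list of integer IDs
assert tok.decode(ids) == sample  # decode reverses encode exactly
```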
This fixes the integration with vaswani_shakespeare.py, which now properly
uses CharTokenizer from Module 10 instead of manual tokenization.
Following module developer guidelines, added comprehensive visual diagrams:
1. Text-to-Numbers Pipeline (Introduction):
- Added full boxed diagram showing 4-step tokenization process
- Clear visual flow from human text to numerical IDs
- Each step explained inline with the diagram
2. Character Tokenization Process (sketch below):
- Step-by-step vocabulary building visualization
- Shows corpus → unique chars → vocab with IDs
- Encoding process with ID lookup visualization
- Decoding process with reverse lookup
- All in clear nested boxes
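The process those boxes walk through can be captured in a few lines of plain Python; this is an illustration of the idea, not the module's CharTokenizer implementation:

```python
# corpus -> unique chars -> vocab with IDs, then encode/decode by lookup.
corpus = "hello world"
vocab = {ch: i for i, ch in enumerate(sorted(set(corpus)))}
inverse = {i: ch for ch, i in vocab.items()}

encoded = [vocab[ch] for ch in corpus]          # encoding: ID lookup per character
decoded = "".join(inverse[i] for i in encoded)  # decoding: reverse lookup
assert decoded == corpus
```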
3. BPE Training Algorithm (merge step sketched below):
- Comprehensive 4-step process with nested boxes
- Pair frequency analysis with bar charts (████)
- Before/After merge visualizations
- Iteration examples showing vocabulary growth
- Final results with key insights
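One iteration of that loop, counting the most frequent adjacent pair and merging it everywhere, can be sketched as below; the toy corpus is made up, and this is not the module's BPETokenizer code:

```python
from collections import Counter

# Toy corpus: words as tuples of symbols (characters to start), with counts.
words = {("l", "o", "w"): 5, ("l", "o", "w", "e", "r"): 2, ("n", "e", "w"): 3}

# Step 1: count adjacent-pair frequencies across the corpus.
pairs = Counter()
for word, freq in words.items():
    for a, b in zip(word, word[1:]):
        pairs[(a, b)] += freq
best = max(pairs, key=pairs.get)  # here ("l", "o") with frequency 7

# Step 2: merge the best pair into a single new symbol wherever it occurs.
def merge(word, pair):
    out, i = [], 0
    while i < len(word):
        if i < len(word) - 1 and (word[i], word[i + 1]) == pair:
            out.append(word[i] + word[i + 1])
            i += 2
        else:
            out.append(word[i])
            i += 1
    return tuple(out)

words = {merge(w, best): f for w, f in words.items()}
# Repeating these two steps grows the vocabulary one merged token per iteration.
```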
4. Memory Layout for Embedding Tables (arithmetic sketched below):
- Visual bars showing relative memory sizes
- Character (204KB) vs BPE-50K (102MB) vs Word-100K (204MB)
- Shows fp32/fp16/int8 precision trade-offs
- Real production model examples (GPT-2/3, BERT, T5, LLaMA)
- Clear table format for comparison
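Those sizes are consistent with vocab_size × embedding_dim parameters at a given precision; assuming a 512-dimensional embedding, a ~100-character vocabulary, and fp32 (4 bytes per value), values inferred from the numbers rather than stated in the diagram, the arithmetic works out as:

```python
def embedding_table_bytes(vocab_size, dim=512, bytes_per_value=4):
    """Memory of a vocab_size x dim embedding table at the given precision."""
    return vocab_size * dim * bytes_per_value

print(embedding_table_bytes(100) / 1e3)      # character vocab -> ~204.8 KB
print(embedding_table_bytes(50_000) / 1e6)   # BPE-50K         -> ~102.4 MB
print(embedding_table_bytes(100_000) / 1e6)  # Word-100K       -> ~204.8 MB
print(embedding_table_bytes(50_000, bytes_per_value=2) / 1e6)  # fp16 halves it
```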
Educational improvements:
- More visual, less text-heavy
- Clearer step-by-step flows
- Better intuition building
- Production context throughout
- Following module developer ASCII diagram patterns
Students now see:
- HOW tokenization works (not just WHAT)
- WHY different strategies exist
- WHAT the memory implications are
- HOW production models make these choices
- 09_spatial: Export Conv2d, MaxPool2d, AvgPool2d only
- 10_tokenization: Export Tokenizer, CharTokenizer, BPETokenizer only
- 11_embeddings: Export Embedding, PositionalEncoding only
Continues the professional selective-export pattern: clean public APIs, while
development utilities stay in the development environment (illustrated below).
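As a rough illustration of what the pattern looks like inside a module source, only definitions under a #| export directive reach the package; the helper name below is hypothetical:

```python
#| export
class CharTokenizer:
    """Public API: exported into the tinytorch package."""
    ...

# No #| export directive: stays in the development notebook only.
def _debug_print_vocab(tokenizer):
    ...
```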
- Remove circular imports where modules imported from themselves
- Convert tinytorch.core imports to sys.path relative imports (pattern sketched below)
- Only import dependencies that are actually used in each module
- Preserve documentation imports in markdown cells
- Use consistent relative path pattern across all modules
- Remove hardcoded absolute paths in favor of relative imports
Affected modules: 02_activations, 03_layers, 04_losses, 06_optimizers,
07_training, 09_spatial, 12_attention, 17_quantization
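A sketch of the relative-import pattern these changes converge on; the directory and module names here are placeholders, not the exact paths used by every affected module:

```python
import sys

sys.path.append("../01_tensor")  # relative path to the sibling module (placeholder name)
from tensor_dev import Tensor    # import only the dependency that is actually used
```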