Commit Graph

9 Commits

Author SHA1 Message Date
Vijay Janapa Reddi
ddaaf68505 Merge transformer-training into dev
Complete Milestone 05 - 2017 Transformer implementation

Major Features:
- TinyTalks interactive dashboard with rich CLI
- Complete gradient flow fixes (13 tests passing)
- Multiple training examples (5-min, 10-min, levels 1-2)
- Milestone celebration card (perceptron style)
- Comprehensive documentation

Gradient Flow Fixes:
- Fixed gradient flow through reshape, matmul (3D), embedding, sqrt, mean, sub, div, and GELU (see the GELU sketch below)
- All transformer components now fully differentiable
- Hybrid attention approach for educational clarity + gradients
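
For reference, a minimal NumPy sketch of what one of these fixes entails: the tanh
approximation of GELU and its hand-derived backward, verified against a central
finite difference. This illustrates the math only; it is not the TinyTorch code itself.

  import numpy as np

  C = np.sqrt(2.0 / np.pi)   # constant in the tanh approximation of GELU

  def gelu_forward(x):
      # gelu(x) ~= 0.5 * x * (1 + tanh(C * (x + 0.044715 * x**3)))
      u = C * (x + 0.044715 * x**3)
      return 0.5 * x * (1.0 + np.tanh(u))

  def gelu_backward(x, grad_out):
      # product rule + chain rule on the approximation above
      u = C * (x + 0.044715 * x**3)
      t = np.tanh(u)
      du = C * (1.0 + 3 * 0.044715 * x**2)
      return grad_out * (0.5 * (1.0 + t) + 0.5 * x * (1.0 - t**2) * du)

  x = np.random.randn(8)
  eps = 1e-5
  numeric = (gelu_forward(x + eps) - gelu_forward(x - eps)) / (2 * eps)
  assert np.allclose(gelu_backward(x, np.ones_like(x)), numeric, atol=1e-6)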

Training Results:
- 10-min training: 96.6% loss improvement, 62.5% accuracy
- 5-min training: 97.8% loss improvement, 66.7% accuracy
- Working chatbot with coherent responses

Files Added:
- tinytalks_dashboard.py (main demo)
- tinytalks_chatbot.py, tinytalks_dataset.py
- level1_memorization.py, level2_patterns.py
- Comprehensive docs and test suites

Ready for student use
2025-10-30 17:48:11 -04:00
Vijay Janapa Reddi
fe07e2b7a5 fix(tokenization): Add missing imports to tokenization module
- Added typing imports (List, Dict, Tuple, Optional, Set) to export section
- Fixed NameError: name 'List' is not defined
- Fixed milestone copilot references from SimpleTokenizer to CharTokenizer
- Verified transformer learning: 99.1% loss decrease in 500 steps

Training results:
- Initial loss: 3.555
- Final loss: 0.031
- Training time: 52.1s for 500 steps
- Gradient flow: all 21 parameters receiving gradients (check sketch below)
- Model: 1-layer GPT with 32d embeddings, 4 heads
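
A sketch of the kind of gradient-flow check this refers to, assuming a
PyTorch-style interface where named_parameters() yields (name, tensor) pairs and
tensors carry a .grad attribute after backward(); the actual TinyTorch API may differ.

  def check_gradient_flow(model, loss):
      # run backprop, then confirm every parameter received a gradient
      loss.backward()
      params = list(model.named_parameters())
      missing = [name for name, p in params if p.grad is None]
      assert not missing, f"parameters with no gradient: {missing}"
      print(f"all {len(params)} parameters receiving gradients")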
2025-10-30 11:09:38 -04:00
Vijay Janapa Reddi
62636fa92a fix: Add missing typing imports to Module 10 tokenization
Issue: CharTokenizer was failing with NameError: name 'List' is not defined
Root cause: typing imports were not marked with #| export

Fix:
- Added #| export directive to the import block in tokenization_dev.py (shown below)
- Re-exported the module using 'tito export 10_tokenization'
- typing.List, Dict, Tuple, Optional, Set now properly exported
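
The corrected import block in tokenization_dev.py presumably looks like the
following; the #| export directive is what tells the exporter to copy the cell
into the generated package.

  #| export
  from typing import List, Dict, Tuple, Optional, Set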

Verification:
- CharTokenizer.build_vocab() works 
- encode() and decode() work 
- Tested on Shakespeare sample text 

This fixes the integration with vaswani_shakespeare.py, which now properly uses
CharTokenizer from Module 10 instead of manual tokenization (usage sketch below).
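
A sketch of the resulting usage, assuming CharTokenizer is importable from
tinytorch.core.tokenization with a no-argument constructor and the
build_vocab/encode/decode methods named above.

  from tinytorch.core.tokenization import CharTokenizer

  tokenizer = CharTokenizer()
  tokenizer.build_vocab("To be, or not to be")   # one ID per unique character
  ids = tokenizer.encode("to be")                # text -> list of integer IDs
  assert tokenizer.decode(ids) == "to be"        # IDs -> text round-trip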
2025-10-28 09:44:24 -04:00
Vijay Janapa Reddi
bde003d908 fix: Adjust ASCII diagram spacing for consistent alignment 2025-10-24 17:51:11 -04:00
Vijay Janapa Reddi
c6853d7550 docs: Improve tokenization module with enhanced ASCII diagrams
Following module developer guidelines, added comprehensive visual diagrams:

1. Text-to-Numbers Pipeline (Introduction):
   - Added full boxed diagram showing 4-step tokenization process
   - Clear visual flow from human text to numerical IDs
   - Each step explained inline with the diagram

2. Character Tokenization Process:
   - Step-by-step vocabulary building visualization
   - Shows corpus → unique chars → vocab with IDs
   - Encoding process with ID lookup visualization
   - Decoding process with reverse lookup
   - All in clear nested boxes

3. BPE Training Algorithm (merge sketch after this list):
   - Comprehensive 4-step process with nested boxes
   - Pair frequency analysis with bar charts (████)
   - Before/After merge visualizations
   - Iteration examples showing vocabulary growth
   - Final results with key insights

4. Memory Layout for Embedding Tables (size arithmetic after this list):
   - Visual bars showing relative memory sizes
   - Character (204KB) vs BPE-50K (102MB) vs Word-100K (204MB)
   - Shows fp32/fp16/int8 precision trade-offs
   - Real production model examples (GPT-2/3, BERT, T5, LLaMA)
   - Clear table format for comparison
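
As referenced in item 3, a minimal Python sketch of one BPE training iteration:
count adjacent pair frequencies, then merge the most frequent pair everywhere.
This is a simplified illustration, not the module's implementation.

  from collections import Counter

  def bpe_merge_step(words):
      # words: list of token sequences, e.g. [["l","o","w"], ...]
      pairs = Counter()
      for w in words:
          for a, b in zip(w, w[1:]):
              pairs[(a, b)] += 1
      if not pairs:
          return words, None
      best = max(pairs, key=pairs.get)          # most frequent adjacent pair
      merged = []
      for w in words:
          out, i = [], 0
          while i < len(w):
              if i + 1 < len(w) and (w[i], w[i + 1]) == best:
                  out.append(w[i] + w[i + 1])   # fuse the pair into one token
                  i += 2
              else:
                  out.append(w[i])
                  i += 1
          merged.append(out)
      return merged, best

  words = [list("lower"), list("lowest"), list("low")]
  words, merge = bpe_merge_step(words)   # merges ('l','o') first here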
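
And the arithmetic behind item 4's sizes, which are consistent with a
512-dimensional embedding table at fp32 (vocab_size x d_model x 4 bytes); the
d_model value and the ~100-entry character vocabulary are assumptions that
reproduce the quoted numbers.

  d_model, bytes_per_value = 512, 4
  for name, vocab in [("char", 100), ("BPE-50K", 50_000), ("word-100K", 100_000)]:
      mb = vocab * d_model * bytes_per_value / 1e6
      print(f"{name}: {mb:.1f} MB")   # 0.2 MB (~204 KB), 102.4 MB, 204.8 MB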

Educational improvements:
- More visual, less text-heavy
- Clearer step-by-step flows
- Better intuition building
- Production context throughout
- Following module developer ASCII diagram patterns

Students now see:
- HOW tokenization works (not just WHAT)
- WHY different strategies exist
- WHAT the memory implications are
- HOW production models make these choices
2025-10-24 17:51:11 -04:00
Vijay Janapa Reddi
0e997e4a10 refactor: Standardize imports across modules 10-17 to match 01-09
Enforce consistent import pattern across all modules:
- Direct imports from tinytorch.core.* (no fallbacks)
- Remove all sys.path.append manipulations
- Remove try/except import fallbacks (example below)
- Remove mock/dummy class fallbacks

Fixed modules:
- Module 10 (tokenization): Removed try/except fallback
- Module 12 (attention): Removed sys.path.append for tensor/layers
- Module 15 (profiling): Removed sys.path + mock Tensor/Linear/Conv2d
- Module 16 (acceleration): Removed hardcoded path + importlib + mock Tensor
- Module 17 (quantization): Removed sys.path + disabled fallback block

All modules now follow the same pattern as modules 01-09:
  from tinytorch.core.tensor import Tensor
  from tinytorch.core.layers import Linear
  # etc.
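
For contrast, a sketch of the kind of fallback that was removed (illustrative;
the exact code varied by module, and the dev path below is hypothetical):

  # Removed anti-pattern: silently fall back to a dev path when the
  # installed package is missing.
  try:
      from tinytorch.core.tensor import Tensor
  except ImportError:
      import os, sys
      sys.path.append(os.path.join(os.path.dirname(__file__), "..", "01_tensor"))
      from tensor_dev import Tensor   # hypothetical dev module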

No development fallbacks - assume tinytorch package is installed.
2025-10-24 17:51:10 -04:00
Vijay Janapa Reddi
df0d9eb7dc feat: implement selective exports for modules 09-11
- 09_spatial: Export Conv2d, MaxPool2d, AvgPool2d only
- 10_tokenization: Export Tokenizer, CharTokenizer, BPETokenizer only
- 11_embeddings: Export Embedding, PositionalEncoding only

Continues the professional selective-export pattern: clean public APIs, while
development utilities remain in the development environment (sketch below).
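
A sketch of what the selective-export pattern presumably looks like in a dev
notebook, using the nbdev-style #| export directive; definitions without the
directive stay in the development environment. The helper name is hypothetical.

  #| export
  class CharTokenizer:
      # public API: exported into the tinytorch package
      ...

  # no export directive: development-only helper, never exported
  def _plot_vocab_growth(history):
      ...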
2025-09-30 09:56:50 -04:00
Vijay Janapa Reddi
9cfc7673cf feat: update advanced modules (09-20) with latest improvements
- Update spatial, tokenization, embeddings, attention modules
- Update transformers, kv-caching, profiling modules
- Update acceleration, quantization, compression modules
- Update benchmarking and capstone modules
- Align with current TinyTorch standards and patterns
2025-09-30 09:45:00 -04:00
Vijay Janapa Reddi
e1a9541c4b Clean up module imports: convert tinytorch.core to sys.path style
- Remove circular imports where modules imported from themselves
- Convert tinytorch.core imports to sys.path relative imports (pattern below)
- Only import dependencies that are actually used in each module
- Preserve documentation imports in markdown cells
- Use consistent relative path pattern across all modules
- Remove hardcoded absolute paths in favor of relative imports
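
A sketch of the sys.path pattern described above (illustrative; the relative
path and the imported name are hypothetical):

  import os, sys
  sys.path.append(os.path.join(os.path.dirname(__file__), ".."))
  from activations_dev import ReLU   # import only what the module actually uses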

Affected modules: 02_activations, 03_layers, 04_losses, 06_optimizers,
07_training, 09_spatial, 12_attention, 17_quantization
2025-09-30 08:58:58 -04:00