Vijay Janapa Reddi 3b21687f0f Implement REAL KV caching with 6x speedup
Module 14 now provides a TRUE O(n²) → O(n) reduction in per-token attention cost, with measurable speedup!

Implementation:
- cached_forward() now computes K,V only for the NEW token (see the sketch after this list)
- Stores K,V in cache, retrieves full history for attention
- Uses numpy operations directly for efficiency
- Detects single-token (generation) vs full-sequence (training)
- First token handled via original path (cache initialization)
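
A minimal sketch of this cached attention path (illustrative only: the KVCache class, the W_q/W_k/W_v attributes, and the single-head layout are assumptions, not the actual TinyTorch API):

    import numpy as np

    class KVCache:
        """Stores K,V rows for every token processed so far (one layer)."""
        def __init__(self, max_seq_len, embed_dim):
            self.k = np.zeros((max_seq_len, embed_dim))
            self.v = np.zeros((max_seq_len, embed_dim))
            self.pos = 0  # number of committed tokens

        def append(self, k_new, v_new):
            self.k[self.pos] = k_new
            self.v[self.pos] = v_new

        def advance(self):
            self.pos += 1

    class CachedSelfAttention:
        def __init__(self, embed_dim, max_seq_len=512):
            rng = np.random.default_rng(0)
            scale = 1.0 / np.sqrt(embed_dim)
            self.W_q = rng.normal(size=(embed_dim, embed_dim)) * scale
            self.W_k = rng.normal(size=(embed_dim, embed_dim)) * scale
            self.W_v = rng.normal(size=(embed_dim, embed_dim)) * scale
            self.cache = KVCache(max_seq_len, embed_dim)

        def cached_forward(self, x_new):
            """x_new: (embed_dim,) embedding of the single NEW token."""
            # K,V computed only for the new token; history comes from the cache.
            q = x_new @ self.W_q
            self.cache.append(x_new @ self.W_k, x_new @ self.W_v)
            n = self.cache.pos + 1                   # cached history + the new token
            K, V = self.cache.k[:n], self.cache.v[:n]
            scores = K @ q / np.sqrt(q.shape[-1])    # O(n) per step instead of O(n^2)
            weights = np.exp(scores - scores.max())
            weights /= weights.sum()
            return weights @ V

    # Usage during generation: one token at a time, advance after each step.
    attn = CachedSelfAttention(embed_dim=32)
    for emb in np.random.default_rng(1).normal(size=(5, 32)):
        out = attn.cached_forward(emb)
        attn.cache.advance()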

Results (test_kv_cache_milestone.py):
 WITHOUT cache: 118.2 tok/s (baseline)
 WITH cache: 705.6 tok/s (optimized)
 SPEEDUP: 6x on tiny model (2 layers, embed_dim=32)

For longer sequences: 10-15x+ speedup expected!
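
Why the gap should widen with sequence length (rough arithmetic, not a measurement):

    # Per generated token, the uncached path reruns attention over all n
    # positions (~n^2 score computations), while the cached path computes
    # scores only for the new token's query (~n). The theoretical ratio is n:
    for n in (32, 128, 512):
        uncached_ops = n * n   # full-sequence attention recomputed each step
        cached_ops = n         # one query row against cached K/V
        print(f"n={n:4d}  theoretical attention ratio ~{uncached_ops // cached_ops}x")
    # Measured speedups (6x above) are smaller because embeddings, MLP blocks,
    # and Python overhead are unaffected by the cache.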

Milestone integration (vaswani_chatgpt.py):
- Resets cache at start of each generation
- Populates cache with prompt tokens
- Processes only new token when cache enabled
- Calls cache.advance() after each token
- Seamless fallback to standard generation when the cache is disabled (see the sketch below)
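
Sketch of that generation flow (model, sample_next, and the cache method names here are placeholders, not the exact vaswani_chatgpt.py code):

    import numpy as np

    def sample_next(logits):
        # Greedy decoding for simplicity; the milestone may sample differently.
        return int(np.argmax(logits))

    def generate(model, prompt_ids, max_new_tokens=50, use_cache=True):
        ids = list(prompt_ids)
        if not use_cache:
            # Fallback: standard generation, rerunning the full sequence each step.
            for _ in range(max_new_tokens):
                logits = model.forward(ids)          # O(n^2) attention every step
                ids.append(sample_next(logits[-1]))
            return ids

        model.reset_cache()                          # fresh cache per generation
        for tok in prompt_ids:                       # populate cache with the prompt
            logits = model.cached_forward(tok)
            model.cache.advance()
        for _ in range(max_new_tokens):
            next_id = sample_next(logits)            # logits of the last token seen
            ids.append(next_id)
            logits = model.cached_forward(next_id)   # process only the NEW token
            model.cache.advance()                    # then advance the cache
        return ids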

Gradient safety:
 Training (seq_len>1): Uses original path (full gradients)
 Generation (seq_len=1): Uses cache path (inference only)
 No gradient tracking in cache operations (uses .data); see the dispatch sketch below
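
Sketch of that dispatch (the Tensor .data attribute and the method names are assumptions for illustration):

    class AttentionWithCache:
        """Minimal stub showing the training-vs-generation dispatch."""
        def original_forward(self, x):
            ...  # full-sequence attention; autograd graph kept intact (omitted)

        def cached_forward(self, x_new):
            ...  # single-token cached attention on raw numpy arrays (omitted)

        def forward(self, x):
            seq_len = x.data.shape[0]   # x: Tensor of shape (seq_len, embed_dim)
            if seq_len > 1:
                # Training / full-sequence input: original path, full gradients.
                return self.original_forward(x)
            # Single new token during generation: inference-only cache path.
            # Working on x.data (plain numpy) keeps cache ops out of autograd.
            return self.cached_forward(x.data[0])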

This is how production LLMs work! Students learn real ML systems engineering.
2025-11-05 20:54:55 -05:00