mirror of
https://github.com/MLSysBook/TinyTorch.git
synced 2026-04-28 04:17:30 -05:00
New Features: - Add ReLUBackward for proper ReLU gradient computation - Patch ReLU.forward() in enable_autograd() for gradient tracking - Create polished XOR milestone scripts matching perceptron style XOR Milestone Scripts (milestones/02_xor_crisis_1969/): - xor_crisis.py: Shows single-layer perceptron FAILING (~50% accuracy) - xor_solved.py: Shows multi-layer network SUCCEEDING (75%+ accuracy) - Beautiful rich output with tables, panels, historical context - Pedagogically structured like the perceptron milestone Results: ✅ Single-layer: Stuck at ~50% (proves the crisis) ✅ Multi-layer: 75% accuracy (proves hidden layers work!) ✅ ReLU gradients flow correctly through network ✅ All 4 core activations now support autograd: - Sigmoid ✓, ReLU ✓, Tanh ✓ (future), GELU ✓ (future) Historical Significance: This recreates the exact problem that killed AI for 17 years and demonstrates the solution that started the modern era!