Commit Graph

2 Commits

Author SHA1 Message Date
Vijay Janapa Reddi
29d6054d8e Add comprehensive training infrastructure with validation and monitoring
Phase 1 Complete: Training Infrastructure
- TrainingMonitor class with loss tracking, validation splits, early stopping
- Fixed gradient flow by maintaining computational graph
- Updated XOR and MNIST to use new infrastructure
- Added progress visualization with status indicators

Results:
- Perceptron: 100% accuracy achieved
- XOR: Learning with validation monitoring
- MNIST: Gradient flow verified on all 6 parameters
- Validation splits prevent overfitting
- Early stopping triggers correctly

Next: Ensure all examples learn properly before optimization
2025-09-28 21:24:42 -04:00
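The TrainingMonitor class this commit adds is not shown in the log, but its described behavior (loss tracking, validation splits, early stopping) can be sketched. All names, defaults, and the split helper below are assumptions for illustration, not the project's actual API:

```python
import random


class TrainingMonitor:
    """Hypothetical sketch: track per-epoch losses and trigger early stopping
    when validation loss stops improving for `patience` epochs."""

    def __init__(self, patience=5, min_delta=1e-4):
        self.patience = patience        # epochs to wait without improvement
        self.min_delta = min_delta      # minimum change that counts as improvement
        self.best_val = float("inf")
        self.bad_epochs = 0
        self.history = []               # (train_loss, val_loss) per epoch

    def update(self, train_loss, val_loss):
        """Record one epoch; return True if training should stop."""
        self.history.append((train_loss, val_loss))
        if val_loss < self.best_val - self.min_delta:
            self.best_val = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience


def train_val_split(samples, val_fraction=0.2, seed=0):
    """Shuffle indices and split samples into train/validation lists."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)
    cut = int(len(samples) * (1 - val_fraction))
    return [samples[i] for i in idx[:cut]], [samples[i] for i in idx[cut:]]


# Usage: a plateauing validation loss trips the stopping condition.
monitor = TrainingMonitor(patience=3)
for epoch, val_loss in enumerate([0.9, 0.7, 0.6, 0.61, 0.62, 0.60]):
    if monitor.update(train_loss=val_loss * 0.9, val_loss=val_loss):
        print(f"early stop at epoch {epoch}")  # → early stop at epoch 5
        break
```

Holding out a validation split and stopping on its plateau, rather than on training loss, is what gives the overfitting protection the commit claims.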
Vijay Janapa Reddi
a099469591 Fix gradient flow in examples: Maintain computational graph
Critical fix: Examples now properly maintain the computational graph
for gradient flow by:
1. Using tensor operations (diff, multiplication) instead of numpy
2. Calling backward directly on the loss tensor with gradient argument
3. Properly extracting gradient data for parameter updates

Results:
- Perceptron: Now achieves 100% accuracy (loss decreases from 0.20 to 0.002)
- XOR: Now learning! Reaches 3/4 correct after 5000 epochs (vs. stuck at 50% before)
- Gradient flow confirmed working through all layers

The issue: the loss was computed by converting to numpy arrays and wrapping the result in a new Tensor, which has no link back to its inputs and so severs the graph. Building the loss from tensor operations end to end keeps the graph intact.
2025-09-28 20:09:48 -04:00
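The broken-vs-fixed pattern this commit describes can be illustrated with a toy scalar autograd tensor. This is a sketch only; the project's real Tensor class and its API are not shown in the log:

```python
class Tensor:
    """Toy scalar autograd tensor (illustration only, not the project's class)."""

    def __init__(self, data, parents=(), backward_fn=None):
        self.data = float(data)
        self.grad = 0.0
        self._parents = parents        # graph edges back to the inputs
        self._backward_fn = backward_fn

    def __sub__(self, other):
        out = Tensor(self.data - other.data, (self, other))
        def _bw(g):
            self.grad += g
            other.grad -= g
        out._backward_fn = _bw
        return out

    def __mul__(self, other):
        out = Tensor(self.data * other.data, (self, other))
        def _bw(g):
            self.grad += g * other.data
            other.grad += g * self.data
        out._backward_fn = _bw
        return out

    def backward(self, gradient=1.0):
        # Seed the output gradient, then walk the graph in reverse
        # topological order, accumulating into each parent's .grad.
        self.grad = gradient
        topo, seen = [], set()
        def build(t):
            if id(t) not in seen:
                seen.add(id(t))
                for p in t._parents:
                    build(p)
                topo.append(t)
        build(self)
        for t in reversed(topo):
            if t._backward_fn is not None:
                t._backward_fn(t.grad)


pred, target = Tensor(0.8), Tensor(1.0)

# Broken pattern: wrapping a raw number in a fresh Tensor severs the graph,
# so backward() never reaches pred.
loss_broken = Tensor((pred.data - target.data) ** 2)
loss_broken.backward()
print(pred.grad)  # 0.0 -- no gradient flowed

# Fixed pattern: build the loss from tensor operations, then call
# backward on the loss tensor with an explicit gradient argument.
diff = pred - target
loss = diff * diff
loss.backward(1.0)
print(pred.grad)  # d(loss)/d(pred) = 2 * (pred - target) ≈ -0.4
```

The broken loss has the right value but no parents, so no gradient can reach the parameters; the fixed loss records its inputs at every step, which is exactly why "using tensor operations instead of numpy" restores learning.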