Mirror of https://github.com/MLSysBook/TinyTorch.git (synced 2026-04-28 16:12:32 -05:00)
- Created perceptron_trained.py milestone with a full training loop
- Restored tinytorch/core/optimizers.py with Optimizer, SGD, Adam, and AdamW classes
- Fixed imports to use tinytorch.core.* instead of tensor_dev
- Fixed tinytorch/core/losses.py with all loss functions
- Fixed tinytorch/core/training.py imports

ISSUE: Training loop runs but doesn't learn (gradients not flowing)
- Loss stays constant at 0.7911
- Weights don't update
- Likely cause: autograd (Module 05) backward() not fully implemented
- Need to fix Tensor.backward() and gradient computation
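A useful way to isolate the "loss stays constant" symptom is to compare against a reference loop with hand-written gradients: if this NumPy baseline learns on the same kind of data but the TinyTorch loop does not, the bug is in Tensor.backward() / gradient plumbing rather than in the data or the loss. This is an illustrative sketch only; none of these names come from TinyTorch's actual API.

```python
import numpy as np

# Toy linearly separable data (illustrative, not from the repo)
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)   # perceptron weights
b = 0.0           # bias
lr = 0.5

def forward(X, w, b):
    # Sigmoid activation over a linear layer
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

def bce(p, y):
    # Binary cross-entropy, clamped for numerical safety
    eps = 1e-7
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

loss_start = bce(forward(X, w, b), y)
for _ in range(200):
    p = forward(X, w, b)
    # Manual gradient of BCE-with-sigmoid w.r.t. the pre-activation: p - y
    dz = (p - y) / len(y)
    w -= lr * (X.T @ dz)   # if weights never change here, the update math is wrong
    b -= lr * dz.sum()
loss_end = bce(forward(X, w, b), y)
print(loss_start, loss_end)  # loss should drop well below its starting value
```

A quick diagnostic in the real loop is the same idea in reverse: snapshot the weights before a step, call backward() and the optimizer, and assert the weights actually changed; a constant loss with unchanged weights points at backward() returning zero or None gradients.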