TinyTorch/tinytorch
commit b5079bba40
Author: Vijay Janapa Reddi
Date:   2025-10-28 08:04:19 -04:00

fix(autograd): Add SoftmaxBackward and patch Softmax.forward()
- Implemented SoftmaxBackward with the correct gradient formula (a sketch follows below)
- Patched Softmax.forward() in enable_autograd() so softmax outputs are tracked by autograd
- Fixed LayerNorm gamma/beta to be created with requires_grad=True
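
For reference, a minimal NumPy sketch of the softmax backward pass described
above; this is not TinyTorch's actual implementation, and the names
(softmax, softmax_backward, upstream) are illustrative. With y = softmax(x)
and upstream gradient g, the Jacobian-vector product is
dx = y * (g - sum(g * y)) along the softmax axis, which follows from
dy_i/dx_j = y_i * (delta_ij - y_j).

import numpy as np

def softmax(x, axis=-1):
    # Subtract the per-row max for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def softmax_backward(upstream, out, axis=-1):
    # Jacobian-vector product of softmax: dx = y * (g - sum(g * y)),
    # where y is the forward output and g is the upstream gradient.
    return out * (upstream - (upstream * out).sum(axis=axis, keepdims=True))

# Finite-difference check on a small input.
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 5))
g = rng.standard_normal((2, 5))
analytic = softmax_backward(g, softmax(x))

eps = 1e-6
numeric = np.zeros_like(x)
for idx in np.ndindex(*x.shape):
    xp, xm = x.copy(), x.copy()
    xp[idx] += eps
    xm[idx] -= eps
    numeric[idx] = ((softmax(xp) - softmax(xm)) * g).sum() / (2 * eps)

assert np.allclose(analytic, numeric, atol=1e-5)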

Progress:
- Softmax now correctly computes gradients
- LayerNorm parameters are now initialized with requires_grad=True
- Still debugging: the Q/K/V projections, the LayerNorms inside transformer blocks, and the first MLP layer do not yet receive gradients

Current: 9/21 parameters receive gradients (up from 0/21); a coverage-check sketch follows.
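
A hypothetical helper along these lines produces the count above; the
parameter API it assumes (.grad, named_parameters()) is illustrative, not
confirmed TinyTorch internals.

def gradient_coverage(named_params):
    # named_params: list of (name, tensor) pairs; a tensor counts as
    # "receiving a gradient" once backward() has populated its .grad.
    missing = [name for name, p in named_params if p.grad is None]
    received = len(named_params) - len(missing)
    print(f"{received}/{len(named_params)} parameters receive gradients")
    if missing:
        print("no grad:", ", ".join(missing))

# Usage after one forward/backward pass (names illustrative):
#   loss = model.forward(batch).sum()
#   loss.backward()
#   gradient_coverage(list(model.named_parameters()))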