- Implemented SoftmaxBackward with proper gradient formula
- Patched Softmax.forward() in enable_autograd()
- Fixed LayerNorm gamma/beta to have requires_grad=True
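The softmax backward pass mentioned above can be sketched as follows. This is a minimal NumPy illustration of the standard Jacobian-vector product for softmax, not the project's actual implementation; all names here are illustrative:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax along the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def softmax_backward(grad_out, y):
    # dL/dx = y * (dL/dy - sum_j(dL/dy_j * y_j))
    # i.e. the JVP of softmax, using the saved forward output y
    return y * (grad_out - (grad_out * y).sum(axis=-1, keepdims=True))

# sanity check: compare against a central finite difference
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 5))
g = rng.standard_normal((3, 5))
y = softmax(x)
analytic = softmax_backward(g, y)

eps = 1e-6
numeric = np.zeros_like(x)
for idx in np.ndindex(*x.shape):
    xp, xm = x.copy(), x.copy()
    xp[idx] += eps
    xm[idx] -= eps
    # loss is sum(g * softmax(x)), so its derivative w.r.t. x[idx]
    numeric[idx] = ((softmax(xp) - softmax(xm)) * g).sum() / (2 * eps)

assert np.allclose(analytic, numeric, atol=1e-5)
```

The finite-difference check is a cheap way to confirm the formula before wiring it into the graph, which is useful when several ops are being debugged at once.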
Progress:
- Softmax now correctly computes gradients
- LayerNorm parameters initialized with requires_grad
- Still debugging: gradients not yet reaching the Q/K/V projections, the per-block LayerNorms, or the first MLP layer
Current: 9/21 parameters receive gradients (was 0/21)
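A coverage count like the 9/21 above can be automated with a small diagnostic. This is a hypothetical sketch: the `Param` class and its `.grad`/`.requires_grad` attributes stand in for whatever the custom framework exposes, and the parameter names are made up:

```python
class Param:
    """Stand-in for a framework parameter with grad tracking (illustrative)."""
    def __init__(self, name, requires_grad=True, grad=None):
        self.name = name
        self.requires_grad = requires_grad
        self.grad = grad

def grad_coverage(params):
    """Return (params_with_grad, trainable_total); print the ones still missing."""
    trainable = [p for p in params if p.requires_grad]
    missing = [p.name for p in trainable if p.grad is None]
    for name in missing:
        print("no grad:", name)
    return len(trainable) - len(missing), len(trainable)

# toy example: one parameter received a gradient, two did not
params = [
    Param("ln1.gamma", grad=[0.1]),
    Param("attn.q_proj.W"),
    Param("mlp.fc1.W"),
]
got, total = grad_coverage(params)  # → (1, 3), printing the two missing names
```

Running this after a single backward pass points directly at the ops whose backward functions are still unhooked, rather than hand-counting per run.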