mirror of
https://github.com/MLSysBook/TinyTorch.git
synced 2026-04-28 17:07:30 -05:00
TransposeBackward:
- New backward function for the transpose operation
- Patch Tensor.transpose() to track gradients
- Critical for attention (Q @ K.T) gradient flow

MatmulBackward batched fix:
- Change np.dot to np.matmul for batched 3D+ tensors
- Use np.swapaxes instead of .T for a proper batched transpose
- Fixes gradient shapes in attention mechanisms

Tests added:
- tests/05_autograd/test_batched_matmul_backward.py (3 tests)
- Updated tests/regression/test_gradient_flow_fixes.py (9 tests total)

All gradient flow issues for transformer training are now resolved!
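The two gradient rules described above can be sketched as follows. This is a minimal illustration, not TinyTorch's actual implementation: the function names `transpose_backward` and `matmul_backward` are hypothetical, but the math matches the commit — the gradient of a transpose is the same transpose applied to the upstream gradient, and for C = A @ B the inputs receive dA = dC @ Bᵀ and dB = Aᵀ @ dC, using `np.matmul` and `np.swapaxes(-1, -2)` so batch dimensions are handled correctly (unlike `.T`, which reverses *all* axes of a 3D array).

```python
import numpy as np

def transpose_backward(grad_out, ax1=-2, ax2=-1):
    # Gradient of a transpose is the same transpose applied to the
    # upstream gradient. (Hypothetical helper; name is an assumption.)
    return np.swapaxes(grad_out, ax1, ax2)

def matmul_backward(grad_out, a, b):
    # For C = A @ B (batched or not):
    #   dA = dC @ B^T,   dB = A^T @ dC
    # np.matmul broadcasts over leading batch dims, and swapaxes(-1, -2)
    # transposes only the last two axes, which .T would get wrong for 3D+.
    grad_a = np.matmul(grad_out, np.swapaxes(b, -1, -2))
    grad_b = np.matmul(np.swapaxes(a, -1, -2), grad_out)
    return grad_a, grad_b

# Attention-style example: batched scores = Q @ K^T
batch, seq, d = 2, 4, 8
Q = np.random.randn(batch, seq, d)
K = np.random.randn(batch, seq, d)
KT = np.swapaxes(K, -1, -2)                 # (2, 8, 4)
scores = np.matmul(Q, KT)                   # (2, 4, 4)

dscores = np.ones_like(scores)              # upstream gradient
dQ, dKT = matmul_backward(dscores, Q, KT)   # grads w.r.t. Q and K^T
dK = transpose_backward(dKT)                # route gradient back through K.T

# Gradients must match the shapes of the tensors they flow into.
assert dQ.shape == Q.shape
assert dK.shape == K.shape
```

With `np.dot` and `.T`, the 3D case above would produce wrong shapes (`.T` on a `(2, 4, 8)` array gives `(8, 4, 2)`), which is exactly the failure mode this commit fixes.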