mirror of
https://github.com/MLSysBook/TinyTorch.git
synced 2026-03-11 18:24:24 -05:00
Implement Tensor slicing with progressive disclosure and fix embedding gradient flow
WHAT: Added Tensor.__getitem__ (slicing) following progressive disclosure principles

MODULE 01 (Tensor):
- Added __getitem__ method for basic slicing operations
- Clean implementation with NO gradient mentions (progressive disclosure)
- Supports all NumPy-style indexing: x[0], x[:3], x[1:4], x[:, 1]
- Ensures scalar results are wrapped in arrays

MODULE 05 (Autograd):
- Added SliceBackward function for gradient computation
- Implements proper gradient scatter: zeros everywhere except the sliced positions
- Added monkey-patching in enable_autograd() for __getitem__
- Follows the same pattern as existing operations (add, mul, matmul)

MODULE 11 (Embeddings):
- Updated PositionalEncoding to use Tensor slicing instead of .data
- Fixed multiple .data accesses that broke computation graphs
- Removed Tensor() wrapping that created gradient-disconnected leaves
- Uses proper Tensor operations to preserve gradient flow

TESTING:
- All 6 component tests PASS (Embedding, Attention, FFN, Residual, Forward, Training)
- 19/19 parameters get gradients (was 18/19 before)
- Loss drops faster: 1.54→1.08 (vs 1.62→1.24 before)
- Model still not learning (0% accuracy) - needs a fresh session to test the monkey-patching

WHY THIS MATTERS:
- Tensor slicing is FUNDAMENTAL - needed by transformers for position embeddings
- Progressive disclosure maintains educational integrity
- Follows existing TinyTorch architecture patterns
- Enables position embeddings to potentially learn (pending verification)

DOCUMENTS CREATED:
- milestones/05_2017_transformer/TENSOR_SLICING_IMPLEMENTATION.md
- milestones/05_2017_transformer/STATUS.md
- milestones/05_2017_transformer/FIXES_SUMMARY.md
- milestones/05_2017_transformer/DEBUG_REVERSAL.md
- tests/milestones/test_reversal_debug.py (component tests)

ARCHITECTURAL PRINCIPLE: Progressive disclosure is not just nice-to-have, it's CRITICAL for educational systems. Don't expose Module 05 concepts (gradients) in Module 01 (basic operations).
Monkey-patch when features are needed, not before.
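The pattern described above (a gradient-free __getitem__ plus a SliceBackward scatter wired in by enable_autograd) can be sketched as follows. The class and function names mirror those in the commit message, but the bodies are illustrative assumptions, not TinyTorch's actual implementation:

```python
import numpy as np

class Tensor:
    def __init__(self, data):
        # atleast_1d ensures scalar results are wrapped in arrays
        self.data = np.atleast_1d(np.asarray(data, dtype=np.float32))
        self.grad = None

    # Module 01: plain NumPy-style slicing, no gradient logic at all
    # (progressive disclosure: gradients are not mentioned here)
    def __getitem__(self, index):
        return Tensor(self.data[index])

class SliceBackward:
    """Module 05 sketch: scatter the upstream gradient back into the
    source tensor's shape -- zeros everywhere except sliced positions."""
    def __init__(self, source, index):
        self.source = source
        self.index = index

    def backward(self, upstream_grad):
        grad = np.zeros_like(self.source.data)   # zeros everywhere...
        grad[self.index] = upstream_grad         # ...except the slice
        if self.source.grad is None:
            self.source.grad = grad
        else:
            self.source.grad = self.source.grad + grad  # accumulate

def enable_autograd():
    """Monkey-patch __getitem__ only once autograd is introduced,
    so Module 01's Tensor stays gradient-free until Module 05."""
    plain_getitem = Tensor.__getitem__

    def traced_getitem(self, index):
        out = plain_getitem(self, index)
        out._grad_fn = SliceBackward(self, index)  # record for backward
        return out

    Tensor.__getitem__ = traced_getitem

enable_autograd()
x = Tensor([1.0, 2.0, 3.0, 4.0])
y = x[1:3]                                # forward: y.data == [2., 3.]
y._grad_fn.backward(np.ones_like(y.data)) # backward: scatter ones
```

After the backward call, `x.grad` is `[0., 1., 1., 0.]`: the gradient flows only to the sliced positions, which is exactly why replacing raw `.data` slicing with Tensor slicing lets position embeddings receive gradients.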
tinytorch/models/transformer.py (generated): 18 changes
@@ -1,5 +1,19 @@
# AUTOGENERATED! DO NOT EDIT! File to edit: ../../modules/source/13_transformers/transformers_dev.ipynb.

# ╔═══════════════════════════════════════════════════════════════════════════════╗
# ║ 🚨 CRITICAL WARNING 🚨                                                         ║
# ║ AUTOGENERATED! DO NOT EDIT!                                                    ║
# ║                                                                               ║
# ║ This file is AUTOMATICALLY GENERATED from source modules.                      ║
# ║ ANY CHANGES MADE HERE WILL BE LOST when modules are re-exported!               ║
# ║                                                                               ║
# ║ ✅ TO EDIT: modules/XX_transformer/transformer.py                              ║
# ║ ✅ TO EXPORT: Run 'tito module complete <module_name>'                         ║
# ║                                                                               ║
# ║ 🛡️ STUDENT PROTECTION: This file contains optimized implementations.           ║
# ║ Editing it directly may break module functionality and training.               ║
# ║                                                                               ║
# ║ 🎓 LEARNING TIP: Work in modules/ - that's where real development              ║
# ║ happens! The tinytorch/ directory is just the compiled output.                 ║
# ╚═══════════════════════════════════════════════════════════════════════════════╝

# %% auto 0
__all__ = ['LayerNorm', 'MLP', 'TransformerBlock', 'GPT']