Mirror of https://github.com/MLSysBook/TinyTorch.git (synced 2026-05-01 12:53:55 -05:00)
This commit implements the pedagogically optimal "inevitable discovery" module progression based on expert validation and educational design principles.

## Module Reordering Summary

**Previous Order (Problems)**:
- 05_losses → 06_autograd → 07_dataloader → 08_optimizers → 09_spatial → 10_training
- Issues: Autograd before optimizers, DataLoader before training, scattered dependencies

**New Order (Beautiful Progression)**:
- 05_losses → 06_optimizers → 07_autograd → 08_training → 09_spatial → 10_dataloader
- Benefits: Each module creates an inevitable need for the next

## Pedagogical Flow Achieved

- **05_losses** → "Need systematic weight updates" → **06_optimizers**
- **06_optimizers** → "Need automatic gradients" → **07_autograd**
- **07_autograd** → "Need systematic training" → **08_training**
- **08_training** → "MLPs hit limits on images" → **09_spatial**
- **09_spatial** → "Training is too slow" → **10_dataloader**

## Technical Changes

### Module Directory Renaming
- `06_autograd` → `07_autograd`
- `07_dataloader` → `10_dataloader`
- `08_optimizers` → `06_optimizers`
- `10_training` → `08_training`
- `09_spatial` → `09_spatial` (no change)

### System Integration Updates
- **MODULE_TO_CHECKPOINT mapping**: Updated in tito/commands/export.py
- **Test directories**: Renamed module_XX directories to match the new numbers
- **Documentation**: Updated all references in MD files and agent configurations
- **CLI integration**: Updated next-steps suggestions for the proper flow

### Agent Configuration Updates
- **Quality Assurance**: Updated module audit status with the new numbers
- **Module Developer**: Updated work tracking with the new sequence
- **Documentation**: Updated MASTER_PLAN_OF_RECORD.md with the beautiful progression

## Educational Benefits

1. **Inevitable Discovery**: Each module naturally leads to the next
2. **Cognitive Load**: Concepts are introduced exactly when needed
3. **Motivation**: Students understand WHY each tool is necessary
4. **Synthesis**: Everything flows toward complete ML systems understanding
5. **Professional Alignment**: Matches real ML engineering workflows

## Quality Assurance

- ✅ All CLI commands still function
- ✅ Checkpoint system mappings updated
- ✅ Documentation consistency maintained
- ✅ Test directory structure aligned
- ✅ Agent configurations synchronized

**Impact**: This reordering transforms TinyTorch from a collection of modules into a coherent educational journey where each step naturally motivates the next, creating optimal conditions for deep understanding of ML systems.
# TinyTorch Module Reordering Plan
## Current vs New Beautiful Order

### **Current Order (Phase 2 Issues):**

```
01_setup
02_tensor
03_activations
04_layers
05_losses
06_autograd     ← Problem: Autograd before optimizers
07_dataloader   ← Problem: DataLoader before training
08_optimizers   ← Problem: Optimizers after autograd
09_spatial      ← Problem: Spatial before training
10_training     ← Problem: Training comes last
11_tokenization
12_embeddings
13_attention
14_transformers
15_acceleration
16_caching
17_precision
18_compression
19_benchmarking
20_capstone
```

### **New Beautiful Order:**

```
01_setup
02_tensor
03_activations
04_layers
05_losses
06_optimizers   ← Fixed: Optimizers after losses (systematic weight updates)
07_autograd     ← Fixed: Autograd after optimizers (automatic gradients)
08_training     ← Fixed: Training as bridge (systematic procedures)
09_spatial      ← Fixed: Spatial after training (architectural improvements)
10_dataloader   ← Fixed: DataLoader last (efficiency solution)
11_tokenization
12_embeddings
13_attention
14_transformers
15_acceleration
16_caching
17_precision
18_compression
19_benchmarking
20_capstone
```

## Specific Changes Needed:

### **Module Renumbering:**

- `06_autograd` → `07_autograd`
- `07_dataloader` → `10_dataloader`
- `08_optimizers` → `06_optimizers`
- `09_spatial` → `09_spatial` (stays)
- `10_training` → `08_training`
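The renumbering above can be sketched as a small script (`renumber_modules` is a hypothetical helper, assuming all modules live under one directory). Because the full directory names never collide — each target name is unique and does not yet exist — no temporary two-phase renames are needed, even though the numeric prefixes swap around:

```python
from pathlib import Path

# Renames from the plan. Targets are unique full names, so order
# does not matter and no intermediate renames are required.
RENAMES = {
    "06_autograd": "07_autograd",
    "07_dataloader": "10_dataloader",
    "08_optimizers": "06_optimizers",
    "10_training": "08_training",
}

def renumber_modules(root: Path) -> list[tuple[str, str]]:
    """Apply the renames under `root`, returning (old, new) pairs moved."""
    moved = []
    for old, new in RENAMES.items():
        src, dst = root / old, root / new
        if src.is_dir() and not dst.exists():
            src.rename(dst)
            moved.append((old, new))
    return moved
```

Skipping when the destination already exists makes the helper safe to re-run after a partial move.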
### **Dependencies to Update:**

- **Training module (new 08)**: Remove DataLoader imports; use single-sample iteration
- **Spatial module (new 09)**: Can now use the training procedures from module 08
- **DataLoader module (new 10)**: Show the speedup over the Training module's single-sample approach
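The single-sample iteration the Training module falls back on can be illustrated with a plain-Python stand-in (a toy linear fit with hand-derived gradients; the real module would use TinyTorch tensors and, later, autograd):

```python
# Single-sample training loop: one example at a time,
# manual gradients — no DataLoader, no batching, no autograd.
def train_single_sample(data, epochs=100, lr=0.1):
    w, b = 0.0, 0.0                  # scalar linear model: pred = w*x + b
    for _ in range(epochs):
        for x, y in data:            # iterate samples directly
            err = (w * x + b) - y    # dL/dpred for L = 0.5 * err**2
            w -= lr * err * x        # hand-derived gradient updates
            b -= lr * err
    return w, b

# Toy dataset on the line y = 2x + 1; SGD should recover w ≈ 2, b ≈ 1.
data = [(x, 2 * x + 1) for x in (0.0, 0.5, 1.0, 1.5, 2.0)]
w, b = train_single_sample(data)
```

The later DataLoader module would replace the inner `for x, y in data` loop with batched, shuffled iteration and measure the speedup.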

### **Step-by-Step Reordering Process:**

1. Create a temporary backup
2. Rename modules to their new numbers
3. Update internal imports and references
4. Update `module.yaml` files with the new numbers
5. Update all documentation and examples
6. Update the master roadmap and tutorial plans
7. Test integration and exports
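Steps 3 and 5 amount to hunting down stale references to the old directory names. A minimal sketch of such a check (the helper name and file extensions are illustrative):

```python
import re
from pathlib import Path

# Old names that should no longer appear once imports and docs are updated.
STALE_NAMES = ["06_autograd", "07_dataloader", "08_optimizers", "10_training"]

def find_stale_references(root, suffixes=(".py", ".md", ".yaml")):
    """Return {relative path: sorted stale names found} for files under root."""
    pattern = re.compile("|".join(map(re.escape, STALE_NAMES)))
    hits = {}
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in suffixes:
            found = sorted(set(pattern.findall(path.read_text(errors="ignore"))))
            if found:
                hits[str(path.relative_to(root))] = found
    return hits
```

An empty result from `find_stale_references` is a cheap sanity check before running the full integration tests in step 7.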
## Files That Need Updates:

### **Module Files:**

- Module directories need renaming
- `module.yaml` files need number updates
- README files need prerequisite updates
- Python files need import path updates

### **Documentation Files:**

- `COMPLETE_MODULE_ROADMAP.md`
- `tutorial-design-rationale.md`
- All example files referencing modules
- Checkpoint system mappings
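The checkpoint system mapping might be updated along these lines — a hypothetical sketch of `MODULE_TO_CHECKPOINT` in `tito/commands/export.py`; the checkpoint values shown are illustrative, not the project's actual ones:

```python
# Hypothetical: module directories keyed to checkpoint numbers after
# the reorder. Only the 06-10 entries change; their checkpoint slots
# now follow the new "inevitable discovery" sequence.
MODULE_TO_CHECKPOINT = {
    "01_setup": 1,
    "02_tensor": 2,
    "03_activations": 3,
    "04_layers": 4,
    "05_losses": 5,
    "06_optimizers": 6,   # was 08_optimizers
    "07_autograd": 7,     # was 06_autograd
    "08_training": 8,     # was 10_training
    "09_spatial": 9,      # unchanged
    "10_dataloader": 10,  # was 07_dataloader
}
```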

### **Integration Files:**

- Test files with module dependencies
- Export/import configurations
- CLI command mappings

This reordering will create the beautiful "inevitable discovery" progression we designed!