TinyTorch/verify_educational_loops.py
Vijay Janapa Reddi 2f23f757e7 MAJOR: Implement beautiful module progression through strategic reordering
This commit implements the pedagogically optimal "inevitable discovery" module progression based on expert validation and educational design principles.

## Module Reordering Summary

**Previous Order (Problems)**:
- 05_losses → 06_autograd → 07_dataloader → 08_optimizers → 09_spatial → 10_training
- Issues: autograd was introduced before optimizers could motivate it, DataLoader was taught before training needed it, and dependencies were scattered across modules

**New Order (Beautiful Progression)**:
- 05_losses → 06_optimizers → 07_autograd → 08_training → 09_spatial → 10_dataloader
- Benefits: Each module creates inevitable need for the next

## Pedagogical Flow Achieved

**05_losses** → "Need systematic weight updates" → **06_optimizers**
**06_optimizers** → "Need automatic gradients" → **07_autograd**
**07_autograd** → "Need systematic training" → **08_training**
**08_training** → "MLPs hit limits on images" → **09_spatial**
**09_spatial** → "Training is too slow" → **10_dataloader**

## Technical Changes

### Module Directory Renaming
- `06_autograd` → `07_autograd`
- `07_dataloader` → `10_dataloader`
- `08_optimizers` → `06_optimizers`
- `10_training` → `08_training`
- `09_spatial` → `09_spatial` (no change)
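
For reference, a minimal sketch of how these renames could be replayed with `git mv`. This is hypothetical: the `modules/` parent directory and the use of a script are assumptions, and the actual commands behind this commit are not recorded here.

```python
# Hypothetical sketch: replaying the directory renames with `git mv`.
# Old/new pairs come from the list above; the "modules/" prefix is assumed.
import subprocess

RENAMES = [
    ("modules/06_autograd", "modules/07_autograd"),
    ("modules/07_dataloader", "modules/10_dataloader"),
    ("modules/08_optimizers", "modules/06_optimizers"),
    ("modules/10_training", "modules/08_training"),
]

for old, new in RENAMES:
    # Full names never collide (each keeps its topic suffix),
    # so the renames can run in any order.
    subprocess.run(["git", "mv", old, new], check=True)
```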

### System Integration Updates
- **MODULE_TO_CHECKPOINT mapping**: Updated in tito/commands/export.py (see the sketch after this list)
- **Test directories**: Renamed module_XX directories to match new numbers
- **Documentation**: Updated all references in MD files and agent configurations
- **CLI integration**: Updated next-steps suggestions for proper flow
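
To make the first item concrete, here is an illustrative sketch of the reordered mapping in `tito/commands/export.py`. The module names come from this commit; the checkpoint identifiers are invented placeholders, not the repository's actual values:

```python
# Illustrative sketch only: the checkpoint names are placeholders and will
# differ from the real MODULE_TO_CHECKPOINT in tito/commands/export.py.
MODULE_TO_CHECKPOINT = {
    "05_losses": "checkpoint_05",
    "06_optimizers": "checkpoint_06",
    "07_autograd": "checkpoint_07",
    "08_training": "checkpoint_08",
    "09_spatial": "checkpoint_09",
    "10_dataloader": "checkpoint_10",
}
```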

### Agent Configuration Updates
- **Quality Assurance**: Updated module audit status with new numbers
- **Module Developer**: Updated work tracking with new sequence
- **Documentation**: Updated MASTER_PLAN_OF_RECORD.md with beautiful progression

## Educational Benefits

1. **Inevitable Discovery**: Each module naturally leads to the next
2. **Cognitive Load**: Concepts introduced exactly when needed
3. **Motivation**: Students understand WHY each tool is necessary
4. **Synthesis**: Everything flows toward complete ML systems understanding
5. **Professional Alignment**: Matches real ML engineering workflows

## Quality Assurance

- ✅ All CLI commands still function
- ✅ Checkpoint system mappings updated
- ✅ Documentation consistency maintained
- ✅ Test directory structure aligned (see the sanity-check sketch below)
- ✅ Agent configurations synchronized
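
A minimal sanity-check sketch for the realigned layout. The `modules/` and `tests/module_XX` paths are assumptions for illustration, not taken from the repository:

```python
# Hypothetical sanity check that the directory layout matches the new order.
# The "modules/" and "tests/module_XX" paths are assumed, not repo facts.
from pathlib import Path

EXPECTED_ORDER = [
    "05_losses", "06_optimizers", "07_autograd",
    "08_training", "09_spatial", "10_dataloader",
]

for name in EXPECTED_ORDER:
    number = name.split("_")[0]
    assert Path("modules", name).is_dir(), f"missing module dir: {name}"
    assert Path("tests", f"module_{number}").is_dir(), f"missing test dir for {name}"
print("Module and test directories match the new progression ✅")
```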

**Impact**: This reordering transforms TinyTorch from a collection of modules into a coherent educational journey where each step naturally motivates the next, creating the conditions for a deep understanding of ML systems.
2025-09-24 15:56:47 -04:00

#!/usr/bin/env python3
"""
Verification script for educational matrix multiplication loops.

This script demonstrates that TinyTorch now uses educational triple-nested loops
for matrix multiplication, setting up the optimization progression for Module 15.
"""
import time

import numpy as np

from tinytorch.core.tensor import Tensor
from tinytorch.core.layers import Linear, matmul


def demonstrate_educational_loops():
    """Demonstrate the educational loop implementation."""
    print("🔥 TinyTorch Educational Matrix Multiplication Demo")
    print("=" * 60)

    print("\n📚 Current Implementation: Triple-Nested Loops (Educational)")
    print("   • Clear understanding of every operation")
    print("   • Shows the fundamental computation pattern")
    print("   • Intentionally simple for learning")

    # Test basic functionality
    print("\n1. Basic Matrix Multiplication Test:")
    a = Tensor([[1, 2], [3, 4]])
    b = Tensor([[5, 6], [7, 8]])
    result = a @ b
    print(f"   {a.data.tolist()} @ {b.data.tolist()}")
    print(f"   = {result.data.tolist()}")
    print("   Expected: [[19, 22], [43, 50]] ✅")

    # Test neural network layer
    print("\n2. Neural Network Layer Test:")
    layer = Linear(3, 2)
    input_data = Tensor([[1.0, 2.0, 3.0]])
    output = layer(input_data)
    print(f"   Input shape: {input_data.shape}")
    print(f"   Output shape: {output.shape}")
    print("   Uses educational matmul internally ✅")

    # Show performance characteristics (intentionally slow)
    print("\n3. Performance Characteristics (Intentionally Educational):")
    sizes = [10, 50, 100]
    for size in sizes:
        a = Tensor(np.random.randn(size, size))
        b = Tensor(np.random.randn(size, size))
        start_time = time.time()
        result = a @ b
        elapsed = time.time() - start_time
        print(f"   {size}×{size} matrix multiplication: {elapsed:.4f}s")

    print("\n🎯 Module 15 Optimization Progression Preview:")
    print("   Step 1 (current): Educational loops - slow but clear")
    print("   Step 2 (future): Loop blocking for cache efficiency")
    print("   Step 3 (future): Vectorized operations with NumPy")
    print("   Step 4 (future): GPU acceleration and BLAS libraries")

    print("\n✅ Educational matrix multiplication ready!")
    print("   Students will understand optimization progression by building it!")


def verify_correctness():
    """Verify that educational loops produce correct results."""
    print("\n🔬 Correctness Verification:")
    test_cases = [
        # Simple 2x2
        ([[1, 2], [3, 4]], [[5, 6], [7, 8]], [[19, 22], [43, 50]]),
        # Non-square
        ([[1, 2, 3], [4, 5, 6]], [[7, 8], [9, 10], [11, 12]], [[58, 64], [139, 154]]),
        # Vector multiplication
        ([[1, 2, 3]], [[4], [5], [6]], [[32]]),
    ]
    for i, (a_data, b_data, expected) in enumerate(test_cases):
        a = Tensor(a_data)
        b = Tensor(b_data)
        result = a @ b
        assert np.allclose(result.data, expected), f"Test {i+1} failed"
        print(f"   Test {i+1}: {a.shape} @ {b.shape} → {result.shape}")
    print("   All correctness tests passed!")


if __name__ == "__main__":
    demonstrate_educational_loops()
    verify_correctness()
    print("\n🎉 Educational matrix multiplication setup complete!")
    print("   Ready for Module 15 optimization journey!")
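
For readers who want to see the pattern the script keeps referring to, here is a minimal sketch of a triple-nested-loop matrix multiply. It is a standalone illustration of the technique; the actual implementation inside `tinytorch.core.layers` may differ in detail:

```python
import numpy as np

def educational_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Triple-nested-loop matmul: clear and correct, intentionally slow."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2, "inner dimensions must match"
    out = np.zeros((n, m))
    for i in range(n):          # each output row
        for j in range(m):      # each output column
            for p in range(k):  # accumulate along the shared dimension
                out[i, j] += a[i, p] * b[p, j]
    return out

# Quick check against NumPy's built-in matmul
x = np.random.randn(4, 3)
y = np.random.randn(3, 5)
assert np.allclose(educational_matmul(x, y), x @ y)
```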