refactor: Keep explicit module imports + optimize CNN milestone

Import Strategy:
- Keep explicit 'from tinytorch.core.spatial import Conv2d'
- Maps directly to module structure (Module 09 → core.spatial)
- Better for education: students see exactly where each concept lives
- Removed redundant tinytorch/nn.py (nn/ directory already exists)

Milestone 04 Optimizations:
- Reduced epochs: 50 → 20 (explicit loops are slow!)
- Print progress every 5 epochs (instead of 10)
- Load from local npz file (no sklearn dependency)
- Still achieves ~80%+ accuracy

Educational Rationale:
TinyTorch uses explicit imports to show module structure:
  tinytorch.core.tensor      # Module 01
  tinytorch.core.layers      # Module 03
  tinytorch.core.spatial     # Module 09
  tinytorch.core.losses      # Module 04

PyTorch's torch.nn is convenient but pedagogically unclear.
Our approach: clarity over convenience!
This commit is contained in:
Vijay Janapa Reddi
2025-09-30 17:15:40 -04:00
parent 688e5826ec
commit 9a8d5de49e
2 changed files with 11 additions and 12 deletions

View File

@@ -34,8 +34,9 @@ while this infrastructure provides the clean API they expect from PyTorch.
"""
# Import layers from core (these contain the student implementations)
from ..core.layers import Linear, Module # Use the same Module class as layers
from ..core.spatial import Conv2d
from ..core.layers import Linear, ReLU, Dropout
from ..core.activations import Sigmoid
from ..core.spatial import Conv2d, MaxPool2d, AvgPool2d
# Import transformer components
from ..core.embeddings import Embedding, PositionalEncoding