mirror of
https://github.com/MLSysBook/TinyTorch.git
synced 2026-03-11 20:03:49 -05:00
refactor: Keep explicit module imports + optimize CNN milestone
Import Strategy:
- Keep explicit 'from tinytorch.core.spatial import Conv2d'
- Maps directly to module structure (Module 09 → core.spatial)
- Better for education: students see exactly where each concept lives
- Removed redundant tinytorch/nn.py (nn/ directory already exists)

Milestone 04 Optimizations:
- Reduced epochs: 50 → 20 (explicit loops are slow!)
- Print progress every 5 epochs (instead of 10)
- Load from local npz file (no sklearn dependency)
- Still achieves ~80%+ accuracy

Educational Rationale:
TinyTorch uses explicit imports to show module structure:

    tinytorch.core.tensor   # Module 01
    tinytorch.core.layers   # Module 03
    tinytorch.core.spatial  # Module 09
    tinytorch.core.losses   # Module 04

PyTorch's torch.nn is convenient but pedagogically unclear. Our approach: clarity over convenience!
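The milestone's loop structure (20 epochs, progress printed every 5) can be sketched as below. This is a minimal stand-in using NumPy and a synthetic dataset, not the actual TinyTorch milestone code; the model, data, and learning rate are placeholders chosen only to illustrate the epoch and logging cadence described above.

```python
import numpy as np

# Synthetic, linearly separable stand-in for the milestone's local npz data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(np.float64)

# Placeholder model: logistic regression trained with an explicit loop.
w = np.zeros(2)
b = 0.0
lr = 0.5
epochs = 20  # reduced from 50: explicit Python loops are slow

for epoch in range(1, epochs + 1):
    # forward pass
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    # binary cross-entropy gradient
    grad = p - y
    w -= lr * (X.T @ grad) / len(y)
    b -= lr * grad.mean()
    if epoch % 5 == 0:  # print every 5 epochs instead of every 10
        acc = float(((p > 0.5) == y).mean())
        print(f"epoch {epoch:2d}  accuracy {acc:.2f}")
```

On this toy data the loop comfortably clears the ~80% accuracy target well within 20 epochs, which is the point of the optimization: fewer epochs with denser progress logging.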
tinytorch/nn/__init__.py (generated, 5 lines changed)
@@ -34,8 +34,9 @@ while this infrastructure provides the clean API they expect from PyTorch.
"""

# Import layers from core (these contain the student implementations)
from ..core.layers import Linear, Module  # Use the same Module class as layers
from ..core.spatial import Conv2d
from ..core.layers import Linear, ReLU, Dropout
from ..core.activations import Sigmoid
from ..core.spatial import Conv2d, MaxPool2d, AvgPool2d

# Import transformer components
from ..core.embeddings import Embedding, PositionalEncoding