Final stage of TinyTorch API simplification:
- Exported updated tensor module with Parameter function
- Exported updated layers module with Linear class and Module base class
- Fixed nn module to use unified Module class from core.layers
- Complete modern API now working with automatic parameter registration (see the sketch below)
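
A minimal sketch of how such automatic registration can work (illustrative only; the names and details are assumptions, not the actual core.layers code):

class Parameter:
    # Minimal stand-in for tinytorch's Parameter: wraps data marked as trainable.
    def __init__(self, data):
        self.data = data

class Module:
    def __init__(self):
        # Use object.__setattr__ so these dicts exist before our hook runs.
        object.__setattr__(self, "_parameters", {})
        object.__setattr__(self, "_modules", {})

    def __setattr__(self, name, value):
        # Record Parameters and child Modules as they are assigned, so
        # parameters() can find them without manual bookkeeping.
        if isinstance(value, Parameter):
            self._parameters[name] = value
        elif isinstance(value, Module):
            self._modules[name] = value
        object.__setattr__(self, name, value)

    def parameters(self):
        # Yield own parameters, then recurse into child modules.
        yield from self._parameters.values()
        for child in self._modules.values():
            yield from child.parameters()

    def __call__(self, *args, **kwargs):
        return self.forward(*args, **kwargs)

With this in place, assigning self.weight = Parameter(...) inside a layer's __init__ is enough for model.parameters() to find it.
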
✅ All 7 stages completed successfully:
1. Unified Tensor with requires_grad support
2. Module base class for automatic parameter registration
3. Dense renamed to Linear for PyTorch compatibility
4. Spatial helpers (flatten, max_pool2d) and Conv2d rename (helpers sketched after this list)
5. Package organization with nn and optim modules
6. Modern API examples showing 50-70% code reduction
7. Complete export with working PyTorch-compatible interface
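
The stage-4 spatial helpers are easiest to see in NumPy terms. A hedged sketch (the real F.flatten and F.max_pool2d operate on TinyTorch tensors; NCHW layout and non-overlapping windows are assumed here):

import numpy as np

def flatten(x, start_dim=1):
    # Collapse every dimension from start_dim onward into one,
    # e.g. (N, C, H, W) -> (N, C*H*W) with the default start_dim=1.
    return x.reshape(x.shape[:start_dim] + (-1,))

def max_pool2d(x, kernel_size):
    # Non-overlapping max pooling over an (N, C, H, W) array; assumes
    # H and W are divisible by kernel_size (a simplification).
    n, c, h, w = x.shape
    k = kernel_size
    return x.reshape(n, c, h // k, k, w // k, k).max(axis=(3, 5))

x = np.arange(16.0).reshape(1, 1, 4, 4)
assert max_pool2d(x, 2).shape == (1, 1, 2, 2)
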
🎉 Students can now write PyTorch-like code while still implementing
all core algorithms themselves (Conv2d, Linear, ReLU, Adam, autograd).
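
For example, a small CNN plus optimizer can now be expressed in a few lines (a sketch: the layer hyperparameters are made up, and the Conv2d and Adam signatures are assumed to mirror PyTorch's):

import tinytorch.nn as nn
import tinytorch.nn.functional as F
import tinytorch.optim as optim

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3)  # registered automatically
        self.fc = nn.Linear(8 * 13 * 13, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv(x)), 2)   # 28x28 -> 26x26 -> 13x13
        x = F.flatten(x)
        return self.fc(x)

model = SmallCNN()
optimizer = optim.Adam(model.parameters(), lr=1e-3)  # lr value illustrative
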
The API achieves the goal: clean, professional interfaces that enhance
learning by reducing the cognitive load of framework mechanics.
Stage 5 of TinyTorch API simplification:
- Created tinytorch.nn package with PyTorch-compatible interface
- Added Module base class in nn.modules for automatic parameter registration
- Added functional module with relu, flatten, max_pool2d operations
- Created tinytorch.optim package exposing Adam and SGD optimizers (usage sketched after the snippet below)
- Updated main __init__.py to export nn and optim modules
- Linear and Conv2d now available through clean nn interface
Students can now write PyTorch-like code:
import tinytorch.nn as nn
import tinytorch.nn.functional as F

model = nn.Linear(784, 10)
x = F.relu(model(x))  # x: an input Tensor of shape (batch, 784)
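
The matching optimizer side of the snippet (hedged: the constructor and the step()/zero_grad() names are assumed to follow PyTorch conventions):

import tinytorch.optim as optim

optimizer = optim.SGD(model.parameters(), lr=0.01)
# after computing a loss and calling backward():
# optimizer.step()
# optimizer.zero_grad()
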