feat: Create clean modular architecture with activations → layers separation

🏗️ Major architectural improvement implementing clean separation of concerns:

✨ NEW: Activations Module
- Complete activations module with ReLU, Sigmoid, Tanh implementations
- Educational NBDev structure with student TODOs + instructor solutions
- Comprehensive testing suite (24 tests) with mathematical correctness validation
- Visual learning features with matplotlib plotting (disabled during testing)
- Clean export to tinytorch.core.activations
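The ReLU, Sigmoid, and Tanh classes exported to `tinytorch.core.activations` can be sketched roughly as below; this is an illustrative NumPy-based sketch, not the actual tinytorch implementation, whose interface details may differ.

```python
# Hypothetical sketch of the three activation classes; the real
# tinytorch.core.activations module may use a different interface.
import numpy as np

class ReLU:
    """Rectified Linear Unit: max(0, x), applied element-wise."""
    def __call__(self, x):
        return np.maximum(0, x)

class Sigmoid:
    """Logistic function: 1 / (1 + exp(-x)), squashes inputs to (0, 1)."""
    def __call__(self, x):
        return 1.0 / (1.0 + np.exp(-x))

class Tanh:
    """Hyperbolic tangent, squashes inputs to (-1, 1)."""
    def __call__(self, x):
        return np.tanh(x)
```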

🔧 REFACTOR: Layers Module
- Removed duplicate activation function implementations
- Clean import from activations module: 'from tinytorch.core.activations import ReLU, Sigmoid, Tanh'
- Updated documentation to reflect modular architecture
- Preserved all existing functionality while improving code organization
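The refactor means a layer composes a shared activation object rather than redefining the math. A minimal sketch of the pattern, assuming a `Dense` class (illustrative only, not the actual tinytorch API); in the real repo the activation would come from `from tinytorch.core.activations import ReLU`, so a local stand-in is defined here to keep the sketch self-contained:

```python
# Hypothetical sketch: a layer delegating to an imported activation.
import numpy as np

class ReLU:  # stand-in for tinytorch.core.activations.ReLU
    def __call__(self, x):
        return np.maximum(0, x)

class Dense:
    """Fully connected layer that reuses a shared activation object."""
    def __init__(self, in_features, out_features, activation=None):
        rng = np.random.default_rng(0)
        self.w = rng.normal(scale=0.01, size=(in_features, out_features))
        self.b = np.zeros(out_features)
        self.activation = activation

    def __call__(self, x):
        z = x @ self.w + self.b  # affine transform
        return self.activation(z) if self.activation is not None else z
```

Keeping one activation implementation and passing it in gives the single source of truth described below in the architecture section.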

🧪 TESTING: Comprehensive Test Coverage
- All 24 activations tests passing 
- All 17 layers tests passing 
- Integration tests verify clean architecture works end-to-end
- CLI testing with 'tito test --module' works for both modules
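The mathematical-correctness checks are along these lines; the test name, tolerances, and the standalone `sigmoid` helper here are assumptions for illustration, not the suite's actual contents:

```python
# Illustrative sketch of a mathematical-correctness test for sigmoid.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def test_sigmoid_properties():
    x = np.linspace(-10, 10, 101)
    y = sigmoid(x)
    assert np.all((y > 0) & (y < 1))        # output stays in (0, 1)
    assert abs(sigmoid(0.0) - 0.5) < 1e-12  # sigmoid(0) = 1/2
    # point symmetry: sigmoid(-x) = 1 - sigmoid(x)
    np.testing.assert_allclose(sigmoid(-x), 1 - y, atol=1e-12)

test_sigmoid_properties()
```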

📦 ARCHITECTURE: Clean Dependency Graph
- activations (math functions) → layers (building blocks) → networks (applications)
- Separation of concerns: pure math vs. neural network components
- Reusable components across future modules
- Single source of truth for activation implementations

🎓 PEDAGOGY: Enhanced Learning Experience
- Week-sized chunks: students master activations, then build layers
- Clear progression from mathematical foundations to applications
- Real-world software architecture patterns
- Modular design principles in practice

This establishes the foundation for scalable, maintainable ML systems education.
This commit is contained in:
Vijay Janapa Reddi
2025-07-10 21:32:25 -04:00
parent 7da85b3572
commit b47c8ef259
10 changed files with 2161 additions and 303 deletions

@@ -343,7 +343,7 @@ def cmd_info(args):
 def cmd_test(args):
     """Run tests for a specific module."""
-    valid_modules = ["setup", "tensor", "layers", "cnn", "data", "training",
+    valid_modules = ["setup", "tensor", "activations", "layers", "cnn", "data", "training",
                     "profiling", "compression", "kernels", "benchmarking", "mlops"]
     if args.all: