# TinyTorch Examples

*Complete applications built with your framework*
These examples demonstrate that the ML framework you built from scratch actually works! Each example is a real application that uses the components you created.
## 📁 Example Structure

Each example folder contains clearly named files:

- `train_*.py` - Training scripts that teach the model
- `test_*.py` - Testing scripts that evaluate performance
- `demo_*.py` - Interactive demonstrations
- `utils.py` - Helper functions specific to that example
- `README.md` - Detailed documentation for students
## 🎯 The Three Capstone Examples
### 1. `xornet/` - Neural Network Fundamentals
Proves: Your neural networks can learn non-linear functions
Files:
- `train_xor_network.py` - Trains a network to solve XOR
- `visualize_decision_boundary.py` - Shows what the network learned
- `README.md` - Explains why XOR is important
What students learn: XOR can't be solved linearly, but neural networks with hidden layers can solve it perfectly.
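To make the "hidden layers fix it" claim concrete, here is a minimal sketch in plain NumPy (the TinyTorch API itself is not shown in this README, so nothing below uses it): a tiny one-hidden-layer network trained by hand-written backpropagation learns all four XOR cases, which no single linear boundary can separate.

```python
import numpy as np

rng = np.random.default_rng(0)

# The XOR dataset: no single straight line separates the 1s from the 0s
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer is enough to bend the decision boundary
W1 = rng.normal(0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1.0, (8, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(10_000):
    h = np.tanh(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # predicted probability of class 1
    # Backprop by hand: binary cross-entropy + sigmoid gives (out - y)
    d_logits = (out - y) / len(X)
    d_h = (d_logits @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d_logits; b2 -= lr * d_logits.sum(0)
    W1 -= lr * X.T @ d_h;      b1 -= lr * d_h.sum(0)

preds = (out > 0.5).astype(float)
acc = (preds == y).mean()
print(preds.ravel(), acc)
```

The hidden-layer sizes, learning rate, and iteration count here are illustrative choices, not values from the example scripts; `train_xor_network.py` does the same job through the framework's own layers.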
### 2. `cifar10/` - Computer Vision
Proves: Your framework can handle real-world image classification
Files:
- `train_image_classifier.py` - Trains a CNN on CIFAR-10 images
- `test_random_baseline.py` - Shows random guessing gets ~10%
- `evaluate_model.py` - Tests your trained model
- `visualize_predictions.py` - Shows what the model sees
- `README.md` - Explains computer vision concepts
What students learn: How convolutions extract features and how real ML systems train on actual data.
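"Convolutions extract features" can be seen in a few lines of plain NumPy (again independent of the TinyTorch API): sliding a small edge-detecting kernel over an image produces a feature map that is zero everywhere except where the edge actually is.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2D convolution (cross-correlation, as in most DL code)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 6x6 image: left half dark (0), right half bright (1)
img = np.zeros((6, 6))
img[:, 3:] = 1.0

# Vertical-edge kernel: responds where brightness changes left-to-right
edge_kernel = np.array([[-1.0, 1.0],
                        [-1.0, 1.0]])

fmap = conv2d(img, edge_kernel)
# The 5x5 feature map is zero everywhere except column 2, where the edge sits
print(fmap)
```

A trained CNN learns kernels like this one from data instead of having them written by hand, and stacks many of them per layer; that is what the convolution layers in `train_image_classifier.py` are doing.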
### 3. `tinygpt/` - Language Models
Proves: Your framework can build transformers and generate text
Files:
- `train_language_model.py` - Trains a GPT on text data
- `generate_text.py` - Interactive text generation
- `test_simple_patterns.py` - Verifies the model can learn
- `tokenizer.py` - Text processing utilities
- `README.md` - Explains language modeling
What students learn: How attention mechanisms enable language understanding and generation.
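The core of that attention mechanism is small enough to sketch in plain NumPy (the matrices below are random stand-ins for the learned query/key/value projections, not anything from the TinyGPT code): each token mixes information from the others, weighted by how well its query matches their keys.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)      # how well each query matches each key
    weights = softmax(scores, axis=-1)  # each row is a distribution over tokens
    return weights @ V, weights

# 3 tokens with 4-dimensional representations
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))

out, w = attention(Q, K, V)
print(w.sum(axis=-1))  # each row of attention weights sums to 1
```

A real transformer adds learned projections, multiple heads, and a causal mask so tokens cannot look ahead, but the weighted mixing shown here is the part that lets the model relate distant words.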
## 🚀 Running the Examples
Each example can be run immediately:
```bash
# XOR - takes seconds, shows 100% accuracy
cd examples/xornet
python train_xor_network.py

# CIFAR-10 - takes minutes, achieves 55%+ accuracy
cd examples/cifar10
python train_image_classifier.py

# TinyGPT - takes minutes, generates text
cd examples/tinygpt
python train_language_model.py
python generate_text.py
```
## 📊 What Success Looks Like
- XORNet: 100% accuracy on XOR problem
- CIFAR-10: 55%+ accuracy (5.5x better than random)
- TinyGPT: Generates coherent character sequences
## 💡 For Students
These examples are the proof that you succeeded. You didn't just learn about neural networks - you built a framework capable of:
- Learning non-linear functions (XORNet)
- Classifying real images (CIFAR-10)
- Generating language (TinyGPT)
This is what ML engineers do in production!