# TinyTorch Examples

**Complete Applications Built with Your Framework**
These examples demonstrate that the ML framework you built from scratch actually works! Each example is a real application that uses the components you created.
## 📁 Example Structure
Each example folder contains clearly named files:
- `train_*.py` - Training scripts that teach the model
- `test_*.py` - Testing scripts that evaluate performance
- `demo_*.py` - Interactive demonstrations
- `utils.py` - Helper functions specific to that example
- `README.md` - Detailed documentation for students
## 🎯 The Three Capstone Examples

### 1. `xornet/` - Neural Network Fundamentals
**Proves:** Your neural networks can learn non-linear functions.

**Files:**

- `train_xor_network.py` - Trains a network to solve XOR
- `visualize_decision_boundary.py` - Shows what the network learned
- `README.md` - Explains why XOR is important
**What students learn:** XOR is not linearly separable, so no single linear layer can solve it, but a network with a hidden layer solves it perfectly.
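The training script learns its weights by gradient descent. To see why the hidden layer is the key ingredient, here is a framework-agnostic NumPy sketch (not the actual TinyTorch code) with hand-picked weights that compute XOR exactly:

```python
import numpy as np

def step(z):
    return (z > 0).astype(float)

# Weights into the two hidden units: both sum the inputs x1 + x2...
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
# ...but different thresholds turn them into OR (bias -0.5) and AND (bias -1.5).
b1 = np.array([-0.5, -1.5])
# Output unit combines them as OR AND NOT(AND), which is exactly XOR.
W2 = np.array([1.0, -1.0])
b2 = -0.5

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
hidden = step(X @ W1 + b1)
output = step(hidden @ W2 + b2)
print(output)  # [0. 1. 1. 0.] - exactly XOR
```

No choice of weights for a single linear layer can reproduce that output, which is the whole point of the example.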
### 2. `cifar10/` - Computer Vision
**Proves:** Your framework can handle real-world image classification.

**Files:**

- `train_image_classifier.py` - Trains a CNN on CIFAR-10 images
- `test_random_baseline.py` - Shows random guessing gets ~10%
- `evaluate_model.py` - Tests your trained model
- `visualize_predictions.py` - Shows what the model sees
- `README.md` - Explains computer vision concepts
**What students learn:** How convolutions extract features and how real ML systems train on actual data.

The example also demonstrates how each architectural choice improves CIFAR-10 accuracy:

- **v1 Basic (2 conv):** ~58-60% - beats the MLP baseline
- **v2 Deeper (4 conv):** ~62-65% - hierarchical features help
- **v3 Wider (more filters):** ~65-68% - richer representations
- **v4 Full (all + dropout):** ~68-70% - regularization prevents overfitting

Each version builds on the previous one, and the progression is visible in what the network learns: v1 edges → v2 shapes → v3 textures → v4 objects. It shows that the `MultiChannelConv2D` implementation you wrote can reach competitive performance when properly architected and trained.
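To make feature extraction concrete, here is a minimal NumPy sketch (independent of your `MultiChannelConv2D` implementation, whose API may differ) that slides a vertical-edge kernel over a toy image:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel and sum elementwise products."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 6x6 "image": dark on the left half, bright on the right half.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# Sobel-style vertical-edge kernel responds where brightness changes left-to-right.
edge_kernel = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=float)

feature_map = conv2d(image, edge_kernel)
print(feature_map)  # strong responses only in the columns around the edge
```

The feature map is near zero over flat regions and large where the edge sits, which is exactly the behavior the CNN's first layer learns on real images.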
### 3. `tinygpt/` - Language Models
**Proves:** Your framework can build transformers and generate text.

**Files:**

- `train_language_model.py` - Trains GPT on text data
- `generate_text.py` - Interactive text generation
- `test_simple_patterns.py` - Verifies the model can learn
- `tokenizer.py` - Text processing utilities
- `README.md` - Explains language modeling
**What students learn:** How attention mechanisms enable language understanding and generation.
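The heart of TinyGPT is attention. Here is a minimal, framework-agnostic NumPy sketch of scaled dot-product self-attention (your TinyTorch attention module may differ in details):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to all keys; weights are softmax(Q K^T / sqrt(d))."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                             # (seq, seq) similarity matrix
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V, weights                               # weighted mix of value vectors

# Three token embeddings (seq_len=3, d=4); using Q = K = V is the simplest self-attention.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out, attn = scaled_dot_product_attention(x, x, x)
print(attn.round(2))  # each row sums to 1: how much each token attends to every other token
```

Every output token is a learned mixture of all input tokens, which is what lets the model use context when predicting the next character.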
## 🚀 Running the Examples

Each example can be run immediately:

```bash
# XOR - takes seconds, reaches 100% accuracy
cd examples/xornet
python train_xor_network.py

# CIFAR-10 - takes minutes, achieves 55%+ accuracy
cd examples/cifar10
python train_image_classifier.py

# TinyGPT - takes minutes, generates text
cd examples/tinygpt
python train_language_model.py
python generate_text.py
```
## 📊 What Success Looks Like
- **XORNet:** 100% accuracy on the XOR problem
- **CIFAR-10:** 55%+ accuracy (5.5x better than random; see the baseline sketch below)
- **TinyGPT:** Generates coherent character sequences
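The "5.5x better than random" claim is easy to sanity-check. This is a stand-in sketch in the spirit of `test_random_baseline.py`, not the actual script:

```python
import numpy as np

# Random guessing over 10 balanced classes should land near 10% accuracy.
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=10_000)    # stand-in for CIFAR-10 test labels
guesses = rng.integers(0, 10, size=10_000)   # uniform random predictions
print((guesses == labels).mean())            # ≈ 0.10, so 55% is roughly 5.5x better
```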
## 💡 For Students
These examples are the proof that you succeeded. You didn't just learn about neural networks - you built a framework capable of:
- Learning any function (XORNet)
- Classifying real images (CIFAR-10)
- Generating language (TinyGPT)
This is what ML engineers do in production!