# Tiny🔥Torch: Build your own Machine Learning framework from scratch

Learn ML by building your own PyTorch-style framework from the ground up. Start small. Go deep.
By the end of this course, you'll have **built your own complete ML framework** that can:
- ✅ Train neural networks on CIFAR-10 images (real dataset!)
- ✅ Implement automatic differentiation (the "magic" behind PyTorch)
- ✅ Optimize models for production deployment (75% size reduction)
- ✅ Handle the complete ML pipeline from data loading to monitoring
**Most importantly:** You'll understand how modern ML frameworks *actually* work under the hood.
## 📚 Educational Foundation
TinyTorch grew out of the CS249r: Tiny Machine Learning Systems course at Harvard University. While the Machine Learning Systems book covers the broad principles and practices of engineering ML systems, TinyTorch gives you hands-on experience building the systems yourself.
## 🚀 Choose Your Learning Path
### **🔬 [Quick Exploration](usage-paths/quick-exploration.md)** *(5 minutes)*
*"I want to see what this is about"*
- Click and run code immediately in your browser (Binder)
- No installation or setup required
- Implement ReLU, tensors, neural networks interactively
- Perfect for getting a feel for the course
### **🏗️ [Serious Development](usage-paths/serious-development.md)** *(8+ weeks)*
*"I want to build this myself"*
- Fork the repo and work locally with full development environment
- Build complete ML framework from scratch with `tito` CLI
- 14 progressive assignments from setup to production MLOps
- Professional development workflow with automated testing
### **👨‍🏫 [Classroom Use](usage-paths/classroom-use.md)** *(Instructors)*
*"I want to teach this course"*
- Complete course infrastructure with NBGrader integration
- Automated grading for 200+ tests across all assignments
- Flexible pacing (8-16 weeks) with proven pedagogical outcomes
- Turn-key solution for ML systems education
## 🎯 What You'll Build

**Progressive Complexity: Each Module Builds on Prior Work**
**Week 1-3: Core Infrastructure**
- **Setup**: Professional development workflow with CLI tools
- **Tensors**: Multi-dimensional arrays and operations (like NumPy, but yours!)
- **Activations**: ReLU, Sigmoid, Tanh - the math that makes learning possible
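The activation functions above are each a one-line formula. A minimal standalone sketch in NumPy (in the course you'll build these as classes inside TinyTorch, so the real API differs):

```python
import numpy as np

def relu(x):
    # Zero out negatives, pass positives through unchanged
    return np.maximum(0, x)

def sigmoid(x):
    # Squash any real number into (0, 1)
    return 1 / (1 + np.exp(-x))

def tanh(x):
    # Squash into (-1, 1), centered at zero
    return np.tanh(x)

x = np.array([-2.0, 0.0, 3.0])
print(relu(x))  # [0. 0. 3.]
```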
**Week 4-6: Neural Network Components**
- **Layers**: Dense (linear) layers with matrix multiplication
- **Networks**: Sequential architecture - chain layers into complete models
- **CNNs**: Convolutional operations for computer vision
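At its core, a Dense layer is a matrix multiply plus a bias. A minimal NumPy sketch (a hypothetical `Dense` class for illustration; the course's actual class will differ):

```python
import numpy as np

class Dense:
    def __init__(self, in_features, out_features):
        # Small random weights, zero bias
        self.weight = np.random.randn(in_features, out_features) * 0.01
        self.bias = np.zeros(out_features)

    def forward(self, x):
        # (batch, in) @ (in, out) + (out,) -> (batch, out)
        return x @ self.weight + self.bias

layer = Dense(784, 128)
out = layer.forward(np.zeros((32, 784)))
print(out.shape)  # (32, 128)
```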
**Week 7-10: Complete Training Pipeline**
- **DataLoader**: CIFAR-10 loading, batching, preprocessing
- **Autograd**: Automatic differentiation engine (the "magic" of PyTorch)
- **Optimizers**: SGD, Adam, learning rate scheduling
- **Training**: Loss functions, metrics, complete training orchestration
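The SGD update rule you'll implement is simply `param -= lr * grad`, applied to every parameter. A sketch, with parameters represented as plain dicts purely for illustration (TinyTorch's real parameter objects will look different):

```python
import numpy as np

class SGD:
    def __init__(self, params, lr=0.01):
        self.params = params  # list of {"value": ..., "grad": ...} dicts
        self.lr = lr

    def step(self):
        # Move each parameter a small step against its gradient
        for p in self.params:
            p["value"] -= self.lr * p["grad"]

param = {"value": np.array([1.0, 2.0]), "grad": np.array([0.5, -0.5])}
opt = SGD([param], lr=0.1)
opt.step()
print(param["value"])  # [0.95 2.05]
```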
**Week 11-14: Real-World Deployment**
- **Compression**: Model pruning and quantization (75% size reduction)
- **Kernels**: High-performance custom operations
- **Benchmarking**: Systematic evaluation and performance measurement
- **MLOps**: Production monitoring, continuous learning, complete pipeline
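The 75% figure is a storage argument: quantizing float32 weights (4 bytes each) down to int8 (1 byte each) shrinks the model by a factor of four. A sketch of one simple symmetric quantization scheme (an assumed approach for illustration; the course may use a different scheme):

```python
import numpy as np

def quantize(w):
    # Map the largest weight magnitude to the int8 limit of 127
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from the int8 codes
    return q.astype(np.float32) * scale

w = np.random.randn(1000).astype(np.float32)
q, scale = quantize(w)
print(w.nbytes, q.nbytes)  # 4000 vs 1000 bytes: a 75% reduction
```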
## 🎓 Learning Philosophy: Build → Use → Understand

**Example: How You'll Learn Activation Functions**
**🔧 Build:** Implement ReLU from scratch

```python
def relu(x):
    # YOU implement this function
    return ???  # What should this be?
```
**🚀 Use:** Immediately use your own code

```python
from tinytorch.core.activations import ReLU  # YOUR implementation!

layer = ReLU()
output = layer.forward(input_tensor)  # Your code working!
```
**💡 Understand:** See it working in real networks

```python
# Your ReLU is now part of a real neural network
model = Sequential([
    Dense(784, 128),
    ReLU(),         # <-- Your implementation
    Dense(128, 10)
])
```
This pattern repeats for every component - you build it, use it immediately, then see how it fits into larger systems.
## 🌟 What Makes This Different

### 🔬 Engineering Principles
- Production-style code organization throughout every module
- Performance-focused engineering and optimization practices
- Professional development workflow with automated testing and CI
### 🚀 Immediate Feedback
- Code works immediately after implementation
- Visual progress indicators and success messages
- Comprehensive testing ensures your implementations work
- "Aha moments" when you see your code powering real neural networks
### 🎯 Progressive Complexity
- Start simple: implement a `hello_world()` function
- Build systematically: each module enables the next
- End powerful: deploy production ML systems with monitoring
- No gaps: every step is carefully scaffolded
## 🚀 Ready to Start?
**Just exploring?** → **[🔬 Quick Exploration](usage-paths/quick-exploration.md)** *(Click and code in 30 seconds)*
**Ready to build?** → **[🏗️ Serious Development](usage-paths/serious-development.md)** *(Fork repo and build your ML framework)*
**Teaching a class?** → **[👨‍🏫 Classroom Use](usage-paths/classroom-use.md)** *(Complete course infrastructure)*
**Quick Taste: Try Chapter 1 Right Now**
Want to see what TinyTorch feels like? Launch the Setup chapter in Binder and implement your first TinyTorch function in 2 minutes!
## 🏗️ Big Picture: Why Build from Scratch?
Most ML education teaches you to use frameworks. TinyTorch teaches you to understand them.
```text
Traditional ML Course:          TinyTorch Approach:
├── import torch                ├── class Tensor:
├── model = nn.Linear(10, 1)    │       def __add__(self, other): ...
├── loss = nn.MSELoss()         │       def backward(self): ...
└── optimizer.step()            ├── class Linear:
                                │       def forward(self, x):
                                │           return x @ self.weight + self.bias
                                ├── def mse_loss(pred, target):
                                │       return ((pred - target) ** 2).mean()
                                ├── class SGD:
                                │       def step(self):
                                └──         param.data -= lr * param.grad
```
Go from "How does this work?" 🤷 to "I implemented every line!" 💪
**Result:** You become the person others come to when they need to understand "how PyTorch actually works under the hood." Every line of code you write brings you closer to understanding how modern AI works.