Build Your Own ML Framework

Hands-on labs for the Machine Learning Systems textbook

Don't import it. Build it.

Build a complete machine learning (ML) framework from tensors to systems—understand how PyTorch, TensorFlow, and JAX really work under the hood.

**Start Building in 15 Minutes →**
## Getting Started

TinyTorch is organized into **four progressive tiers** that take you from mathematical foundations to production-ready systems. Each tier builds on the previous one, teaching you not just how to code ML components, but how they work together as a complete system.

**🏗 Foundation (Modules 01-07)**

Build the mathematical core that makes neural networks learn.

Unlocks: Perceptron (1957) • XOR Crisis (1969) • MLP (1986)
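To give a flavor of the Foundation tier, here is a minimal sketch of the kind of Tensor you start from. The class name and API here are illustrative assumptions, not TinyTorch's actual module:

```python
import numpy as np

class Tensor:
    """A minimal tensor: a NumPy array plus room to grow (grad, device, ...)."""

    def __init__(self, data):
        self.data = np.asarray(data, dtype=np.float32)

    @property
    def shape(self):
        return self.data.shape

    def __add__(self, other):
        # Elementwise add; broadcasting comes for free from NumPy
        return Tensor(self.data + other.data)

    def __matmul__(self, other):
        # Matrix multiply, the workhorse of every Linear layer
        return Tensor(self.data @ other.data)

x = Tensor([[1.0, 2.0]])
w = Tensor([[3.0], [4.0]])
y = x @ w
print(y.data)  # [[11.]]
```

Later Foundation modules layer autograd and optimizers on top of exactly this kind of core.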

**🏛️ Architecture (Modules 08-13)**

Build modern neural architectures—from computer vision to language models.

Unlocks: CNN Revolution (1998) • Transformer Era (2017)

**⏱️ Optimization (Modules 14-19)**

Transform research prototypes into production-ready systems.

Unlocks: MLPerf Torch Olympics (2018) • 8-16× compression • 12-40× speedup
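One place the Optimization tier's compression numbers come from: storing weights as int8 instead of float32 is an immediate 4× reduction, and combining it with pruning and other techniques reaches the 8-16× range. A hypothetical post-training quantization sketch, not TinyTorch's actual API:

```python
import numpy as np

def quantize(w, bits=8):
    # Map float32 weights to signed integers with one scale per tensor
    qmax = 2 ** (bits - 1) - 1           # 127 for int8
    scale = np.abs(w).max() / qmax
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize(w)
w_hat = dequantize(q, scale)

print(w.nbytes // q.nbytes)   # 4 (float32 -> int8)
print(np.abs(w - w_hat).max() <= scale)  # rounding error stays below one scale step
```

The design choice to study: one scale per tensor is simple but coarse; per-channel scales trade a little metadata for better accuracy.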

**🏅 Torch Olympics (Module 20)**

The ultimate test: Build a complete, competition-ready ML system.

Capstone: Vision • Language • Speed • Compression tracks

**[Complete course structure](chapters/00-introduction)** • **[Getting started guide](getting-started)** • **[Join the community](community)**

## Recreate ML History

Walk through ML history by rebuilding its greatest breakthroughs with YOUR TinyTorch implementations. Each milestone shows what you'll build and how it shaped modern AI.

| Year | Milestone | Breakthrough | Pipeline |
|------|-----------|--------------|----------|
| 1957 | The Perceptron | The first trainable neural network | Input → Linear → Sigmoid → Output |
| 1969 | XOR Crisis Solved | Hidden layers unlock non-linear learning | Input → Linear → ReLU → Linear → Output |
| 1986 | MLP Revival | Backpropagation enables deep learning (95%+ MNIST) | Images → Flatten → Linear → ... → Classes |
| 1998 | CNN Revolution 🎯 | Spatial intelligence unlocks computer vision (75%+ CIFAR-10) | Images → Conv → Pool → ... → Classes |
| 2017 | Transformer Era | Attention launches the LLM revolution | Tokens → Attention → FFN → Output |
| 2018 | MLPerf Benchmarks | Production optimization (8-16× smaller, 12-40× faster) | Profile → Compress → Accelerate |

**[View complete milestone details](chapters/milestones)** to see full technical requirements and learning objectives.

## Why Build Instead of Use?

Understanding the difference between using a framework and building one is the difference between being limited by tools and being empowered to create them.
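The milestone pipelines above are small enough to sketch directly. For example, the 1969 pipeline (Input → Linear → ReLU → Linear → Output) solves XOR, which no single linear layer can. Weights here are hand-picked for illustration, not learned:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# All four XOR inputs as one batch
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)

W1 = np.array([[1.0, 1.0], [1.0, 1.0]])  # hidden layer weights
b1 = np.array([0.0, -1.0])               # second unit fires only on (1,1)
W2 = np.array([[1.0], [-2.0]])           # output: h1 - 2*h2
b2 = np.array([0.0])

h = relu(X @ W1 + b1)   # hidden layer: Linear -> ReLU
y = h @ W2 + b2         # output layer: Linear
print(y.ravel())        # [0. 1. 1. 0.] -- XOR
```

The hidden ReLU is what makes this possible: it bends the input space so a final linear layer can separate the classes.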

**Traditional ML Education**

```python
import torch

model = torch.nn.Linear(784, 10)
output = model(input)  # When this breaks, you're stuck
```

Problem: OOM errors, NaN losses, slow training—you can't debug what you don't understand.

**TinyTorch Approach**

```python
from tinytorch import Linear  # YOUR code

model = Linear(784, 10)  # YOUR implementation
output = model(input)    # You know exactly how this works

Advantage: You understand memory layouts, gradient flows, and performance bottlenecks because you implemented them.
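For a sense of scale, a minimal version of the Linear layer you build might look like this. The names and initialization are illustrative assumptions; the actual TinyTorch module will differ:

```python
import numpy as np

class Linear:
    """A fully-connected layer: nothing here is a black box."""

    def __init__(self, in_features, out_features):
        # The entire memory footprint: one weight matrix and one bias vector
        self.weight = np.random.randn(in_features, out_features) * 0.01
        self.bias = np.zeros(out_features)

    def __call__(self, x):
        # y = xW + b; a shape mismatch fails right here, in YOUR code
        return x @ self.weight + self.bias

model = Linear(784, 10)
x = np.random.randn(32, 784)  # a batch of 32 flattened 28x28 images
out = model(x)
print(out.shape)  # (32, 10)
```

Once you have written this, an OOM error is just arithmetic (`in_features * out_features` floats per layer), and a NaN loss is something you can trace through code you own.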

**Systems Thinking**: TinyTorch emphasizes understanding how components interact—memory hierarchies, computational complexity, and optimization trade-offs—not just isolated algorithms. Every module connects mathematical theory to systems understanding.

**See [Course Philosophy](chapters/00-introduction)** for the full origin story and pedagogical approach.

## The Build → Use → Reflect Approach

Every module follows a proven learning cycle that builds deep understanding:

```{mermaid}
graph LR
    B[Build<br/>Implement from scratch] --> U[Use<br/>Real data, real problems]
    U --> R[Reflect<br/>Systems thinking questions]
    R --> B
    style B fill:#FFC107,color:#000
    style U fill:#4CAF50,color:#fff
    style R fill:#2196F3,color:#fff
```

1. **Build**: Implement each component yourself—tensors, autograd, optimizers, attention
2. **Use**: Apply your implementations to real problems—MNIST, CIFAR-10, text generation
3. **Reflect**: Answer systems thinking questions—memory usage, scaling behavior, trade-offs

This approach develops not just coding ability, but systems engineering intuition essential for production ML.

## Is This For You?

Perfect if you want to **debug ML systems**, **implement custom operations**, or **understand how PyTorch actually works**.

**Prerequisites**: Python + basic linear algebra. No prior ML experience required.

---

## 🌍 Join the Community

See learners building ML systems worldwide

Add yourself to the map • Share your progress • Connect with builders

Join the Map →
---

**Next Steps**: **[Quick Start Guide](quickstart-guide)** (15 min) • **[Course Structure](chapters/00-introduction)** • **[FAQ](faq.md)**