Build Your Own ML Framework
Hands-on labs for the Machine Learning Systems textbook
Don't import it. Build it.
Build a complete machine learning (ML) framework from tensors to systems—understand how PyTorch, TensorFlow, and JAX really work under the hood.
<!-- Hero GIF Carousel - Compact Design -->
<div class="hero-carousel-compact">
<div class="carousel-track">
<div class="carousel-item active">
<div class="gif-preview">
<img src="_static/demos/01-clone-setup.gif" alt="Clone & Setup workflow" loading="lazy" />
<div class="preview-fallback">💻</div>
</div>
</div>
<div class="carousel-item">
<div class="gif-preview">
<img src="_static/demos/02-build-jupyter.gif" alt="Build in Jupyter workflow" loading="lazy" />
<div class="preview-fallback">📓</div>
</div>
</div>
<div class="carousel-item">
<div class="gif-preview">
<img src="_static/demos/03-export-tito.gif" alt="Export with TITO workflow" loading="lazy" />
<div class="preview-fallback">🛠️</div>
</div>
</div>
<div class="carousel-item">
<div class="gif-preview">
<img src="_static/demos/04-validate-history.gif" alt="Validate with History workflow" loading="lazy" />
<div class="preview-fallback">🏆</div>
</div>
</div>
</div>
<div class="carousel-nav">
<button class="nav-arrow prev" onclick="moveCarousel(-1)">←</button>
<button class="nav-arrow next" onclick="moveCarousel(1)">→</button>
</div>
</div>
Getting Started
TinyTorch is organized into four progressive tiers that take you from mathematical foundations to production-ready systems. Each tier builds on the previous one, teaching you not just how to code ML components, but how they work together as a complete system.
🏗 Foundation (Modules 01-07)
Build the mathematical core that makes neural networks learn.
Unlocks: Perceptron (1957) • XOR Crisis (1969) • MLP (1986)
🏛️ Architecture (Modules 08-13)
Build modern neural architectures—from computer vision to language models.
Unlocks: CNN Revolution (1998) • Transformer Era (2017)
⏱️ Optimization (Modules 14-19)
Transform research prototypes into production-ready systems.
Unlocks: MLPerf Torch Olympics (2018) • 8-16× compression • 12-40× speedup
🏅 Torch Olympics (Module 20)
The ultimate test: Build a complete, competition-ready ML system.
Capstone: Vision • Language • Speed • Compression tracks
Complete course structure • Getting started guide • Join the community
Recreate ML History
Walk through ML history by rebuilding its greatest breakthroughs with YOUR TinyTorch implementations. Click each milestone to see what you'll build and how it shaped modern AI.
<div class="ml-timeline-container">
<div class="ml-timeline-line"></div>
<div class="ml-timeline-item left perceptron">
<div class="ml-timeline-dot"></div>
<div class="ml-timeline-content">
<div class="ml-timeline-year">1957</div>
<div class="ml-timeline-title">The Perceptron</div>
<div class="ml-timeline-desc">The first trainable neural network</div>
<div class="ml-timeline-tech">Input → Linear → Sigmoid → Output</div>
</div>
</div>
<div class="ml-timeline-item right xor">
<div class="ml-timeline-dot"></div>
<div class="ml-timeline-content">
<div class="ml-timeline-year">1969</div>
<div class="ml-timeline-title">XOR Crisis Solved</div>
<div class="ml-timeline-desc">Hidden layers unlock non-linear learning</div>
<div class="ml-timeline-tech">Input → Linear → ReLU → Linear → Output</div>
</div>
</div>
<div class="ml-timeline-item left mlp">
<div class="ml-timeline-dot"></div>
<div class="ml-timeline-content">
<div class="ml-timeline-year">1986</div>
<div class="ml-timeline-title">MLP Revival</div>
<div class="ml-timeline-desc">Backpropagation enables deep learning (95%+ MNIST)</div>
<div class="ml-timeline-tech">Images → Flatten → Linear → ... → Classes</div>
</div>
</div>
<div class="ml-timeline-item right cnn">
<div class="ml-timeline-dot"></div>
<div class="ml-timeline-content">
<div class="ml-timeline-year">1998</div>
<div class="ml-timeline-title">CNN Revolution 🎯</div>
<div class="ml-timeline-desc">Spatial intelligence unlocks computer vision (75%+ CIFAR-10)</div>
<div class="ml-timeline-tech">Images → Conv → Pool → ... → Classes</div>
</div>
</div>
<div class="ml-timeline-item left transformer">
<div class="ml-timeline-dot"></div>
<div class="ml-timeline-content">
<div class="ml-timeline-year">2017</div>
<div class="ml-timeline-title">Transformer Era</div>
<div class="ml-timeline-desc">Attention launches the LLM revolution</div>
<div class="ml-timeline-tech">Tokens → Attention → FFN → Output</div>
</div>
</div>
<div class="ml-timeline-item right olympics">
<div class="ml-timeline-dot"></div>
<div class="ml-timeline-content">
<div class="ml-timeline-year">2018</div>
<div class="ml-timeline-title">MLPerf Benchmarks</div>
<div class="ml-timeline-desc">Production optimization (8-16× smaller, 12-40× faster)</div>
<div class="ml-timeline-tech">Profile → Compress → Accelerate</div>
</div>
</div>
</div>
View complete milestone details to see full technical requirements and learning objectives.
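The 1969 XOR milestone above can be sketched in a few lines: no single linear layer can represent XOR, but one hidden ReLU layer can. The weights below are hand-picked for illustration rather than learned, and the function names are ours, not TinyTorch's API:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

# Hand-picked weights showing that a 2-layer ReLU network can represent
# XOR -- the function that defeated the single-layer perceptron in 1969.
W1 = np.array([[1.0, 1.0], [1.0, 1.0]])  # input -> hidden (2 units)
b1 = np.array([0.0, -1.0])
W2 = np.array([1.0, -2.0])               # hidden -> output

def xor_net(x):
    # XOR(x1, x2) = relu(x1 + x2) - 2 * relu(x1 + x2 - 1)
    return relu(x @ W1 + b1) @ W2

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, xor_net(np.array(x, dtype=float)))
```

Training those weights from data, rather than picking them by hand, is exactly what the backpropagation milestone (1986) unlocks.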
Why Build Instead of Use?
Understanding the difference between using a framework and building one is the difference between being limited by tools and being empowered to create them.
Traditional ML Education
```python
import torch
model = torch.nn.Linear(784, 10)
output = model(input)
# When this breaks, you're stuck
```
Problem: OOM errors, NaN losses, slow training—you can't debug what you don't understand.
TinyTorch Approach
```python
from tinytorch import Linear  # YOUR code
model = Linear(784, 10)       # YOUR implementation
output = model(input)
# You know exactly how this works
```
Advantage: You understand memory layouts, gradient flows, and performance bottlenecks because you implemented them.
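As a rough sketch of what "YOUR implementation" means (the names and API here are illustrative, not TinyTorch's exact interface), a Linear layer reduces to one matrix multiply plus a broadcast bias add:

```python
import numpy as np

class Linear:
    """Minimal dense layer sketch: y = x @ W.T + b."""

    def __init__(self, in_features, out_features):
        # He-style scaling keeps activation variance stable as depth grows.
        self.weight = np.random.randn(out_features, in_features) * np.sqrt(2.0 / in_features)
        self.bias = np.zeros(out_features)

    def __call__(self, x):
        # One matmul over the batch, one broadcast add of the bias row.
        return x @ self.weight.T + self.bias

layer = Linear(784, 10)
out = layer(np.random.randn(32, 784))  # batch of 32 flattened 28x28 images
print(out.shape)  # (32, 10)
```

Once you have written this yourself, an OOM error or a shape mismatch points at memory and matmul dimensions you already understand.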
Systems Thinking: TinyTorch emphasizes understanding how components interact—memory hierarchies, computational complexity, and optimization trade-offs—not just isolated algorithms. Every module connects mathematical theory to systems understanding.
See Course Philosophy for the full origin story and pedagogical approach.
The Build → Use → Reflect Approach
Every module follows a proven learning cycle that builds deep understanding:
```mermaid
graph LR
    B[Build<br/>Implement from scratch] --> U[Use<br/>Real data, real problems]
    U --> R[Reflect<br/>Systems thinking questions]
    R --> B
    style B fill:#FFC107,color:#000
    style U fill:#4CAF50,color:#fff
    style R fill:#2196F3,color:#fff
```
- Build: Implement each component yourself—tensors, autograd, optimizers, attention
- Use: Apply your implementations to real problems—MNIST, CIFAR-10, text generation
- Reflect: Answer systems thinking questions—memory usage, scaling behavior, trade-offs
This approach develops not just coding ability, but systems engineering intuition essential for production ML.
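The cycle fits in a dozen lines. Here is a hedged example (function names are ours, not a prescribed TinyTorch API): build ReLU and its gradient by hand, use them on data, then reflect by checking the analytic gradient against a numerical estimate:

```python
import numpy as np

# Build: implement ReLU forward and backward from scratch.
def relu_forward(x):
    return np.maximum(0, x), (x > 0)   # output + mask saved for backward

def relu_backward(grad_out, mask):
    return grad_out * mask             # gradient passes only where x > 0

# Use: run it on real inputs.
x = np.array([-2.0, -0.5, 1.5, 3.0])
y, mask = relu_forward(x)

# Reflect: does the hand-written gradient match a numerical estimate?
eps = 1e-6
g_analytic = relu_backward(np.ones_like(x), mask)
g_numeric = (np.maximum(0, x + eps) - np.maximum(0, x - eps)) / (2 * eps)
print(np.allclose(g_analytic, g_numeric))  # True
```

The same gradient-checking habit scales up to every layer you build, and the mask-saving pattern previews how autograd caches forward-pass state for backward.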
Is This For You?
Perfect if you want to debug ML systems, implement custom operations, or understand how PyTorch actually works.
Prerequisites: Python + basic linear algebra. No prior ML experience required.
🌍 Join the Community
See learners building ML systems worldwide
Add yourself to the map • Share your progress • Connect with builders
Join the Map →
Next Steps: Quick Start Guide (15 min) • Course Structure • FAQ