# Tiny🔥Torch
### Build Your Own ML Framework From Scratch
[Releases](https://github.com/harvard-edge/cs249r_book/releases?q=tinytorch) · [Discussions](https://github.com/harvard-edge/cs249r_book/discussions/1076) · [Course Site](https://mlsysbook.ai/tinytorch) · [Python](https://python.org) · [License](LICENSE) · [ML Systems Book](https://mlsysbook.ai)
**Most ML courses teach you to *use* frameworks. TinyTorch teaches you to *build* them.**
[The Vision](#why-tinytorch) · [20 Modules](#20-progressive-modules) · [Share Feedback](https://github.com/harvard-edge/cs249r_book/discussions/1076)
---
> 🚧 **Preview Release**: TinyTorch is functional but evolving. We're sharing early to shape the direction with community input rather than building in isolation.
>
> 📅 **Classroom Ready**: Summer/Fall 2026 · **Right Now**: [We want your feedback](#help-shape-tinytorch)
---
## Why TinyTorch?
Everyone wants to be an astronaut 🧑‍🚀. Very few want to be the rocket scientist 🚀.
In machine learning, we see the same pattern. Everyone wants to train models, run inference, deploy AI. Very few want to understand how the frameworks actually work. Even fewer want to build one.
**The world is full of users. We do not have enough builders.**
### The Solution: AI Bricks 🧱
TinyTorch teaches you the **AI bricks**: the stable engineering foundations you can use to build any AI system.
- **Small enough to learn from**: bite-sized code that runs even on a Raspberry Pi
- **Big enough to matter**: showing the real architecture of how frameworks are built
TinyTorch is a Harvard University course that turns framework users into systems engineers, giving you the deep understanding needed to optimize, debug, and innovate at the foundation of AI.
---
## What You'll Build
A **complete ML framework** capable of:
🎯 **North Star Achievement**: Train CNNs for image classification
- Real computer vision on standard benchmark datasets
- Built entirely from scratch using only NumPy
- Competitive performance with modern frameworks
**Additional Capabilities**:
- GPT-style language models with attention mechanisms
- Modern optimizers (Adam, SGD) with learning rate scheduling
- Performance profiling, optimization, and competitive benchmarking
**No dependencies on PyTorch or TensorFlow - everything is YOUR code!**
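To make "everything is your code" concrete, here is a minimal sketch, in plain NumPy, of the kind of operation you end up writing yourself: a from-scratch 2D convolution, the core of the CNN north star. This is illustrative only and not TinyTorch's actual implementation.

```python
# Illustrative sketch (not TinyTorch's actual code): a from-scratch
# 2D convolution in plain NumPy, the core operation behind the CNN goal.
import numpy as np

def conv2d(image, kernel):
    """Valid (no-padding) 2D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(28, 28)               # stand-in for a grayscale image
kernel = np.array([[1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0]])        # simple vertical-edge detector
print(conv2d(image, kernel).shape)           # (26, 26)
```

The curriculum's job is to take loops like this and grow them into layers that are fast and correct enough to train on real images.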
---
## Help Shape TinyTorch
We're sharing TinyTorch early because we'd rather shape the direction with community input than build in isolation. Before diving into code, we want to hear from you:
**If you're a student:**
→ What hands-on labs or projects would help you learn ML systems?

**If you teach:**
→ What would make TinyTorch easy to bring into a course?

**If you're a practitioner:**
→ What real-world systems tasks should we simulate?

**For everyone:**
→ What natural extensions belong in this "AI bricks" model?
📣 **[Share your thoughts in the discussion →](https://github.com/harvard-edge/cs249r_book/discussions/1076)**
---
## Current Status
| Ready | In Progress | Coming Soon |
|-------|-------------|-------------|
| ✅ All 20 modules implemented | 🚧 Documentation polish | 📅 NBGrader integration |
| ✅ Complete test suite (600+ tests) | 🚧 Edge case handling | 📅 Community leaderboard |
| ✅ `tito` CLI for workflows | 🚧 Instructor resources | 📅 Binder/Colab support |
| ✅ Historical milestone scripts | | |
**Want to explore the code?** [Browse the repository structure](#repository-structure) to see how modules are organized.
**Adventurous early adopter?** Local installation works, but expect rough edges. See the [setup guide](site/getting-started.md).
---
## 20 Progressive Modules
Build your framework through four progressive parts:
| Part | Modules | What You Build |
|------|---------|----------------|
| **I. Foundations** | 01-08 | Tensors, activations, layers, losses, dataloader, autograd, optimizers, training |
| **II. Vision** | 09 | Conv2d, CNNs for image classification |
| **III. Language** | 10-13 | Tokenization, embeddings, attention, transformers |
| **IV. Optimization** | 14-20 | Profiling, quantization, compression, acceleration, benchmarking, capstone |
Each module asks: **"Can I build this capability from scratch?"**
**[Full curriculum and module details →](https://mlsysbook.ai/tinytorch)**
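For a flavor of what "from scratch" means in the earliest modules, here is a hypothetical Module 01-style tensor: a thin class around a NumPy array that defines its own operators. The names and API are illustrative, not the module's actual interface.

```python
# Hypothetical sketch of the Module 01 idea; the real Tensor API may differ.
import numpy as np

class Tensor:
    """A minimal tensor: a NumPy array plus operators you define yourself."""

    def __init__(self, data):
        self.data = np.asarray(data, dtype=np.float32)

    def __add__(self, other):
        return Tensor(self.data + other.data)      # elementwise addition

    def __matmul__(self, other):
        return Tensor(self.data @ other.data)      # matrix multiplication

    def __repr__(self):
        return f"Tensor(shape={self.data.shape})"

a = Tensor([[1.0, 2.0], [3.0, 4.0]])
b = Tensor([[5.0, 6.0], [7.0, 8.0]])
print(a + b, a @ b)   # Tensor(shape=(2, 2)) Tensor(shape=(2, 2))
```

Later modules layer activations, autograd, and optimizers on top of exactly this kind of object.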
---
## Historical Milestones
As you progress, unlock recreations of landmark ML achievements:
| Year | Milestone | Your Achievement |
|------|-----------|------------------|
| 1958 | Perceptron | Binary classification with gradient descent |
| 1969 | XOR Crisis | Multi-layer networks solve non-linear problems |
| 1986 | Backpropagation | Multi-layer network training |
| 1998 | CNN Revolution | **Image classification with convolutions** |
| 2017 | Transformer Era | Language generation with self-attention |
| 2018+ | MLPerf | Production-ready optimization |
**These aren't toy demos** - they're historically significant ML achievements rebuilt with YOUR framework!
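As a sense of how small the first milestone is, here is the 1958 perceptron learning rule in plain NumPy. This is a sketch for orientation only; the milestone script in the repository is more complete.

```python
# Sketch of the 1958 milestone: Rosenblatt's perceptron rule in plain NumPy.
# Illustrative only; the repository's milestone script is more complete.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                    # 2D points
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # linearly separable labels

w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(20):
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)               # threshold unit
        error = yi - pred                        # -1, 0, or +1
        w += lr * error * xi                     # perceptron update rule
        b += lr * error

accuracy = np.mean(((X @ w + b) > 0).astype(int) == y)
print(f"accuracy: {accuracy:.2f}")               # near 1.00 on this toy data
```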
---
## Learning Philosophy
```python
# Traditional Course:
import torch
loss.backward()    # magic happens inside the framework
optimizer.step()   # more magic

# TinyTorch:
# You implement every component
# You measure memory usage
# You optimize performance
# You understand the systems
```
**Why Build Your Own Framework?**
- **Deep Understanding** - Know exactly what `loss.backward()` does (see the sketch after this list)
- **Systems Thinking** - Understand memory, compute, and scaling
- **Debugging Skills** - Fix problems at any level of the stack
- **Production Ready** - Learn patterns used in real ML systems
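Here is a minimal sketch of the idea behind `loss.backward()`, assuming a micrograd-style scalar engine: each operation records how to push gradients back to its inputs. It is for intuition only; TinyTorch's Module 06 autograd handles full tensors and arbitrary graphs.

```python
# Minimal sketch of reverse-mode autodiff for intuition; not TinyTorch's
# actual autograd (Module 06), which works on tensors and general graphs.
class Value:
    def __init__(self, data):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None      # how to pass gradient to inputs

    def __mul__(self, other):
        out = Value(self.data * other.data)

        def _backward():
            self.grad += other.data * out.grad   # d(out)/d(self)  = other
            other.grad += self.data * out.grad   # d(out)/d(other) = self

        out._backward = _backward
        return out

    def backward(self):
        # One node suffices here; a real engine topologically sorts the graph.
        self.grad = 1.0
        self._backward()

x, w = Value(3.0), Value(2.0)
loss = x * w
loss.backward()
print(w.grad)   # 3.0: the gradient that loss.backward() computes for you
```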
---
## Documentation
| Audience | Resources |
|----------|-----------|
| **Students** | [Course Website](https://mlsysbook.ai/tinytorch) · [Getting Started](site/getting-started.md) |
| **Instructors** | [Instructor Guide](INSTRUCTOR.md) |
| **Contributors** | [Contributing Guide](CONTRIBUTING.md) |
---
## Repository Structure
```
TinyTorch/
├── src/                     # Python source files (developers/contributors edit here)
│   ├── 01_tensor/           # Module 01: Tensor operations from scratch
│   │   ├── 01_tensor.py     # Python source (version controlled)
│   │   └── ABOUT.md         # Conceptual overview & learning objectives
│   ├── 02_activations/      # Module 02: ReLU, Softmax activations
│   ├── 03_layers/           # Module 03: Linear layers, Module system
│   ├── 04_losses/           # Module 04: MSE, CrossEntropy losses
│   ├── 05_dataloader/       # Module 05: Efficient data pipelines
│   ├── 06_autograd/         # Module 06: Automatic differentiation
│   ├── 07_optimizers/       # Module 07: SGD, Adam optimizers
│   ├── 08_training/         # Module 08: Complete training loops
│   ├── 09_convolutions/     # Module 09: Conv2d, MaxPool2d, CNNs
│   ├── 10_tokenization/     # Module 10: Text processing
│   ├── 11_embeddings/       # Module 11: Token & positional embeddings
│   ├── 12_attention/        # Module 12: Multi-head attention
│   ├── 13_transformers/     # Module 13: Complete transformer blocks
│   ├── 14_profiling/        # Module 14: Performance analysis
│   ├── 15_quantization/     # Module 15: Model compression (precision reduction)
│   ├── 16_compression/      # Module 16: Pruning & distillation
│   ├── 17_acceleration/     # Module 17: Hardware optimization
│   ├── 18_memoization/      # Module 18: KV-cache/memoization
│   ├── 19_benchmarking/     # Module 19: Performance measurement
│   └── 20_capstone/         # Module 20: Complete ML systems
│
├── modules/                 # Generated notebooks (learners work here)
│   ├── 01_tensor/           # Auto-generated from src/
│   │   ├── tensor.ipynb     # Jupyter notebook for learning
│   │   ├── README.md        # Practical implementation guide
│   │   └── tensor.py        # Your implementation
│   └── ...                  # (20 module directories)
│
├── site/                    # Course website & documentation (Jupyter Book)
│   ├── intro.md             # Landing page
│   ├── _toc.yml             # Site navigation (links to modules)
│   ├── _config.yml          # HTML website configuration
│   ├── chapters/            # Course content chapters
│   └── modules/             # Module documentation
│
├── milestones/              # Historical ML evolution - prove what you built!
│   ├── 01_1958_perceptron/  # Rosenblatt's first trainable network
│   ├── 02_1969_xor/         # Minsky's challenge & multi-layer solution
│   ├── 03_1986_mlp/         # Backpropagation & MNIST digits
│   ├── 04_1998_cnn/         # LeCun's CNNs & CIFAR-10
│   ├── 05_2017_transformer/ # Attention mechanisms & language
│   └── 06_2018_mlperf/      # Modern optimization & profiling
│
├── tito/                    # CLI tool for streamlined workflows
│   ├── main.py              # Entry point
│   ├── commands/            # 23 command modules
│   └── core/                # Core utilities
│
├── tinytorch/               # Generated package (import from here)
│   ├── core/                # Core ML components
│   └── ...                  # Your built framework!
│
└── tests/                   # Comprehensive test suite (600+ tests)
```
**Key workflow**: `src/*.py` → `modules/*.ipynb` → `tinytorch/*.py`
---
## Join the Community
TinyTorch is part of the [ML Systems Book](https://mlsysbook.ai) ecosystem. We're building an open community of learners and educators passionate about ML systems.
**Ways to get involved:**
- ⭐ Star this repo to show support
- 💬 Join [Discussions](https://github.com/harvard-edge/cs249r_book/discussions) to ask questions
- 🐛 Report issues or suggest improvements
- 🤝 Contribute modules, fixes, or documentation
See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
---
## Related Projects
"TinyTorch" is a popular name for educational ML frameworks. We acknowledge excellent projects with similar names:
- [tinygrad](https://github.com/tinygrad/tinygrad) - George Hotz's minimalist framework
- [micrograd](https://github.com/karpathy/micrograd) - Andrej Karpathy's tiny autograd
- [MiniTorch](https://minitorch.github.io/) - Cornell's educational framework
**Our TinyTorch** distinguishes itself through its 20-module curriculum, NBGrader integration, ML systems focus, and connection to the [ML Systems Book](https://mlsysbook.ai) ecosystem.
---
## Contributors
Thanks to these wonderful people who helped improve TinyTorch!
**Legend:** 🪲 Bug Hunter · ⚡ Code Warrior · 📝 Documentation Hero · 🎨 Design Artist · 🧠 Idea Generator · 🔍 Code Reviewer · 🧪 Test Engineer · 🛠️ Tool Builder
**Recognize a contributor:** Comment on any issue or PR:
```
@all-contributors please add @username for bug, code, doc, or ideas
```
---
## Acknowledgments
Created by [Prof. Vijay Janapa Reddi](https://vijay.seas.harvard.edu) at Harvard University.
---
## License
MIT License - see [LICENSE](LICENSE) for details.
---
**[Full Documentation](https://mlsysbook.ai/tinytorch)** · **[💬 Discussions](https://github.com/harvard-edge/cs249r_book/discussions)** · **[ML Systems Book](https://mlsysbook.ai)**
**Start Small. Go Deep. Build ML Systems.**