# TinyπŸ”₯Torch

### Build Your Own ML Framework From Scratch

[![Version](https://img.shields.io/github/v/tag/harvard-edge/cs249r_book?filter=tinytorch-v*&label=version&color=D4740C&logo=fireship&logoColor=white)](https://github.com/harvard-edge/cs249r_book/releases?q=tinytorch) [![Status](https://img.shields.io/badge/status-preview-orange?logo=github)](https://github.com/harvard-edge/cs249r_book/discussions/1076) [![Docs](https://img.shields.io/badge/docs-mlsysbook.ai-blue?logo=readthedocs)](https://mlsysbook.ai/tinytorch) [![Python](https://img.shields.io/badge/python-3.10+-3776ab?logo=python&logoColor=white)](https://python.org) [![License](https://img.shields.io/badge/license-MIT-green.svg)](LICENSE) [![Harvard](https://img.shields.io/badge/Harvard-CS249r-A51C30?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCAyNCAyNCI+PHBhdGggZmlsbD0id2hpdGUiIGQ9Ik0xMiAyTDIgN2wxMCA1IDEwLTV6TTIgMTdsMTAgNSAxMC01TTIgMTJsMTAgNSAxMC01Ii8+PC9zdmc+)](https://mlsysbook.ai)

**Most ML courses teach you to *use* frameworks. TinyTorch teaches you to *build* them.**

[The Vision](#why-tinytorch) Β· [20 Modules](#-20-progressive-modules) Β· [Share Feedback](https://github.com/harvard-edge/cs249r_book/discussions/1076)
---

> [!NOTE]
> **πŸ“Œ Early release (2026)**
>
> TinyTorch is **live and usable**. It shipped with the **2026** MLSysBook refresh and we expect **steady iteration**β€”modules, APIs, and course materials will keep improving. Community input drives what we prioritize next.
>
> **Classroom roadmap** β€” Summer/Fall 2026 | **Right now** β€” [Help shape TinyTorch](#-help-shape-tinytorch)

---

## Why TinyTorch?

Everyone wants to be an astronaut πŸ§‘β€πŸš€. Very few want to be the rocket scientist πŸš€.

In machine learning, we see the same pattern. Everyone wants to train models, run inference, and deploy AI. Very few want to understand how the frameworks actually work. Even fewer want to build one.

**The world is full of users. We do not have enough builders.**

### The Solution: AI Bricks 🧱

TinyTorch teaches you the **AI bricks**β€”the stable engineering foundations you can use to build any AI system.

- **Small enough to learn from**: bite-sized code that runs even on a Raspberry Pi
- **Big enough to matter**: showing the real architecture of how frameworks are built

A Harvard University course that transforms you from framework user to systems engineer, giving you the deep understanding needed to optimize, debug, and innovate at the foundation of AI.

---

## What You'll Build

A **complete ML framework** capable of:

🎯 **North Star Achievement**: Train CNNs for image classification

- Real computer vision on standard benchmark datasets
- Built entirely from scratch using only NumPy
- Competitive performance with modern frameworks

**Additional Capabilities**:

- GPT-style language models with attention mechanisms
- Modern optimizers (Adam, SGD) with learning rate scheduling
- Performance profiling, optimization, and competitive benchmarking

**No dependencies on PyTorch or TensorFlow - everything is YOUR code!**

---

## πŸ›  Help Shape TinyTorch

We're sharing TinyTorch early because we'd rather shape the direction with community input than build in isolation.
Before diving into code, we want to hear from you:

- **If you're a student:** What hands-on labs or projects would help you learn ML systems?
- **If you teach:** What would make TinyTorch easy to bring into a course?
- **If you're a practitioner:** What real-world systems tasks should we simulate?
- **For everyone:** What natural extensions belong in this "AI bricks" model?

πŸ“£ **[Share your thoughts in the discussion β†’](https://github.com/harvard-edge/cs249r_book/discussions/1076)**

---

## Current Status
| Ready | In Progress | Coming Soon |
|-------|-------------|-------------|
| βœ… All 20 modules implemented | πŸ”§ Documentation polish | πŸ“… NBGrader integration |
| βœ… Complete test suite (600+ tests) | πŸ”§ Edge case handling | πŸ“… Community leaderboard |
| βœ… tito CLI for workflows | πŸ”§ Instructor resources | πŸ“… Binder/Colab support |
| βœ… Historical milestone scripts | | |
**Want to explore the code?** [Browse the repository structure](#repository-structure) to see how modules are organized.

**Adventurous early adopter?** Local installation works, but expect rough edges. See the [setup guide](quarto/getting-started.qmd).

---

## πŸ—οΈ 20 Progressive Modules

Build your framework through four progressive parts:
| Part | Modules | What You Build |
|------|---------|----------------|
| I. Foundations | 01-08 | Tensors, activations, layers, losses, dataloader, autograd, optimizers, training |
| II. Vision | 09 | Conv2d, CNNs for image classification |
| III. Language | 10-13 | Tokenization, embeddings, attention, transformers |
| IV. Optimization | 14-20 | Profiling, quantization, compression, acceleration, memoization, benchmarking, capstone |
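To give a concrete flavor of Part I, here is a minimal sketch of the kind of component you implement there: a fully-connected layer with a hand-written backward pass in plain NumPy. The class and method names here are illustrative, not TinyTorch's actual API.

```python
import numpy as np

class Linear:
    """A fully-connected layer: y = x @ W + b, with a manual backward pass."""

    def __init__(self, in_features, out_features):
        # Small random weights, zero bias
        self.W = np.random.randn(in_features, out_features) * 0.01
        self.b = np.zeros(out_features)

    def forward(self, x):
        self.x = x                      # cache input for the backward pass
        return x @ self.W + self.b

    def backward(self, grad_out):
        # Chain rule: gradients w.r.t. parameters and w.r.t. the input
        self.dW = self.x.T @ grad_out
        self.db = grad_out.sum(axis=0)
        return grad_out @ self.W.T      # gradient flowing to the previous layer

layer = Linear(4, 3)
x = np.random.randn(2, 4)               # batch of 2 samples
out = layer.forward(x)                   # shape (2, 3)
grad_in = layer.backward(np.ones_like(out))
```

In the course you build exactly this kind of piece, then compose layers, losses, and autograd so the backward pass no longer has to be written by hand.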
Each module asks: **"Can I build this capability from scratch?"**

πŸ“– **[Full curriculum and module details β†’](https://mlsysbook.ai/tinytorch)**

---

## πŸ† Historical Milestones

As you progress, unlock recreations of landmark ML achievements:
| Year | Milestone | Your Achievement |
|------|-----------|------------------|
| 1958 | Perceptron | Binary classification with gradient descent |
| 1969 | XOR Crisis | Multi-layer networks solve non-linear problems |
| 1986 | Backpropagation | Multi-layer network training |
| 1998 | CNN Revolution | Image classification with convolutions |
| 2017 | Transformer Era | Language generation with self-attention |
| 2018+ | MLPerf | Production-ready optimization |
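For a taste of the first milestone, a 1958-style perceptron learning the AND function fits in a few lines of NumPy. This is an illustrative sketch using the classic perceptron update rule, not the milestone script that ships with TinyTorch.

```python
import numpy as np

# Learn the AND function with the classic perceptron update rule.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])                       # AND labels

w = np.zeros(2)
b = 0.0
lr = 0.1

for _ in range(20):                              # a few epochs suffice
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += lr * (yi - pred) * xi               # nudge weights toward the target
        b += lr * (yi - pred)

preds = [1 if xi @ w + b > 0 else 0 for xi in X]
print(preds)  # [0, 0, 0, 1]
```

The 1969 XOR milestone then shows why this single layer is not enough, and why multi-layer networks with backpropagation were the answer.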
**These aren't toy demos** - they're historically significant ML achievements rebuilt with YOUR framework!

---

## Learning Philosophy

```python
# Traditional Course:
import torch
model.fit(X, y)  # Magic happens

# TinyTorch:
# You implement every component
# You measure memory usage
# You optimize performance
# You understand the systems
```

**Why Build Your Own Framework?**

- **Deep Understanding** - Know exactly what `loss.backward()` does
- **Systems Thinking** - Understand memory, compute, and scaling
- **Debugging Skills** - Fix problems at any level of the stack
- **Production Ready** - Learn patterns used in real ML systems

---

## Documentation
| Audience | Resources |
|----------|-----------|
| Students | [Course Website](https://mlsysbook.ai/tinytorch) ・ [Getting Started](quarto/getting-started.qmd) |
| Instructors | Instructor Guide |
| Contributors | [Contributing Guide](CONTRIBUTING.md) |
---

## Repository Structure

```text
TinyTorch/
β”œβ”€β”€ src/                      # πŸ’» Python source files (developers/contributors edit here)
β”‚   β”œβ”€β”€ 01_tensor/            # Module 01: Tensor operations from scratch
β”‚   β”‚   β”œβ”€β”€ 01_tensor.py      # Python source (version controlled)
β”‚   β”‚   └── module.yaml       # Module metadata
β”‚   β”‚                         # Chapter content lives in tinytorch/quarto/modules/01_tensor.qmd
β”‚   β”œβ”€β”€ 02_activations/       # Module 02: ReLU, Softmax activations
β”‚   β”œβ”€β”€ 03_layers/            # Module 03: Linear layers, Module system
β”‚   β”œβ”€β”€ 04_losses/            # Module 04: MSE, CrossEntropy losses
β”‚   β”œβ”€β”€ 05_dataloader/        # Module 05: Efficient data pipelines
β”‚   β”œβ”€β”€ 06_autograd/          # Module 06: Automatic differentiation
β”‚   β”œβ”€β”€ 07_optimizers/        # Module 07: SGD, Adam optimizers
β”‚   β”œβ”€β”€ 08_training/          # Module 08: Complete training loops
β”‚   β”œβ”€β”€ 09_convolutions/      # Module 09: Conv2d, MaxPool2d, CNNs
β”‚   β”œβ”€β”€ 10_tokenization/      # Module 10: Text processing
β”‚   β”œβ”€β”€ 11_embeddings/        # Module 11: Token & positional embeddings
β”‚   β”œβ”€β”€ 12_attention/         # Module 12: Multi-head attention
β”‚   β”œβ”€β”€ 13_transformers/      # Module 13: Complete transformer blocks
β”‚   β”œβ”€β”€ 14_profiling/         # Module 14: Performance analysis
β”‚   β”œβ”€β”€ 15_quantization/      # Module 15: Model compression (precision reduction)
β”‚   β”œβ”€β”€ 16_compression/       # Module 16: Pruning & distillation
β”‚   β”œβ”€β”€ 17_acceleration/      # Module 17: Hardware optimization
β”‚   β”œβ”€β”€ 18_memoization/       # Module 18: KV-cache/memoization
β”‚   β”œβ”€β”€ 19_benchmarking/      # Module 19: Performance measurement
β”‚   └── 20_capstone/          # Module 20: Complete ML systems
β”‚
β”œβ”€β”€ modules/                  # πŸ““ Generated notebooks (learners work here)
β”‚   β”œβ”€β”€ 01_tensor/            # Auto-generated from src/
β”‚   β”‚   β”œβ”€β”€ tensor.ipynb      # Jupyter notebook for learning
β”‚   β”‚   β”œβ”€β”€ README.md         # Practical implementation guide
β”‚   β”‚   └── tensor.py         # Your implementation
β”‚   └── ...                   # (20 module directories)
β”‚
β”œβ”€β”€ quarto/                   # 🌐 Course website & documentation (Quarto)
β”‚   β”œβ”€β”€ index.qmd             # Landing page
β”‚   β”œβ”€β”€ _quarto.yml           # Site navigation & configuration
β”‚   β”œβ”€β”€ install.sh            # One-line installer (served at mlsysbook.ai/tinytorch/install.sh)
β”‚   β”œβ”€β”€ modules/              # Module chapter QMDs
β”‚   β”œβ”€β”€ milestones/           # Milestone chapter QMDs
β”‚   β”œβ”€β”€ tito/                 # tito CLI reference
β”‚   └── community/            # Community / about pages
β”‚
β”œβ”€β”€ milestones/               # πŸ† Historical ML evolution - prove what you built!
β”‚   β”œβ”€β”€ 01_1958_perceptron/   # Rosenblatt's first trainable network
β”‚   β”œβ”€β”€ 02_1969_xor/          # Minsky's challenge & multi-layer solution
β”‚   β”œβ”€β”€ 03_1986_mlp/          # Backpropagation & MNIST digits
β”‚   β”œβ”€β”€ 04_1998_cnn/          # LeCun's CNNs & CIFAR-10
β”‚   β”œβ”€β”€ 05_2017_transformer/  # Attention mechanisms & language
β”‚   └── 06_2018_mlperf/       # Modern optimization & profiling
β”‚
β”œβ”€β”€ tito/                     # πŸŽ›οΈ CLI tool for streamlined workflows
β”‚   β”œβ”€β”€ main.py               # Entry point
β”‚   β”œβ”€β”€ commands/             # 26 command modules
β”‚   └── core/                 # Core utilities
β”‚
β”œβ”€β”€ tinytorch/                # πŸ“¦ Generated package (import from here)
β”‚   β”œβ”€β”€ core/                 # Core ML components
β”‚   └── ...                   # Your built framework!
β”‚
└── tests/                    # βœ… Comprehensive test suite (600+ tests)
```

**Key workflow**: `src/*.py` β†’ `modules/*.ipynb` β†’ `tinytorch/*.py`

---

## Join the Community

TinyTorch is part of the [ML Systems Book](https://mlsysbook.ai) ecosystem. We're building an open community of learners and educators passionate about ML systems.

**Ways to get involved:**

- ⭐ Star this repo to show support
- πŸ’¬ Join [Discussions](https://github.com/harvard-edge/cs249r_book/discussions) to ask questions
- πŸ› Report issues or suggest improvements
- 🀝 Contribute modules, fixes, or documentation

See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.

---

## Related Projects

"TinyTorch" is a popular name for educational ML frameworks.
We acknowledge excellent projects with similar names:

- [tinygrad](https://github.com/tinygrad/tinygrad) - George Hotz's minimalist framework
- [micrograd](https://github.com/karpathy/micrograd) - Andrej Karpathy's tiny autograd
- [MiniTorch](https://minitorch.github.io/) - Cornell's educational framework

**Our TinyTorch** distinguishes itself through its 20-module curriculum, NBGrader integration, ML systems focus, and connection to the [ML Systems Book](https://mlsysbook.ai) ecosystem.

---

## Contributors

Thanks to these wonderful people who helped improve TinyTorch!

**Legend:** πŸͺ² Bug Hunter Β· πŸ§‘β€πŸ’» Code Warrior Β· ✍️ Documentation Hero Β· 🎨 Design Artist Β· 🧠 Idea Generator Β· πŸ”Ž Code Reviewer Β· πŸ§ͺ Test Engineer Β· πŸ› οΈ Tool Builder
- **Vijay Janapa Reddi**: πŸͺ² πŸ§‘β€πŸ’» 🎨 ✍️ 🧠 πŸ”Ž πŸ§ͺ πŸ› οΈ
- **kai**: πŸͺ² πŸ§‘β€πŸ’» 🎨 ✍️ πŸ§ͺ
- **Dang Truong**: πŸͺ² πŸ§‘β€πŸ’» ✍️ πŸ§ͺ
- **Farhan Asghar**: πŸͺ² πŸ§‘β€πŸ’» 🎨 ✍️
- **Rocky**: πŸͺ² πŸ§‘β€πŸ’» ✍️ πŸ§ͺ
- **Didier Durand**: πŸͺ² πŸ§‘β€πŸ’» ✍️
- **rnjema**: πŸ§‘β€πŸ’» ✍️ πŸ› οΈ
- **Pratham Chaudhary**: πŸͺ² πŸ§‘β€πŸ’» ✍️
- **Karthik Dani**: πŸͺ² πŸ§‘β€πŸ’»
- **Avik De**: πŸͺ² πŸ§ͺ
- **Takosaga**: πŸͺ² ✍️
- **joeswagson**: πŸ§‘β€πŸ’» πŸ› οΈ
- **AndreaMattiaGaravagno**: πŸ§‘β€πŸ’» ✍️
- **Rolds**: πŸͺ² πŸ§‘β€πŸ’»
- **asgalon**: πŸ§‘β€πŸ’» ✍️
- **Amir Alasady**: πŸͺ²
- **jettythek**: πŸ§‘β€πŸ’»
- **wzz**: πŸͺ²
- **Ng Bo Lin**: ✍️
- **keo-dara**: πŸͺ²
- **Wayne Norman**: πŸͺ²
- **Ilham Rafiqin**: πŸͺ²
- **Oscar Flores**: ✍️
- **harishb00a**: ✍️
- **Pastor Soto**: ✍️
- **Salman Chishti**: πŸ§‘β€πŸ’»
- **Aditya Mulik**: ✍️
- **Ademola Arigbabuwo**: ✍️
- **Yaroslav Halchenko**: πŸ§‘β€πŸ’»
- **Harish**: ✍️
**Recognize a contributor:** Comment on any issue or PR:

```text
@all-contributors please add @username for bug, code, doc, or ideas
```

---

## Acknowledgments

Created by [Prof. Vijay Janapa Reddi](https://vijay.seas.harvard.edu) at Harvard University.

---

## License

MIT License - see [LICENSE](LICENSE) for details.

---
[Full Documentation](https://mlsysbook.ai/tinytorch) ・ [Discussions](https://github.com/harvard-edge/cs249r_book/discussions) ・ [ML Systems Book](https://mlsysbook.ai)

**Start Small. Go Deep. Build ML Systems.**