mirror of
https://github.com/harvard-edge/cs249r_book.git
synced 2026-05-07 18:18:42 -05:00
Welcome page and several adjacent surfaces had dark-on-dark text in dark
mode after the v0.1.10 release. Two root causes:
1. preface.qmd used inline `style="background: #..."` divs for the
"Why does this matter?" callout, the MLSysBook/TinyTorch comparison,
and the "Who This Is For" persona grid. Inline styles beat CSS
class specificity, so dark-mode rules were ignored.
2. dark-mode.scss had no overrides for several components defined in
style.scss with hardcoded light colors: .comparison-title, the
.tier-{foundation,architecture,optimization,olympics} cards, the
.milestone-card family, .ml-timeline-tech, .preview-badge, and the
callout-body / callout-content text-color cascade.
Changes:
- Refactor the three preface.qmd blocks to use existing .callout-note
and .who-card / .who-grid classes (kept colored left-borders inline
since they are mode-invariant accents).
- Append dark-mode rules covering every selector identified above.
Mirrors the .callout-body !important cascade pattern from
shared/styles/_ecosystem-base-dark.scss:577-584.
PDF branches (when-format="pdf") untouched. Light mode unchanged.
237 lines
9.8 KiB
Plaintext

---
title: "Welcome"
---

<div style="text-align: center; margin: 1rem 0 2rem 0;">
<img src="assets/images/tito.png" alt="TinyTorch" style="width: 100%;">
</div>

Everyone wants to be an astronaut. Very few want to be the rocket scientist.

Machine learning is no different. Everyone wants to train models, run inference, deploy AI. Few want to understand how the frameworks actually work. Fewer still want to build one.

The world has plenty of users. It does not have enough builders---people who can debug, optimize, and adapt systems when the black box breaks down.

TinyTorch is for the builders.

## The Problem

Most people can use PyTorch or TensorFlow. They can import libraries, call functions, train models. But very few understand how these frameworks work: how memory is managed for tensors, how autograd builds computation graphs, how optimizers update parameters. And almost no one has a guided, structured way to learn that from the ground up.

::: {.content-visible when-format="html"}

::: {.callout-note}
**Why does this matter?** Because users hit walls that builders do not:

- When your model runs out of memory, you need to understand **tensor allocation**
- When gradients explode, you need to understand the **computation graph**
- When training is slow, you need to understand where the **bottlenecks** are
- When deploying on a microcontroller, you need to know what can be **stripped away**

The framework becomes a black box you cannot debug, optimize, or adapt. You are stuck waiting for someone else to solve your problem.
:::

:::

::: {.content-visible when-format="pdf"}

Users hit walls that builders do not:

- Out of memory? You need to understand tensor allocation.
- Gradients exploding? You need to understand the computation graph.
- Training too slow? You need to find the bottleneck.
- Deploying on a microcontroller? You need to know what can be stripped away.

The framework becomes a black box you cannot debug, optimize, or adapt. You are stuck waiting for someone else to solve your problem.

:::

Students cannot learn this from production code. PyTorch is too large, too complex, too optimized. Tens of thousands of lines of C++ across hundreds of files. No one learns to build rockets by studying the Saturn V.

They also cannot learn it from toy scripts. A hundred-line neural network does not reveal the architecture of a framework. It hides it.

## The Solution: AI Bricks

TinyTorch teaches you the **AI bricks**---the stable engineering foundations you can use to build any AI system. Small enough to learn from: bite-sized code that runs even on a Raspberry Pi. Big enough to matter: showing the real architecture of how frameworks are built.

::: {.content-visible when-format="html"}

<div class="who-grid">

<div class="who-card" style="border-left: 4px solid #1976d2;">
<strong>📖 MLSysBook</strong>
<p>The <a href="https://mlsysbook.ai">Machine Learning Systems</a> textbook teaches you the <em>concepts</em> of the rocket ship: propulsion, guidance, life support.</p>
</div>

<div class="who-card" style="border-left: 4px solid #ff8247;">
<strong>TinyTorch</strong>
<p>TinyTorch is where you actually <em>build</em> a small rocket with your own hands. Not a toy---a real framework.</p>
</div>

</div>

:::

::: {.content-visible when-format="pdf"}

**MLSysBook** --- the [Machine Learning Systems](https://mlsysbook.ai) textbook teaches the *concepts* of the rocket ship: propulsion, guidance, life support.

**TinyTorch** --- where you actually *build* a small rocket with your own hands. Not a toy. A real framework.

:::

This is how you move from *using* machine learning to *engineering* it---from running code in a notebook to designing the systems that run underneath.

## Who This Is For

::: {.content-visible when-format="html"}

<div class="who-grid">

<div class="who-card" style="border-left: 4px solid #9c27b0;">
<strong>Students & Researchers</strong>
<p>Want to understand ML systems deeply, not just use them superficially. If you've ever wondered "how does that actually work?", this is for you.</p>
</div>

<div class="who-card" style="border-left: 4px solid #4caf50;">
<strong>ML Engineers</strong>
<p>Need to debug, optimize, and deploy models in production. Understanding the systems underneath makes you more effective.</p>
</div>

<div class="who-card" style="border-left: 4px solid #2196f3;">
<strong>Systems Programmers</strong>
<p>Understand memory hierarchies, computational complexity, and performance optimization, and want to apply that knowledge to ML.</p>
</div>

<div class="who-card" style="border-left: 4px solid #ffc107;">
<strong>Self-taught Engineers</strong>
<p>Can use frameworks but want to know how they work, or are preparing for ML infrastructure roles that demand systems-level understanding.</p>
</div>

</div>

:::

::: {.content-visible when-format="pdf"}

**Students & Researchers** --- want to understand ML systems deeply, not just use them superficially. If you've ever wondered "how does that actually work?", this is for you.

**ML Engineers** --- need to debug, optimize, and deploy models in production. Understanding the systems underneath makes you more effective.

**Systems Programmers** --- understand memory hierarchies, computational complexity, and performance optimization, and want to apply that knowledge to ML.

**Self-taught Engineers** --- can use frameworks but want to know how they work, or are preparing for ML infrastructure roles that demand systems-level understanding.

:::

What you need is not another API tutorial. You need to build.

## What You Will Build

By the end of TinyTorch, you will have implemented:

- A tensor library with broadcasting, reshaping, and matrix operations
- Activation functions with numerical stability considerations
- Neural network layers: linear, convolutional, normalization
- An autograd engine that builds computation graphs and computes gradients
- Optimizers that update parameters using those gradients
- Data loaders that handle batching, shuffling, and preprocessing
- A complete training loop that ties everything together
- Tokenizers, embeddings, attention, and transformer architectures
- Profiling, quantization, and optimization techniques

Not a simulation. The actual architecture of modern ML frameworks, implemented at a scale you can hold in your head.
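
The autograd engine in that list is the heart of the build. As a taste of the core idea (a minimal sketch for illustration only, not TinyTorch's actual API), a scalar value can record its parents and a local backward rule, and backpropagation becomes a reverse walk over the resulting graph:

```python
# Hypothetical miniature of an autograd engine: each Value remembers how it
# was produced, so gradients can flow backward through the graph.
class Value:
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward_fn = None

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def backward_fn():
            # Chain rule: d(a*b)/da = b, d(a*b)/db = a.
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward_fn = backward_fn
        return out

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def backward_fn():
            # Addition passes the upstream gradient through unchanged.
            self.grad += out.grad
            other.grad += out.grad
        out._backward_fn = backward_fn
        return out

    def backward(self):
        # Topologically sort the graph, then apply local rules in reverse.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            if v._backward_fn:
                v._backward_fn()

# y = w * x + b, so dy/dw = x, dy/dx = w, dy/db = 1.
x, w, b = Value(2.0), Value(3.0), Value(1.0)
y = w * x + b
y.backward()
print(w.grad, x.grad, b.grad)  # 2.0 3.0 1.0
```

The full engine you build in TinyTorch generalizes this same pattern to tensors, but the reverse-graph walk is already all here.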

## How to Learn

Each module follows a **Build-Use-Reflect** cycle: implement from scratch, apply to real problems, then connect what you built to production systems and understand the tradeoffs. Work through Foundation first, then choose your path based on your interests.

::: {.content-visible when-format="html"}

<div style="display: grid; grid-template-columns: 1fr 1fr; gap: 1rem; margin: 1.5rem 0;">

<div style="border: 1px solid #e0e0e0; padding: 1rem; border-radius: 0.5rem;">
<strong>Type every line yourself</strong>
<p style="margin: 0.5rem 0 0 0; font-size: 0.9rem; color: #666;">Do not copy-paste. The learning happens in the struggle of implementation.</p>
</div>

<div style="border: 1px solid #e0e0e0; padding: 1rem; border-radius: 0.5rem;">
<strong>Profile your code</strong>
<p style="margin: 0.5rem 0 0 0; font-size: 0.9rem; color: #666;">Use the built-in profiling tools. Measure first, optimize second.</p>
</div>

<div style="border: 1px solid #e0e0e0; padding: 1rem; border-radius: 0.5rem;">
<strong>Run the tests</strong>
<p style="margin: 0.5rem 0 0 0; font-size: 0.9rem; color: #666;">Every module ships with tests. When they pass, you have built something real.</p>
</div>

<div style="border: 1px solid #e0e0e0; padding: 1rem; border-radius: 0.5rem;">
<strong>Compare with PyTorch</strong>
<p style="margin: 0.5rem 0 0 0; font-size: 0.9rem; color: #666;">Once your implementation works, compare it with PyTorch's equivalent to see how production frameworks scale the same ideas.</p>
</div>

</div>

:::

::: {.content-visible when-format="pdf"}

**Type every line yourself** --- do not copy-paste. The learning happens in the struggle of implementation.

**Profile your code** --- use the built-in profiling tools. Measure first, optimize second.

**Run the tests** --- every module ships with tests. When they pass, you have built something real.

**Compare with PyTorch** --- once your implementation works, compare it with PyTorch's equivalent to see how production frameworks scale the same ideas.

:::
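
The "measure first" habit needs nothing fancy to get started. A minimal sketch (illustrative only, not TinyTorch's own profiler) times two implementations of the same dot product before deciding which one deserves optimization:

```python
# Measure first, optimize second: time two versions of the same operation.
import time

def dot_loop(a, b):
    # Explicit Python loop: every step visible.
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

def dot_sum(a, b):
    # Same math expressed through the built-in sum() reduction.
    return sum(x * y for x, y in zip(a, b))

a = [float(i) for i in range(100_000)]
b = [float(i) for i in range(100_000)]

for fn in (dot_loop, dot_sum):
    start = time.perf_counter()
    result = fn(a, b)
    elapsed_ms = (time.perf_counter() - start) * 1e3
    print(f"{fn.__name__}: result={result:.6e}, {elapsed_ms:.2f} ms")
```

Only after the numbers show where time actually goes is it worth rewriting anything.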

Take your time. The goal is not to finish fast. The goal is to understand deeply.

::: {.content-visible when-format="html"}

<div style="background: linear-gradient(135deg, #1e293b 0%, #0f172a 100%); padding: 1.5rem 2rem; border-radius: 0.5rem; margin: 2rem 0; text-align: center;">
<p style="color: #f8fafc; font-size: 1.2rem; font-style: italic; margin: 0; font-weight: 500;">"Building systems creates irreversible understanding."</p>
</div>

:::

::: {.content-visible when-format="pdf"}

> *Building systems creates irreversible understanding.*

:::

## The Bigger Picture

TinyTorch is one half of a two-book sequence. The [Machine Learning Systems](https://mlsysbook.ai) textbook teaches the concepts: how training works, why GPUs matter, what makes inference cheap or expensive. TinyTorch makes you build it. Together, they form a complete path into ML systems engineering.

This approach follows a long tradition in systems education: SICP's "build to understand" philosophy, xv6's transparent operating system, Nachos, Pintos. The pedagogical principles behind TinyTorch are detailed in our [research paper](https://arxiv.org/pdf/2601.19107), which positions this work within decades of CS education research.

The next generation of engineers cannot rely on magic. They need to see how everything fits together, from a single tensor allocation up to a full training loop, and feel that the systems running modern AI are not an unreachable tower but something they can open, shape, and rebuild.

That is what TinyTorch offers: the confidence that comes from having built it yourself.

*Prof. Vijay Janapa Reddi*<br>
*(Harvard University)*<br>
*2025*

## What's Next?

**[See the Big Picture →](big-picture.qmd)** --- How all 20 modules connect, what you'll build, and which path to take.