Mirror of https://github.com/harvard-edge/cs249r_book.git (synced 2026-03-12 02:06:14 -05:00)
docs: improve figure captions with bold titles and descriptions
Updated all mermaid diagram captions across the site to follow a consistent format:
- Bold title followed by a period
- Descriptive explanation of the diagram
- Ends with a period

Files updated:
- big-picture.md: TinyTorch Module Flow
- getting-started.md: TinyTorch Build Cycle
- milestones.md: Pedagogical Acts, Journey Through ML History
- intro.md: Build-Use-Reflect Learning Cycle
- learning-journey.md: Six-Act Learning Narrative
- optimization.md: Optimization Module Flow, Production Timeline
- foundation.md: Module Dependencies, Tier Milestones
- architecture.md: Module Flow, Tier Milestones
- modules.md: Module Development Workflow
- data.md: Progress Tracking Flow

🤖 Generated with [Claude Code](https://claude.com/claude-code)
This commit is contained in:
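The caption convention applied throughout this commit can be illustrated with a minimal MyST mermaid directive (a hypothetical standalone example; the directive name, options, and node labels are assumptions, not taken from any one of the updated files):

````markdown
```{mermaid}
:align: center
:caption: "**Example Diagram.** A one-sentence description of what the diagram shows."

graph LR
    A[Start] --> B[Finish]
```
````

Note that the caption is quoted so the bold markers and internal punctuation survive option parsing.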
@@ -22,7 +22,7 @@ TinyTorch takes you from basic tensors to production-ready ML systems through 20
 ```{mermaid}
 :align: center
-:caption: TinyTorch Module Flow
+:caption: "**TinyTorch Module Flow.** The 20 modules progress through three tiers: Foundation (blue) builds core ML primitives, Architecture (purple) applies them to vision and language tasks, and Optimization (orange) makes systems production-ready."
 
 graph LR
     subgraph F["FOUNDATION (01-07)"]
@@ -11,7 +11,7 @@ TinyTorch's 20 modules follow a carefully crafted six-act narrative arc. Each ac
 ```{mermaid}
 :align: center
-:caption: Architecture Overview
+:caption: "**Six-Act Learning Narrative.** TinyTorch's 20 modules follow a carefully crafted arc from atomic components through production systems."
 
 graph LR
     Act1["Act I: Foundation<br/>01-04<br/>Atomic Components"] --> Act2["Act II: Learning<br/>05-07<br/>Gradient Revolution"]
     Act2 --> Act3["Act III: Data & Scale<br/>08-09<br/>Real Complexity"]
@@ -43,7 +43,7 @@ See [The Learning Journey](learning-journey.md) for the complete pedagogical nar
 ```{mermaid}
 :align: center
-:caption: Pedagogical Acts (What You're Learning)
+:caption: "**Pedagogical Acts and Historical Milestones.** Two dimensions of progress: Acts explain what you learn while milestones validate what you can build."
 
 graph TB
     subgraph "Pedagogical Acts (What You're Learning)"
         A1["Act I: Foundation<br/>Modules 01-04<br/>Atomic Components"]
@@ -100,7 +100,7 @@ graph TB
 ```{mermaid}
 :align: center
-:caption: Architecture Overview
+:caption: "**Journey Through ML History.** Six decades of breakthroughs you recreate with your own implementations, from the 1957 Perceptron to modern production systems."
 
 timeline
     title Journey Through ML History
     1957 : Perceptron : Binary classification with gradient descent
@@ -86,7 +86,7 @@ TinyTorch follows a simple three-step workflow that you'll repeat for each modul
 ```{mermaid}
 :align: center
-:caption: Architecture Overview
+:caption: "**TinyTorch Build Cycle.** The three-step workflow you repeat for each module: edit in Jupyter, export to the package, and validate with milestone scripts."
 
 graph LR
     A[1. Edit Module<br/>modules/NN_name.ipynb] --> B[2. Export to Package<br/>tito module complete N]
     B --> C[3. Validate with Milestones<br/>Run milestone scripts]
@@ -268,7 +268,7 @@ Every module follows a proven learning cycle that builds deep understanding:
 ```{mermaid}
 :align: center
-:caption: Architecture Overview
+:caption: "**Build-Use-Reflect Learning Cycle.** Every module follows this proven pattern: implement from scratch, apply to real problems, then answer systems thinking questions."
 
 graph LR
     B[Build<br/>Implement from scratch] --> U[Use<br/>Real data, real problems]
     U --> R[Reflect<br/>Systems thinking questions]
@@ -19,7 +19,7 @@ The Architecture tier teaches you how to build the neural network architectures
 ```{mermaid}
 :align: center
-:caption: Architecture Overview
+:caption: "**Architecture Module Flow.** Two parallel tracks branch from Foundation: vision (DataLoader, Convolutions) and language (Tokenization through Transformers)."
 
 graph TB
     F[Foundation<br/>Tensor, Autograd, Training]
@@ -124,7 +124,7 @@ graph TB
 ```{mermaid}
 :align: center
-:caption: Architecture Overview
+:caption: "**Architecture Tier Milestones.** After completing modules 08-13, you unlock computer vision (1998 CNN) and language understanding (2017 Transformer) breakthroughs."
 
 timeline
     title Historical Achievements Unlocked
     1998 : CNN Revolution : 75%+ accuracy on CIFAR-10 with spatial intelligence
@@ -19,7 +19,7 @@ The Foundation tier teaches you how to build a complete learning system from scr
 ```{mermaid}
 :align: center
-:caption: Architecture Overview
+:caption: "**Foundation Module Dependencies.** Tensors and activations feed into layers, which connect to losses and autograd, enabling optimizers and ultimately training loops."
 
 graph TB
     M01[01. Tensor<br/>Multidimensional arrays] --> M03[03. Layers<br/>Linear transformations]
     M02[02. Activations<br/>Non-linear functions] --> M03
@@ -125,7 +125,7 @@ graph TB
 ```{mermaid}
 :align: center
-:caption: Architecture Overview
+:caption: "**Foundation Tier Milestones.** After completing modules 01-07, you unlock three historical achievements spanning three decades of neural network breakthroughs."
 
 timeline
     title Historical Achievements Unlocked
     1957 : Perceptron : Binary classification with gradient descent
@@ -19,7 +19,7 @@ The Optimization tier teaches you how to make ML systems fast, small, and deploy
 ```{mermaid}
 :align: center
-:caption: Architecture Overview
+:caption: "**Optimization Module Flow.** Starting from profiling, two parallel tracks address size reduction (quantization, compression) and speed improvement (memoization, acceleration), converging at benchmarking."
 
 graph TB
     A[Architecture<br/>CNNs + Transformers]
@@ -138,7 +138,7 @@ graph TB
 ```{mermaid}
 :align: center
-:caption: Architecture Overview
+:caption: "**Production Optimization Timeline.** Progressive improvements from baseline to production-ready: 8-16× smaller models and 12-40× faster inference."
 
 timeline
     title Production-Ready Systems
     Baseline : 100MB model, 0.5 tokens/sec, 95% accuracy
@@ -13,7 +13,7 @@ TinyTorch uses a clean, simple approach to track your ML systems engineering jou
 ```{mermaid}
 :align: center
-:caption: Architecture Overview
+:caption: "**Progress Tracking Flow.** Build modules, export to package, unlock historical milestones, and track achievements through two parallel systems."
 
 graph LR
     A[Build Modules] --> B[Complete 01-20]
     B --> C[Export to Package]
@@ -13,7 +13,7 @@ TinyTorch follows a simple build-export-validate cycle:
 ```{mermaid}
 :align: center
-:caption: Architecture Overview
+:caption: "**Module Development Workflow.** The core cycle for building TinyTorch: start a module, edit in Jupyter, export to the package, test your imports, then move to the next module."
 
 graph LR
     A[Start/Resume Module] --> B[Edit in Jupyter]
     B --> C[Complete & Export]
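Conformance to the caption convention described in the commit message (bold title ending in a period, followed by a descriptive sentence that also ends in a period) can be sketched as a small checker. This is a hypothetical helper for illustration only; `caption_ok` and its regex are not part of the repository.

```python
import re

# Matches the convention from this commit:
#   **Bold Title.** Descriptive sentence ending with a period.
CAPTION_RE = re.compile(r'^\*\*[^*]+\.\*\* .+\.$')


def caption_ok(caption: str) -> bool:
    """Return True if a mermaid caption follows the bold-title convention."""
    return bool(CAPTION_RE.match(caption))


print(caption_ok("**TinyTorch Module Flow.** The 20 modules progress through three tiers."))
print(caption_ok("Architecture Overview"))
```

Running such a check over the `:caption:` lines of each updated file would flag any diagram still carrying an old plain-text caption.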