From 7432aae15c97d24da60b3a93f6fb45c59cbff08b Mon Sep 17 00:00:00 2001
From: Vijay Janapa Reddi
Date: Sun, 14 Dec 2025 15:05:13 -0500
Subject: [PATCH] docs: improve figure captions with bold titles and descriptions
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Updated all mermaid diagram captions across the site to follow a
consistent format:

- Bold title followed by a period
- Descriptive explanation of the diagram
- Ends with a period

Files updated:

- big-picture.md: TinyTorch Module Flow
- getting-started.md: TinyTorch Build Cycle
- milestones.md: Pedagogical Acts, Journey Through ML History
- intro.md: Build-Use-Reflect Learning Cycle
- learning-journey.md: Six-Act Learning Narrative
- optimization.md: Optimization Module Flow, Production Timeline
- foundation.md: Module Dependencies, Tier Milestones
- architecture.md: Module Flow, Tier Milestones
- modules.md: Module Development Workflow
- data.md: Progress Tracking Flow

🤖 Generated with [Claude Code](https://claude.com/claude-code)
---
 tinytorch/site/big-picture.md               | 2 +-
 tinytorch/site/chapters/learning-journey.md | 2 +-
 tinytorch/site/chapters/milestones.md       | 4 ++--
 tinytorch/site/getting-started.md           | 2 +-
 tinytorch/site/intro.md                     | 2 +-
 tinytorch/site/tiers/architecture.md        | 4 ++--
 tinytorch/site/tiers/foundation.md          | 4 ++--
 tinytorch/site/tiers/optimization.md        | 4 ++--
 tinytorch/site/tito/data.md                 | 2 +-
 tinytorch/site/tito/modules.md              | 2 +-
 10 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/tinytorch/site/big-picture.md b/tinytorch/site/big-picture.md
index ed8884037..5bf5c1bf9 100644
--- a/tinytorch/site/big-picture.md
+++ b/tinytorch/site/big-picture.md
@@ -22,7 +22,7 @@ TinyTorch takes you from basic tensors to production-ready ML systems through 20
 ```{mermaid}
 :align: center
-:caption: TinyTorch Module Flow
+:caption: "**TinyTorch Module Flow.** The 20 modules progress through three tiers: Foundation (blue) builds core ML primitives, Architecture (purple) applies them to vision and language tasks, and Optimization (orange) makes systems production-ready."
 graph LR
     subgraph F["FOUNDATION (01-07)"]
diff --git a/tinytorch/site/chapters/learning-journey.md b/tinytorch/site/chapters/learning-journey.md
index 97148ba08..7eda423fb 100644
--- a/tinytorch/site/chapters/learning-journey.md
+++ b/tinytorch/site/chapters/learning-journey.md
@@ -11,7 +11,7 @@ TinyTorch's 20 modules follow a carefully crafted six-act narrative arc. Each ac
 ```{mermaid}
 :align: center
-:caption: Architecture Overview
+:caption: "**Six-Act Learning Narrative.** TinyTorch's 20 modules follow a carefully crafted arc from atomic components through production systems."
 graph LR
     Act1["Act I: Foundation<br/>01-04<br/>Atomic Components"] --> Act2["Act II: Learning<br/>05-07<br/>Gradient Revolution"]
     Act2 --> Act3["Act III: Data & Scale<br/>08-09<br/>Real Complexity"]
diff --git a/tinytorch/site/chapters/milestones.md b/tinytorch/site/chapters/milestones.md
index ea8f4fc08..becb65ce1 100644
--- a/tinytorch/site/chapters/milestones.md
+++ b/tinytorch/site/chapters/milestones.md
@@ -43,7 +43,7 @@ See [The Learning Journey](learning-journey.md) for the complete pedagogical nar
 ```{mermaid}
 :align: center
-:caption: Pedagogical Acts (What You're Learning)
+:caption: "**Pedagogical Acts and Historical Milestones.** Two dimensions of progress: Acts explain what you learn while milestones validate what you can build."
 graph TB
     subgraph "Pedagogical Acts (What You're Learning)"
         A1["Act I: Foundation<br/>Modules 01-04<br/>Atomic Components"]
@@ -100,7 +100,7 @@ graph TB
 ```{mermaid}
 :align: center
-:caption: Architecture Overview
+:caption: "**Journey Through ML History.** Six decades of breakthroughs you recreate with your own implementations, from the 1957 Perceptron to modern production systems."
 timeline
     title Journey Through ML History
     1957 : Perceptron : Binary classification with gradient descent
diff --git a/tinytorch/site/getting-started.md b/tinytorch/site/getting-started.md
index 9a292673b..46c675c33 100644
--- a/tinytorch/site/getting-started.md
+++ b/tinytorch/site/getting-started.md
@@ -86,7 +86,7 @@ TinyTorch follows a simple three-step workflow that you'll repeat for each modul
 ```{mermaid}
 :align: center
-:caption: Architecture Overview
+:caption: "**TinyTorch Build Cycle.** The three-step workflow you repeat for each module: edit in Jupyter, export to the package, and validate with milestone scripts."
 graph LR
     A[1. Edit Module<br/>modules/NN_name.ipynb] --> B[2. Export to Package<br/>tito module complete N]
     B --> C[3. Validate with Milestones<br/>Run milestone scripts]
diff --git a/tinytorch/site/intro.md b/tinytorch/site/intro.md
index 1ef6dcd14..7337de8d3 100644
--- a/tinytorch/site/intro.md
+++ b/tinytorch/site/intro.md
@@ -268,7 +268,7 @@ Every module follows a proven learning cycle that builds deep understanding:
 ```{mermaid}
 :align: center
-:caption: Architecture Overview
+:caption: "**Build-Use-Reflect Learning Cycle.** Every module follows this proven pattern: implement from scratch, apply to real problems, then answer systems thinking questions."
 graph LR
     B[Build<br/>Implement from scratch] --> U[Use<br/>Real data, real problems]
     U --> R[Reflect<br/>Systems thinking questions]
diff --git a/tinytorch/site/tiers/architecture.md b/tinytorch/site/tiers/architecture.md
index c7afbff70..4667266a5 100644
--- a/tinytorch/site/tiers/architecture.md
+++ b/tinytorch/site/tiers/architecture.md
@@ -19,7 +19,7 @@ The Architecture tier teaches you how to build the neural network architectures
 ```{mermaid}
 :align: center
-:caption: Architecture Overview
+:caption: "**Architecture Module Flow.** Two parallel tracks branch from Foundation: vision (DataLoader, Convolutions) and language (Tokenization through Transformers)."
 graph TB
     F[Foundation<br/>Tensor, Autograd, Training]
@@ -124,7 +124,7 @@ graph TB
 ```{mermaid}
 :align: center
-:caption: Architecture Overview
+:caption: "**Architecture Tier Milestones.** After completing modules 08-13, you unlock computer vision (1998 CNN) and language understanding (2017 Transformer) breakthroughs."
 timeline
     title Historical Achievements Unlocked
     1998 : CNN Revolution : 75%+ accuracy on CIFAR-10 with spatial intelligence
diff --git a/tinytorch/site/tiers/foundation.md b/tinytorch/site/tiers/foundation.md
index 89707c331..469f83cc0 100644
--- a/tinytorch/site/tiers/foundation.md
+++ b/tinytorch/site/tiers/foundation.md
@@ -19,7 +19,7 @@ The Foundation tier teaches you how to build a complete learning system from scr
 ```{mermaid}
 :align: center
-:caption: Architecture Overview
+:caption: "**Foundation Module Dependencies.** Tensors and activations feed into layers, which connect to losses and autograd, enabling optimizers and ultimately training loops."
 graph TB
     M01[01. Tensor<br/>Multidimensional arrays] --> M03[03. Layers<br/>Linear transformations]
     M02[02. Activations<br/>Non-linear functions] --> M03
@@ -125,7 +125,7 @@ graph TB
 ```{mermaid}
 :align: center
-:caption: Architecture Overview
+:caption: "**Foundation Tier Milestones.** After completing modules 01-07, you unlock three historical achievements spanning three decades of neural network breakthroughs."
 timeline
     title Historical Achievements Unlocked
     1957 : Perceptron : Binary classification with gradient descent
diff --git a/tinytorch/site/tiers/optimization.md b/tinytorch/site/tiers/optimization.md
index 1dbcf4800..1f0fa657f 100644
--- a/tinytorch/site/tiers/optimization.md
+++ b/tinytorch/site/tiers/optimization.md
@@ -19,7 +19,7 @@ The Optimization tier teaches you how to make ML systems fast, small, and deploy
 ```{mermaid}
 :align: center
-:caption: Architecture Overview
+:caption: "**Optimization Module Flow.** Starting from profiling, two parallel tracks address size reduction (quantization, compression) and speed improvement (memoization, acceleration), converging at benchmarking."
 graph TB
     A[Architecture<br/>CNNs + Transformers]
@@ -138,7 +138,7 @@ graph TB
 ```{mermaid}
 :align: center
-:caption: Architecture Overview
+:caption: "**Production Optimization Timeline.** Progressive improvements from baseline to production-ready: 8-16× smaller models and 12-40× faster inference."
 timeline
     title Production-Ready Systems
     Baseline : 100MB model, 0.5 tokens/sec, 95% accuracy
diff --git a/tinytorch/site/tito/data.md b/tinytorch/site/tito/data.md
index e271a5a36..c2fd18158 100644
--- a/tinytorch/site/tito/data.md
+++ b/tinytorch/site/tito/data.md
@@ -13,7 +13,7 @@ TinyTorch uses a clean, simple approach to track your ML systems engineering jou
 ```{mermaid}
 :align: center
-:caption: Architecture Overview
+:caption: "**Progress Tracking Flow.** Build modules, export to package, unlock historical milestones, and track achievements through two parallel systems."
 graph LR
     A[Build Modules] --> B[Complete 01-20]
     B --> C[Export to Package]
diff --git a/tinytorch/site/tito/modules.md b/tinytorch/site/tito/modules.md
index 946b85d7b..1c999b381 100644
--- a/tinytorch/site/tito/modules.md
+++ b/tinytorch/site/tito/modules.md
@@ -13,7 +13,7 @@ TinyTorch follows a simple build-export-validate cycle:
 ```{mermaid}
 :align: center
-:caption: Architecture Overview
+:caption: "**Module Development Workflow.** The core cycle for building TinyTorch: start a module, edit in Jupyter, export to the package, test your imports, then move to the next module."
 graph LR
     A[Start/Resume Module] --> B[Edit in Jupyter]
     B --> C[Complete & Export]