Cross-reference audit subagent: scanned all 30 scoped .qmd files for
orphan table / figure / listing labels (a caption defines {#tbl-...},
{#fig-...}, or {#lst-...} but no @label reference in the prose points
to it). Added natural
references for orphans so every labeled artifact is now introduced in
the surrounding text.
Final counts: 247 labels defined, 216 refs used (87% coverage). The
remaining ~30 orphans were either self-describing (milestone-result
tables whose placement is obvious from context) or sat in sections I
left untouched to preserve the existing voice.
Also included: tiers-optimization-dependencies.svg updates from the
earlier Gemini consistency audit that had been left uncommitted.
Audit report at .claude/_reviews/crossref-audit-report.md.
Wave 4 editorial content across 20 modules + new glossary back matter:
1. Module opener hooks (20 new 2-3 sentence paragraphs between the
chapter heading and Module Info callout). Every hook LEADS with
the systems angle (memory, bandwidth, arithmetic intensity,
cache, HBM, roofline, KV cache, hardware utilization, etc.) and
connects back to the ML story. Reinforces that this is a lab
guide for ML systems, not an ML-theory textbook.
2. Code-listing captions on substantive code blocks (roughly >10
lines, defines a class/function/algorithm). Populates Quarto's
List of Listings front matter. Combined across F1/F2/L/O
subagent waves: roughly 60 listings now carry
'**Listing N.M — Brief description**' captions.
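For reference, the shape a captioned listing takes (label, caption
text, and code are illustrative; Quarto's lst-label / lst-cap cell
options are what feed the List of Listings):

````markdown
```{python}
#| lst-label: lst-03-dense-forward
#| lst-cap: "**Listing 3.2 — Brief description of the block**"
def forward(self, x):
    return x @ self.W + self.b
```
````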
3. Figure alt-text audit across 20 module diagrams. Most already
carried objective, specific alt-text; a handful were rewritten
for precision.
4. Glossary as back matter (tinytorch/quarto/glossary.qmd + registered
in pdf/_quarto.yml). 90 alphabetical entries spanning tensor /
memory / autograd / training / architecture / optimization
terms. One-sentence definitions. Module cross-references where
the term is central. Lab-guide voice, not dictionary.
5. Style discipline: no em-dashes in prose (the caption template
'— Description' is the only exception, required by the parser).
All agent outputs and the hand-revised hooks audited for em-dash
use.
6. SVG trailing-newline hygiene: 8 SVGs touched by the Gemini style
audit had lost their trailing newline. Restored per the SVG
file-hygiene rule.
Subagent A: add Quarto caption + {#tbl-...} label to every pipe table
so they index into the List of Tables (previously empty). 151 tables
across 28 files: 20 modules, 7 milestones, big-picture,
getting-started, conclusion. Caption style: single bold sentence
under 15 words; labels namespaced by module number.
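Illustrative shape of the caption line (table content and label are
made up; the `: caption {#tbl-...}` line under a pipe table is Quarto's
table-caption syntax, with the label namespaced by module number):

```markdown
| Stage  | FLOPs | Bytes moved  |
|--------|-------|--------------|
| matmul | 2mnk  | mk + kn + mn |

: **Matrix multiply dominates both compute and traffic.** {#tbl-03-matmul-costs}
```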
Subagent B: hyperlink well-known external references in Further
Reading / Additional Resources sections. 20 links added spanning
arXiv papers (Attention Is All You Need, Scaling Laws, Deep
Compression, AlexNet, LeNet, Rosenblatt 1958, Rumelhart/Hinton/
Williams 1986), Jay Alammar's Illustrated Transformer / Word2Vec,
Karpathy's Recipe for Training Neural Nets, Ruder's gradient descent
overview, Jurafsky & Martin SLP3, Cybenko's UAT, Drepper cache paper,
HuggingFace Transformers.
6 references left unlinked (physical textbooks and ambiguous blog
posts where canonical URL couldn't be verified).
Audit found two inconsistencies that this fixes:
1. Systems Implication callouts split 9 note / 12 warning — the
12 warning instances were pre-existing 'legacy' classification.
Per the preamble convention (Systems Implication = insight =
note, not warning), convert all 12 to callout-note. Result:
21/21 Systems Implication callouts now use callout-note (blue
bar), consistent semantic signal across all modules.
2. Answer callouts (101 instances) were callout-note. Since
Check-Your-Understanding wrappers are callout-tip (green), and
answers are the 'reveal' to questions (productive/reward
semantic, not neutral info), switch all 101 to callout-tip
collapse=true. Green callout feels like reveal, matches CYU's
visual language, and distinguishes answers from the 149 generic
notes used for Coming-Up/Module-Info/Further-Reading boxes.
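For reference, the target shape of an Answer callout after the swap
(title and body text are illustrative):

```markdown
::: {.callout-tip collapse="true"}
## Answer
Body of the answer goes here; collapsed until the reader reveals it.
:::
```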
Final callout inventory across 20 modules:
callout-note ~140 (Systems Implication, Coming-Up, Further-
Reading, Module-Info, Historical Context)
callout-tip ~180 (Check Your Understanding wrappers,
Answers, Key Takeaways titles)
callout-warning ~22 (Save-Your-Progress, Performance-Note,
other warnings — not Systems Implication)
No content change — only class swaps. Titles untouched.
Four parallel subagents completed the content work derived from Wave 2
audits. Each of the 20 module files now has:
- A Check Your Understanding callout (callout-tip) at chapter end,
with 3-5 checkboxes keyed to that chapter's unique content.
Each checkbox targets a specific concept rather than generic
'did you understand this' wording. Callout titles are
text-only ('Check Your Understanding — <Module>') — no emojis,
survives the strip filter, renders identically cross-platform.
- A Key Takeaways section (3-4 bullet recap + Coming-next hook
to the next module) inserted before Further Reading / What's
Next. Serves as the section students flip back to when reviewing.
Per-module audit gap fills:
- 01_tensor: normalized to canonical 12-section order; added
Get Started section; removed duplicate Further Reading fragment.
- 04_losses: added explicit O(B × C) complexity line in Core Concepts.
- 07_optimizers: added SGD O(P) / Adam O(P) + O(2P) memory line.
- 11_embeddings: added Systems Implication callout (sparse gradients,
HBM layout, distributed sharding).
- 13_transformers: added O(N² · d) attention complexity + Systems
Implication callout (KV-cache memory growth under autoregressive
decoding).
- 14_profiling: added FLOP/bandwidth complexity framing + Systems
Implication callout (reading the roofline as a decision tree).
- 15_quantization: added constant-factor speedup complexity framing.
- 16_compression: added pruning O(N log N) / compression ratio math.
- 17_acceleration: added fused-kernel memory-complexity reduction.
- 19_benchmarking: added Little's Law L = λW block equation.
- 20_capstone: added end-to-end inference complexity decomposition,
  mapping each optimization module to the term it attacks.
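The Little's Law line is worth a sanity check; a throwaway sketch with
made-up numbers (the function name is mine):

```python
def littles_law_occupancy(arrival_rate: float, latency: float) -> float:
    """L = lambda * W: mean requests in flight for a steady-state system."""
    return arrival_rate * latency

# e.g. 200 req/s at 50 ms mean latency keeps ~10 requests in flight
```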
20 modules × ~24 lines added on average (~480 lines of new pedagogy).
All callout titles are text-only. No ::: fence regressions.
Audit trail:
.claude/_reviews/section-consistency-report.md
.claude/_reviews/systems-emphasis-report.md
.claude/_reviews/wave-plan.md
Audit findings from .claude/_reviews/section-consistency-report.md and
systems-emphasis-report.md:
- Strip leading emoji prefixes from Systems Implication callout titles
across 17 modules. strip-emojis.lua removes them from PDF render
anyway (XeLaTeX + Latin Modern fonts don't cover emoji ranges
cross-platform), so source now matches what ships.
- Remove duplicate trailing '## Get Started' section from modules
10, 11, 13, 14 — copy-paste artifacts.
- Update pdf/_quarto.yml preamble comment: callout conventions
are class + title-word, no emoji prefixes.
* polish(tinytorch/diagrams): align 23 Lab Guide SVGs with book style guide
The hand-authored diagrams under tinytorch/quarto/assets/images/diagrams/
were drifting from the ML Systems SVG style guide — they used Tailwind
CSS greys and custom oranges instead of the book palette. This made them
visually clash with the canonical big-picture-module-flow.svg (under
assets/images/svg/) and with every other diagram in the textbook.
Normalize to book style:
- Fills: #f4f5f7, #f8f9fa, #e5e7eb → #f7f7f7 / #bbb (book neutrals)
- Text: #1f2937, #6b7280 → #333 / #999 (book text hierarchy)
- Arrows: #9ca3af → #555 (book neutral stroke)
- Accent orange: #fff1e8 / #ff8246 → #fdebd0 / #c87b2a (book routing)
- Corner radius: rx="2" → rx="4" per style-guide standard
- Stroke width: flat "1" → "1.2" (secondary tier per style guide)
- Section titles: font-size="11" (bold) → "12" per style-guide headers
No content, layout, or positional changes — only style-token swaps.
All 23 SVGs re-validated as well-formed XML.
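The swap is mechanical; a sketch of the kind of pass involved (the
token map mirrors the list above, though the exact old-to-new pairing
of neutrals is my reading of it, and the rx / stroke-width / font-size
changes are attribute edits this sketch omits):

```python
# Hypothetical token map: Tailwind-ish values -> book style tokens
STYLE_TOKENS = {
    "#f4f5f7": "#f7f7f7",
    "#f8f9fa": "#f7f7f7",
    "#e5e7eb": "#bbb",
    "#1f2937": "#333",
    "#6b7280": "#999",
    "#9ca3af": "#555",
    "#fff1e8": "#fdebd0",
    "#ff8246": "#c87b2a",
}

def normalize_svg(text: str) -> str:
    """Swap style tokens only; geometry and content stay untouched."""
    for old, new in STYLE_TOKENS.items():
        text = text.replace(old, new)
    return text
```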
Side effect: the orphan diagrams/00_big-picture-module-flow.svg is also
normalized. The canonical big-picture is under assets/images/svg/ and
was already book-styled (not touched by this pass).
* polish(tinytorch/01_tensor): port Cache Tiling systems callout from ABOUT.md
The stashed work from the tinytorch-updates branch included a Cache
Tiling "Systems Implication" callout for Module 01, but it lived in
tinytorch/src/01_tensor/ABOUT.md — a path that was retired when the
module docs consolidated into tinytorch/quarto/modules/. The companion
"Contiguous Memory & Strides" callout from the same stash already made
it over during the consolidation; this one got left behind.
Insert the callout right after the matmul O(n³) explanation (the
natural bridge into "why does the naive loop lose to BLAS?"), and
before the Shape Manipulation section. Progressive-disclosure check:
every concept the callout introduces (cache misses, BLAS, O(n³) vs
O(n²)) is already on the page or introduced inline in the callout
itself (Memory-Bound / Compute-Bound are defined parenthetically).
The pattern matches the existing book-style sidebar on line 587.
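The bridge the callout makes (naive loops lose to cache-aware code on
the same O(n³) work) can be sketched as a blocked matmul; tile size and
names are illustrative, and real BLAS does far more than this:

```python
def matmul_tiled(A, B, tile=64):
    """Blocked O(n^3) matmul on square matrices: each tile of A and B is
    reused many times while it is still cache-resident, cutting misses
    without changing the operation count."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i0 in range(0, n, tile):
        for k0 in range(0, n, tile):
            for j0 in range(0, n, tile):
                for i in range(i0, min(i0 + tile, n)):
                    row = C[i]
                    for k in range(k0, min(k0 + tile, n)):
                        a = A[i][k]
                        Brow = B[k]
                        for j in range(j0, min(j0 + tile, n)):
                            row[j] += a * Brow[j]
    return C
```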
* deps: bump TypeScript 5→6, @types/node 20→25, ipykernel 6→7
Resolves 9 stale Dependabot PRs (#1494, #1497, #1490, #1491, #1479,
#1477, #1475, #1467, #1454) whose package-lock conflicts blocked
auto-rebase.
VSCode extensions: bump TypeScript devDep to ^6.0.3 and @types/node to
^25.6.0 across book-ext, mlsysim-ext, labs-ext, kits-ext, tinytorch-ext.
Regenerate lockfiles.
TypeScript 6.0 no longer auto-includes ambient @types packages; add
explicit "types": ["node", "vscode"] to each tsconfig so
child_process / fs / Buffer / vscode API types continue to resolve.
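Illustrative tsconfig fragment (merged into each extension's existing
config; only the `types` entry is the change described here):

```json
{
  "compilerOptions": {
    "types": ["node", "vscode"]
  }
}
```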
All 5 extensions now compile clean with TS 6.0.3 + @types/node 25.6.0.
TinyTorch: bump ipykernel floor to >=7.2.0 in pyproject.toml,
requirements.txt, and binder/requirements.txt.
- Add 'Hardware Reality' systems callouts (Compute/Memory) to all 20 modules
- Enhance 'Seminal Papers' sections with systems implications
- Polish narrative flow to bridge algorithmic concepts and hardware constraints
- Standardize Quarto callout blocks for systems insights
Parallel agent pass (one per chapter, 30 chapters) followed by
orchestrator-level cross-chapter polish, framed against the
"iconic lab book" bar codified in tools/CHAPTER_POLISH_BRIEF.md.
Per-chapter agent edits (5–15 high-leverage changes each):
- Sharper openers — first paragraph earns the chapter in two
sentences; killed "in this module we will..." throat-clearing.
- Stronger "What's Next?" bridges — every chapter now ends on
a concrete question the next chapter answers, not a passive
feature list.
- Tightened verbose prose; cut hedging adverbs ("essentially",
"fundamentally", "remarkably") and hype words ("powerful",
"elegant", "revolutionary", "comprehensive").
- House-rule enforcement: blank lines before bullet lists,
callouts titled, ASCII-art fences given `text` language tag.
Project-wide bug fix surfaced by the pass:
pdf/_quarto.yml has `execute: enabled: false` set globally,
which means every `{python} foo` inline shortcode and every
`{python}` setup chunk was rendering as raw text in BOTH
HTML and PDF. Across the 20 modules + 6 milestones + 3 front/
back-matter chapters this added up to ~300 broken inline
shortcodes and ~50 dead setup blocks. The agents inlined the
pre-computed values directly into prose/tables/answer blocks
and removed the dead chunks.
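Sketch of the failure mode (values illustrative): with execution
disabled project-wide, inline shortcodes never evaluate.

```yaml
# pdf/_quarto.yml (as found)
execute:
  enabled: false   # nothing runs, so an inline `{python} n_params`
                   # shortcode ships as literal text, not a number
```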
Cross-chapter consistency fixes (orchestrator pass):
- Removed the `## ⚡ PyTorch` panel-tabset emoji that the
per-chapter agents missed (07_optimizers); XeLaTeX renders this
as a phantom-glyph trap when the tab title promotes to an
H2 in PDF.
- Standardized milestone H2 `## YOUR Code Powers This` →
`## Your Code Powers This` across the 6 milestone chapters
(was inconsistent — only 04_cnn had been normalized).
- Removed leftover scaffolding line `(see ../assets/images/...)`
in modules/13_transformers.qmd line 332.
- Deleted three remaining orphan `{python}` blocks the agents
conservatively left in modules/15_quantization.qmd (×2) and
modules/18_memoization.qmd (×1) since their consumers were
already inlined.
- modules/04_losses.qmd: `\medskip` between Case 1 and Case 2
in the multi-label callout was too subtle in tcolorbox; now
`\vspace{1em}` so the cases visibly separate in PDF.
Net effect: PDF dropped from 384 pages → 336 pages (-12.5%)
and 2.0 MB → 1.8 MB. Zero `{python}` text leakage in the
rendered PDF (verified via pdftotext grep).
Verified visually:
- Foundation Tier Part page (flameorange + torchnavy branding)
- Module 03 Layers diagram (redrawn) + sharper figure caption
- Module 18 Memoization opener (5050→100 collapse hook)
- Conclusion close ("Don't import torch. You built it." as
uncontested final sentence)
- Case 1 / Case 2 separation in 04_losses (\vspace{1em} fix)
Reports collected from all 30 agents flagged a project-wide
follow-up worth a separate pass: every module repeats ~300
lines of inline `<style>`/`<script>`/action-cards HTML at the
top, which is HTML-only by construction (correctly inside a
`{=html}` raw block) but should probably be hoisted into a
shared partial/include.
Brings the TinyTorch lab guide's Quarto project in line with
book/quarto/, the only other in-tree Quarto publication that builds
both web and PDF outputs from a single source. The previous name had
three redundancies:
- already under tinytorch/, so "site-" prefix wasn't disambiguating
- also produces the PDF lab guide, so "site-" was misleading
- the top-level site/ dir made "site-quarto" read as "the site's
quarto config" rather than "the tinytorch site, in quarto"
After this rename the convention is straightforward:
book/quarto/ -> the textbook (web + PDF)
tinytorch/quarto/ -> the TinyTorch lab guide (web + PDF)
mlsysim/docs/ -> mlsysim API reference (kept as docs/, since it
really is API reference, not a publication)
Touches 7 GitHub workflows, both .gitignore files, the rename target's
own self-references (Makefile, _quarto.yml configs, STYLE.md,
measure-pdf-images.py), and 6 copies of subscribe-modal.js plus a few
shared scripts/configs whose comments documented the old path.
Verified: rebuilt pdf/TinyTorch-Guide.pdf (2.1M) cleanly from the new
location with 'make pdf' from tinytorch/quarto/.