Commit Graph

10,415 Commits

Author SHA1 Message Date
Vijay Janapa Reddi
56e091f7e0 feat: standardize System Archetypes in Vol 1 and Vol 2; add canonical roster table to Introductions; ensure tight math-prose integration 2026-02-24 20:23:50 -05:00
Vijay Janapa Reddi
ad843e21f7 style: Vol2 register pass follow-up #2 — fix two more violations in sustainable_ai
Flagged by the sustainable_ai editor agent as newly discovered during fixing:

- line 635: "If your cluster consumes...how much...actually went...how much was wasted?"
  → impersonal declarative; removes "your", two embedded rhetorical questions, two "actually"
- line 2261: "You want to fine-tune a small language model" in .callout-notebook
  → "Consider fine-tuning a small language model" (impersonal)
2026-02-24 19:52:53 -05:00
Vijay Janapa Reddi
d67ad7005c style: Vol2 register pass follow-up — fix missed violations in distributed_training and sustainable_ai
Post-commit verification found 6 additional violations not caught by the
initial audit agents:

distributed_training (4 fixes):
- line 108: second person "If you could purchase a single GPU" → impersonal
- line 280: rhetorical Q "How exactly do 1,024 GPUs...agree" → declarative
- line 784: second person "Your AllReduce...Where do you look?" in
  .callout-perspective → impersonal problem statement
- line 1347: rhetorical Q "where did the missing 25%...go?" → declarative

sustainable_ai (2 fixes):
- line 2047: embedded rhetorical Q "where does the dominant share of energy go?" → declarative
- line 2414: closing rhetorical Q "what happens to these clusters...?" → declarative noun phrase
2026-02-24 19:50:36 -05:00
Vijay Janapa Reddi
1ddf9bd5e3 style: Vol2 register pass — eliminate rhetorical questions, second person, vague intensifiers
Systematic register audit and fix across all 13 non-clean Vol2 chapters.
Clean chapters (compute_infrastructure, network_fabrics, inference, responsible_ai,
frontmatter, backmatter) required no edits.

Violations fixed by chapter:
- introduction: 14 fixes (rhetorical Qs, second person, vague intensifiers)
- collective_communication: 27 fixes (rhetorical Qs, contractions, second person, intensifiers)
- distributed_training: 7 fixes (all rhetorical questions → declarative statements)
- ops_scale: 6 fixes (intensifiers, second person, rhetorical Q, announcement transition)
- performance_engineering: 3 fixes (rhetorical Q, second person, announcement transition)
- robust_ai: 4 fixes (hedging, second person in callout-notebook)
- sustainable_ai: 4 fixes (rhetorical Q, second person, bold starter in callout)
- fleet_orchestration: 4 fixes (rhetorical questions)
- security_privacy: 4 fixes (banned phrase, second person, rhetorical Q)
- edge_intelligence: 4 fixes (rhetorical Q, vague intensifiers)
- fault_tolerance: 1 fix (second person in callout-notebook)
- data_storage: 1 fix (sentence-starting "But,")
- conclusion: 2 fixes (first-person "We have climbed", "To conclude" opener)

Pre-commit rendering/inline-refs failures are pre-existing on this branch
(77 files, 116 rendering issues, 179 inline-ref errors in unrelated files).
None of the 13 edited files have rendering violations.
2026-02-24 19:46:16 -05:00
Vijay Janapa Reddi
e881d92625 refactor: introduce System Archetypes in mlsys/systems.py and integrate into Introduction and Serving chapters; verify math integrity and rationale for LEGO blocks 2026-02-24 19:12:51 -05:00
Vijay Janapa Reddi
a0ce7cc746 style: Vol1 register pass — academic formality across 16 chapters
Systematic prose register audit and fix pass across all substantive
Vol1 chapters, enforcing book-prose.md Section 1 "Tone Register &
Academic Formality" rules:

- Rhetorical questions in body prose → declarative statements
- Sentence-starting coordinating conjunctions (But/And/So) → restructured
- Banned AI-pattern phrases ("leverage" → "use", "state-of-the-art" →
  "top benchmark", "powerful" → precise alternatives, "groundbreaking"
  removed, "dramatic" → quantified)
- Contractions in body prose → expanded forms
- Second person "you/your" → impersonal/third-person voice
- Vague intensifiers ("just", "simply", "actually", "perhaps", "clearly",
  "very") → removed or replaced with precise language
- Bold paragraph starters in body prose → plain text

Protected content left unchanged: Purpose hook questions, .callout-
checkpoint content, code blocks, Python cells, TikZ/LaTeX math,
Fallacy/Pitfall structural labels, direct quotations.

Chapters modified (15 files, ~350 targeted edits):
introduction, ml_systems, nn_computation, ml_workflow, frameworks,
nn_architectures, training, data_selection, hw_acceleration,
model_compression, model_serving, benchmarking, ml_ops,
responsible_engr, conclusion

(data_engineering fixed in prior session)
2026-02-24 17:46:36 -05:00
Vijay Janapa Reddi
77f2735cb8 style: refine figure references across Vol1 and Vol2 to match academic tone (avoid 'Look at', 'below') 2026-02-24 15:13:01 -05:00
Vijay Janapa Reddi
af865684e0 vol2: finalize visual narrative (added power path, 3D parallelism, continuous batching, and carbon sankey diagrams) 2026-02-24 14:30:44 -05:00
Vijay Janapa Reddi
08e2f0f37a vol2: finalize notation, formatting, and standalone alignment 2026-02-24 13:55:03 -05:00
Vijay Janapa Reddi
085dce9aa2 style: Vol1 italic emphasis pass — Pattern 1 signposts + Pattern 7 punchlines 2026-02-24 13:51:10 -05:00
Vijay Janapa Reddi
b0eb8646f5 feat(vscode-ext): colored icons in Chapter Navigator for figures, tables, equations, callouts, listings
- Use ThemeColor (charts.*) for entry icons: blue=figure, purple=table, green=listing, orange=equation, yellow=callout
- Section items use descriptionForeground for subtle hierarchy
- Add mlsysbook.navigatorVisibleEntryKinds config to toggle which content types appear in navigator
2026-02-24 11:00:32 -05:00
Vijay Janapa Reddi
485dce379a style: Vol1 full SQS Phase 3 pass — prose quality, LEGO headers, locality
- Remove AI-pattern phrases: leverage/leverages (40+), utilize (10+), powerful (30+)
- Eliminate recap-style openers: 'With X established...' pattern (25+ instances)
- Fix sentence-initial coordinating conjunctions: But/And/So in body prose
- Replace vague intensifiers: very/significantly/somewhat → quantitative language
- Standardize LEGO headers: old P.I.C.O. naming → LOAD/EXECUTE/GUARD/OUTPUT
  (introduction, ml_workflow, benchmarking, responsible_engr)
- Fix unit spacing: 80ms→80 ms, 40GB→40 GB, 3GB→3 GB (training, hw_acceleration)
- Correct hyphen-as-en-dash in numeric ranges: 1-2%→1--2%, 50-200 ms→50--200 ms
- Convert bold paragraph starters in body prose to flowing paragraphs
  (ml_ops: data consistency/freshness/quality; training: Flash Attention conditions)
- Rewrite abstract section openers with concrete scenarios
- Fix contractions in body prose (doesn't/isn't/wasn't → expanded forms)
- Add end_chapter bookend to ml_workflow (was missing)
- Add end_chapter to mlsys/registry.py and export from __init__.py
- Standardize LEGO cell GUARD sections where missing (noted for author pass)
2026-02-24 10:57:17 -05:00
Vijay Janapa Reddi
9f0f7d2cf7 config: restore vol1 PDF build config (training active, ml_ops commented) 2026-02-24 09:27:32 -05:00
Vijay Janapa Reddi
352f95afe3 footnotes: ADD pass — 15 new footnotes across 9 Vol1 chapters
model_serving:
- fn-queuing-divergence: M/M/1 (1-ρ)^-1 math — 70% rule is mathematical, not heuristic
- fn-jevons-paradox: 1865 coal efficiency → inference demand paradox
- fn-speculative-decoding: k·α throughput math, parallel verification mechanism

training:
- fn-backprop-provenance: Linnainmaa 1970 / Werbos 1974 — 12-year adoption lag
- fn-saddle-points: overparameterized landscape geometry — saddle points > local minima
- fn-ridge-point-precision: precision shift moves the ridge point (FP32→BF16 doubles it)

hw_acceleration:
- fn-tensor-core-alignment: 8/16 multiple requirement, 8-16x fallback penalty

frameworks:
- fn-bf16-design: Google Brain 2018 origin, loss-scaling elimination via exponent match

model_compression:
- fn-sparsity-vectorization: SIMD lane waste mechanism, 90% threshold explained

nn_architectures:
- fn-kv-cache-depth: 14 GB weights + 1.07 GB/user math, memory-not-quality constraint

nn_computation:
- fn-batch-norm-cost: sync barrier, small-batch sensitivity, LayerNorm substitution
- fn-algorithm-hardware-lag: Werbos 1974→1986 lag; Bahdanau 2014→Transformer 2017

introduction:
- fn-ai-winters-systems: Lighthill Report + Lisp Machine collapse as systems failures

data_selection:
- fn-labeling-economics: $1k-$3k vs $75k-$150k clinical labeling cost arithmetic
- fn-chinchilla-ratio: D/N diagnostic (GPT-3 at 1.7, LLaMA-2 70B at 28, optimal ~20)
2026-02-24 09:14:52 -05:00
Vijay Janapa Reddi
d4bb450392 footnotes: enrich pass across 6 Vol1 chapters
hw_acceleration (6 actions):
- consolidate fn-hennessy-patterson-dsa into fn-dsa-efficiency (density fix), remove standalone
- fn-moores-law-scaling: cut baseline CS definition, start at ML compute-gap consequence
- fn-riscv-ai-customization: replace restatement with ISA licensing mechanism + ecosystem trade-off
- fn-dally-gpu-precision: remove unverifiable attribution, add Volta 6x throughput anchor
- fn-sparsity-nm-regularity: explain WHY 2:4 chosen (accuracy-performance knee, 2-bit metadata)

nn_architectures (2):
- fn-convolution-etymology: make self-standing, fix duplicate index tag → data reuse
- fn-dnn-tpu: anchor with TPU v1 vs K80 (2017): 25-30x inference throughput-per-watt

nn_computation (7):
- fn-gradient-instabilities: add 0.25^20 ≈ 10^-12 quantification, ReLU/residual mechanisms
- fn-tensor-operations: add NCHW/NHWC mismatch example (150 KB, 20-30% latency at 1k req/s)
- fn-bias-variance: drop etymology, add Double Descent compute-scaling consequence
- fn-parameter-memory-cost: pivot from Adam (covered elsewhere) to normalization layer insight
- fn-loss-function: remove MSE duplication, add loss-landscape geometry consequence
- fn-sigmoid-etymology: anchor "costly" with 50x silicon cost (chapter's own computed ratio)
- fn-edge-tpu-efficiency: fix tier (mobile not server), add Jetson Orin, 10x TDP gap

model_serving (1):
- fn-kendall-notation-serving: add M/G/c conservative-bias-as-feature insight (10-30% margin)

data_engineering (4):
- fn-data-cascades-silent: add 4-week median discovery time + structural silence mechanism
- fn-etl-quality-first: sharpen to ML-specific distributional validation vs. schema validation
- fn-feature-store-consistency: reframe around training-serving skew (5-15% accuracy loss)
- fn-data-lineage-forensics: add regulatory/debugging failure modes (GDPR, FCRA, graph traversal)

responsible_engr (6):
- fn-proxy-variable-bias: trim redundancy, add "fairness laundering" failure mode
- fn-model-card-transparency: rewrite around scope-creep failure (40-60% exceed documented scope)
- fn-adversarial-debiasing-cost: add distribution-shift stability insight (invariant representations)
- fn-tco-inference-dominance: replace Gartner etymology with 3 ML externality cost categories
- fn-ml-roi-heuristic: replace "industry experience" with mechanism (10-15x over 3 years)
- fn-audit-trail-scale: add append-only architecture pattern (Iceberg, Delta Lake, hash chains)
2026-02-24 09:01:21 -05:00
Vijay Janapa Reddi
24cd07b347 footnotes: trim fn-mlperf (benchmarking) — remove three-suite recap that duplicated body prose 2026-02-24 08:52:03 -05:00
Vijay Janapa Reddi
a4243a6c9a footnotes: enrich pass for ml_ops (8) and model_compression (8)
ml_ops rewrites:
- fn-telemetry-mlops: drop etymology, add distribution-shift detection consequence
- fn-model-registry-ops: reframe to failure mode prevented (shadow deployment, 30-90min rollback)
- fn-entropy-model-decay: add empirical λ ranges by domain + infrastructure cadence consequence
- fn-staging-validation-ops: sharpen ML-vs-conventional distinction (probabilistic vs. deterministic)
- fn-shadow-deploy-ml: replace body-restatement with cost/benefit threshold + asymmetric risk framing
- fn-drift-types-ops: redirect to detection lag asymmetry (feature drift vs. concept drift)
- fn-drift-covariate-shift: drop etymology, focus on Shimodaira support-assumption failure mode
- fn-ray-distributed-ml: sharpen tether to training-serving skew via silent format-translation bugs

model_compression rewrites:
- fn-pruning-lecun-1989: anchor on memory efficiency first, Hessian as mechanism
- fn-heuristic-pruning: quantify the trap (90%+ from early layers, bottlenecks preserved)
- fn-kl-divergence-distillation: add asymmetric KL consequence (calibration transfer)
- fn-nas-hardware-aware: add FLOPs-vs-latency divergence (3-5x for same FLOP count)
- fn-nas-reinforcement-learning: explain inner-loop cost mechanism (12,800-22,400 candidates)
- fn-nas-evolutionary: add mechanism bridge + weight-sharing necessity consequence
- fn-quantization-shannon: quantify tolerance (INT8 <1%, INT4 1-3%, per-model validation)
- fn-ste-gradient-trick: explain zero-gradient mechanism, STE identity substitution error
2026-02-24 08:51:27 -05:00
Vijay Janapa Reddi
6af7b2c4a4 footnotes: remove 4 redundant footnotes from ml_ops.qmd
All 4 failed the instrument check (duplicate body text or preempt subsections):
- fn-youtube-feedback-loop: restated the surrounding case study body verbatim
- fn-blue-green-deploy-ml: only added colour labels to "instant rollback" claim already in prose
- fn-ab-testing-ml: preempted the dedicated A/B testing subsection ~200 lines later
- fn-alerting-ml-thresholds: duplicated body text on adaptive thresholds (ML Tether = 1)
2026-02-24 08:46:10 -05:00
Vijay Janapa Reddi
43b8f35f85 footnotes: Group C citation integrity pass (Vol1)
- model_compression/fn-int8-energy-deployment: add [@horowitz2014computing] for 200× DRAM/MAC energy claim
- ml_ops/fn-ray-distributed-ml: replace unverifiable "10x" with mechanism-based framing (serialization overhead removal)
- ml_ops/fn-youtube-feedback-loop: replace unverifiable "2 years" with qualitative multi-year framing
- hw_acceleration/fn-hbm-bandwidth-cost: replace unverifiable "50% of BOM" with qualitative "dominant cost component"
2026-02-24 08:43:34 -05:00
Vijay Janapa Reddi
704f7555fe footnotes: Vol1 targeted enrich/add/remove pass from quality audit
Restoration pass (selective, based on Three-Job Rule audit):
- introduction: restore fn-eliza-brittleness, fn-dartmouth-systems, fn-bobrow-student
- data_engineering: restore fn-soc-always-on (always-on island architecture)
- benchmarking: restore fn-glue-saturation (Goodhart's Law arc, 1-year saturation)

Group A surgical edits:
- nn_computation: remove fn-overfitting (Context=1, Tether=1 — only confirmed failure)
- training: strip dead etymology from fn-convergence-training, fn-hyperparameter-training
- model_serving: enrich fn-onnx-runtime-serving with 5–15% TensorRT throughput figure

Group B new footnotes:
- nn_computation: add fn-alexnet-gpu-split (GTX 580 3 GB ceiling → model parallelism lineage)
- responsible_engr: add fn-zillow-dam (D·A·M decomposition of $304M failure)
2026-02-24 08:40:01 -05:00
Vijay Janapa Reddi
446d848fa8 vol2: finalize notation and formatting (standardized symbols, decimal units, and en-dash ranges) 2026-02-24 07:51:08 -05:00
Vijay Janapa Reddi
9f931da1ea vol2: comprehensive standalone transformation (Appendix A foundations review, progressive disclosure pass, and term decoupling) 2026-02-23 18:07:29 -05:00
Vijay Janapa Reddi
5b0b2235f3 vol2: comprehensive footnote pass (25+ new high-signal footnotes across 16 chapters) 2026-02-23 17:58:08 -05:00
Vijay Janapa Reddi
f6f98266a0 vol2: comprehensive transformation pass (P.I.C.O. refactor, archetypes, hardware trajectories) 2026-02-23 17:38:37 -05:00
Vijay Janapa Reddi
2d887c5778 vol1: content updates (intro, data_selection, nn_architectures, model_compression, appendix_machine, responsible_engr, foundations_principles) 2026-02-23 17:22:47 -05:00
Vijay Janapa Reddi
913c616177 vol1: add IDs to all principle callouts, use \ref in conclusion table and prose 2026-02-23 16:39:33 -05:00
Vijay Janapa Reddi
951669d356 fix: inline math × — dimensions as $N\times M$, multipliers as N$\times$
- Fix rendering: dimensions (e.g. 224×224) use single math span $N\times M$
- Revert multipliers to N$\times$ / N--M$\times$ per LaTeX convention
- Fix malformed $N\times$ M → $N\times M$ across vol1/vol2
- Add revert_times_multipliers.py (one-off) and fix_times_math.py (dimension-only)
- Update book-prose guidelines in .claude/rules (dimension vs multiplier)
2026-02-23 14:51:24 -05:00
Vijay Janapa Reddi
97e21d21ce Adds constants for book consistency
Introduces a set of shared constants so the book's code and prose stay consistent.

These constants include:
- Memory capacities, interconnect bandwidths, model sizes
- Unit measures (GiB, GB, second, etc.)
- Formatting tools
- Deployment tiers (cloud, edge, mobile)
2026-02-23 13:08:56 -05:00
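The commit does not reproduce the module itself; a minimal sketch of what such a constants module might look like follows. All names and values here are illustrative assumptions, not the actual contents of `mlsys.constants`:

```python
# Hypothetical sketch of a shared-constants module in the spirit of the
# commit above; every name and value is illustrative, not the real module.
from enum import Enum

# Unit measures: decimal vs. binary byte units, and time
KB, MB, GB = 10**3, 10**6, 10**9          # decimal units
KiB, MiB, GiB = 2**10, 2**20, 2**30       # binary units
SECOND, MS = 1.0, 1e-3

# Example hardware/model figures (illustrative placeholders)
A100_HBM_BYTES = 80 * GB                  # memory capacity
NVLINK_BW_BYTES_PER_S = 600 * GB          # interconnect bandwidth
GPT3_PARAMS = 175e9                       # model size

class Tier(Enum):
    """Deployment tiers referenced throughout the book."""
    CLOUD = "cloud"
    EDGE = "edge"
    MOBILE = "mobile"

def fmt(value: float, unit: str = "", sig: int = 3) -> str:
    """Format a number consistently for prose (illustrative helper)."""
    return f"{value:.{sig}g}{' ' + unit if unit else ''}"

print(fmt(A100_HBM_BYTES / GB, "GB"))
```

With prose and code both drawing on one module, a single edit updates every quoted figure.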
Vijay Janapa Reddi
689b040fde Pull in PR #1199: Updated figure in chapter 11 (hw_acceleration war story) 2026-02-23 09:40:05 -05:00
Vijay Janapa Reddi
5247736ce3 Centralizes constants and formatting for consistency
Refactors various sections to use centralized constants and formatting functions, improving code maintainability and consistency across the book.

Specifically:

- Replaces hardcoded values with constants defined in `mlsys.constants`.
- Uses the `fmt` function for consistent number formatting.
- Removes redundant calculations and string conversions by using existing functions and constants.
- Introduces a `TransformerScaling` namespace to encapsulate transformer scaling logic.
- Adds invariants (guardrails) to ensure calculations match the book's narrative.
- Refactors MNIST example and moves the inference calculation to MNISTInference.
- Integrates responsible AI principles with lifecycle stages.

This reduces code duplication and ensures a unified representation of key parameters and calculations throughout the book.
2026-02-23 09:33:35 -05:00
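The "invariants (guardrails)" idea above can be sketched as a computed value asserted against the figure the prose quotes, so a constant change that breaks the narrative fails the build. This is a hypothetical sketch: the function name, the 6ND rule stand-in for the book's `TransformerScaling` logic, and the numbers are all assumptions, not the actual implementation:

```python
# Illustrative guardrail pattern: recompute a headline number and
# assert it still matches the figure cited in the book's prose.
# All names and numbers here are hypothetical.

def transformer_flops(params: float, tokens: float) -> float:
    """Approximate training FLOPs via the common 6*N*D rule."""
    return 6.0 * params * tokens

# Figure quoted in the (hypothetical) narrative: GPT-3-scale training.
NARRATIVE_FLOPS = 3.15e23

computed = transformer_flops(175e9, 300e9)

# Guardrail: fail loudly if the calculation drifts from the prose.
assert abs(computed - NARRATIVE_FLOPS) / NARRATIVE_FLOPS < 0.01, \
    "calculation drifted from the book's narrative"
```

Run as part of the build, such assertions turn prose-code mismatches into hard errors rather than silent inconsistencies.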
Vijay Janapa Reddi
a392f073af fix(vol2): apply book-prose rules — section opener, contractions, leverage/utilize 2026-02-22 19:03:42 -05:00
Vijay Janapa Reddi
5005c14cd6 style(prose): eliminate 'the fact that' where possible (book-prose)
- Replace with 'that' or rephrase clause as subject; fix one remaining But→However in nn_architectures
2026-02-22 19:00:37 -05:00
Vijay Janapa Reddi
f661814f4c style(prose): fix sentence-initial But/And/Or (book-prose)
- Replace But with However or rephrase; And with Furthermore/restructure; Or with Alternatively where appropriate
2026-02-22 18:59:48 -05:00
Vijay Janapa Reddi
c9959e4689 style(prose): expand contractions in body prose (book-prose)
- can't→cannot, don't→do not, it's→it is, won't→will not, etc.
- Skip code/check()/comments; fix narrative and callouts only
2026-02-22 18:58:39 -05:00
Vijay Janapa Reddi
f4a006ce71 style(prose): apply book-prose rules across vol1 and vol2
- Replace recap-style openers (Having established… we now turn to)
- Replace section meta-openers (This section examines/presents…) with concrete openings
- Remove announcement transitions (We will examine, we now turn to)
- Remove Importantly/Most importantly at sentence start
- Remove In summary, bleeding edge, the lesson is clear
- Replace leverage/utilize (verb) with use; keep high-leverage
- Replace building upon with building on; remove as noted there
- Sample fixes: can't→can we not, it's→it is, So,→Thus, (contractions/sentence-openers)
2026-02-22 18:57:12 -05:00
Vijay Janapa Reddi
ca33f2f758 chore: checkpoint staged state before prose-style audit fixes 2026-02-22 18:47:52 -05:00
Vijay Janapa Reddi
3bde64caf0 refactor: gold-standard footnote overhaul across all 30 Vol1+Vol2 chapters
- Rewrote ~1,026 footnotes to MIT Press gold standard (ML Systems Tether,
  Three-Question Audit, Five Types A-E)
- Fixed 16 cross-chapter duplicate fn- keys with chapter-specific suffixes
- Pruned footnotes that failed Three-Question Audit (prerequisites,
  navigation aids, tool catalog entries)
- Restored fn-goodharts-law with Strathern attribution and ML failure modes
- Added 9 gold-standard footnotes to performance_engineering.qmd (previously zero)
2026-02-22 18:31:50 -05:00
Vijay Janapa Reddi
4ac473278e Adds citations and clarifies energy/scaling laws
Adds missing citations and clarifies the text in the appendix on machine learning,
specifically around the energy hierarchy and scaling laws. It also updates the
fault tolerance section to include a reference to the backpropagation paper.
2026-02-22 16:11:26 -05:00
Vijay Janapa Reddi
15a7ef57fa fix: prose edits — index placement, definition titles, dimension spacing
- introduction: consolidate index tags, fix 224×224 spacing, definition callout titles
- data_selection, fault_tolerance, vol2 intro, responsible_ai, robust_ai: misc prose fixes
2026-02-22 14:01:43 -05:00
Vijay Janapa Reddi
4d53b7af4d Merge remote-tracking branch 'origin/feature/book-volumes' into feature/book-volumes 2026-02-22 14:00:56 -05:00
Vijay Janapa Reddi
aff8a0fc0d fix: × consistency — compound OK for a×b, LaTeX in prose/tables, Unicode only in fig-alt
- book-prose: allow compound × for simple products; require × alone only when
  followed by word/unit; Unicode × only in fig-alt
- Revert split × back to compound (e.g. $3 \times 10^{-4}$)
- data_engineering: 8× A100 → 8$\times$ A100 (LaTeX in table)
- appendix_dam: Python outputs use LaTeX ×
- hw_acceleration: table dimensions use compound math ($4\times4\times4$)
- benchmarking: fix Python equation string
2026-02-22 14:00:34 -05:00
Vijay Janapa Reddi
bee7db3a22 Merge pull request #1197 from harvard-edge/fix/ch10
Updated figures in chapter 10: model_compression
2026-02-22 13:09:20 -05:00
Vijay Janapa Reddi
95956dee3c Scales cover images to 100% width
Ensures cover images in Vol. 2 chapters fill the available width, improving visual presentation across different screen sizes.

Removes duplicate cover image from the introduction chapter.

Corrects a typographical error in Appendix Machine regarding energy ratios.
2026-02-22 12:48:37 -05:00
Vijay Janapa Reddi
77d0081e38 Refactors build process and validation logic
Refactors the build process to use shared output-file resolution logic, ensuring consistency across build and debug commands.

Improves validation by streamlining bibliography handling and adding stricter citation matching.

Updates diagram dependencies and adjusts content for clarity and accuracy.
2026-02-22 12:06:46 -05:00
Zeljko Hrcek
4884d40b76 Updated figures in chapter 10: model_compression 2026-02-22 18:00:03 +01:00
Vijay Janapa Reddi
e9171b1379 fix: remove duplicate figure captions in vol2
Figures should have caption only in fig-cap attribute, not duplicated
as trailing text. Removed redundant captions from:
- introduction.qmd: fig-loss-vs-n-d, fig-data-scaling-regimes, fig-scaling-regimes
- sustainable_ai.qmd: fig-datacenter-energy-usage
2026-02-22 11:18:32 -05:00
Vijay Janapa Reddi
1a22405288 fix(vol2): correct fenced div closers (:::: → :::) in security_privacy 2026-02-22 10:31:21 -05:00
Vijay Janapa Reddi
e1a667e06f refactor(vol2): convert bold pseudo-headers in collective_communication and robust_ai
- collective_communication: Torus Topology (TPU Pods), Rail-Optimized Routing (NVIDIA DGX) → ####
- robust_ai: Conceptual Foundation, Fast Gradient Sign Method (FGSM) → ######
2026-02-22 10:09:35 -05:00
Vijay Janapa Reddi
1d087503a0 refactor(vol2): convert bold pseudo-headers to proper headers per book-prose rules
- inference: Pattern 1/2/3, Example 8-way tensor parallelism
- sustainable_ai: Hardware/Mobile/Edge measurement, Cascade/Wake-word/Federated
  patterns, TinyML stack, MLPerf benchmarks, Energy Delay Product
- edge_intelligence: Peak Memory Usage, Convergence/Non-IID/Heterogeneity,
  Communication-Computation Trade-off, When Does FL Work?
- ops_scale, fault_tolerance, security_privacy: prior bold-to-header conversions
2026-02-22 10:05:33 -05:00
Vijay Janapa Reddi
266079c816 Enables appendices in PDF output
Uncomments the appendices section in the PDF configuration file so that
appendices are included in the PDF output.
2026-02-22 09:29:45 -05:00