Purpose sections are abstract by design—they teach principles,
not specific hardware. Replace GPU/TPU references with
"accelerators" in the three Vol 1 purpose sections that
named specific hardware (serving, hw_acceleration, dl_primer).
- Vol1: Harvard Crimson (#A51C30)
- Vol2: ETH Zurich Blue (#1F407A)
Architecture:
- themes/_theme-harvard.scss, _theme-eth.scss: Color variables
- _base-styles.scss, _dark-mode-base.scss: Shared styles using $accent
- style-vol1/2.scss, dark-mode-vol1/2.scss: Entry points per volume
Each volume now has its own distinct visual identity while sharing
the same underlying style rules.
AutoML content is better suited for Volume II's optimization chapter
(distributed-scale model search). Moved from vol1/optimizations/ to
vol2/optimization/ to keep it accessible for future integration.
Two issues:
1. LaTeX parser regex only matched numeric figure numbers (e.g., 1.1)
but appendices use letter prefixes (B.1, C.2, D.1). Changed \d+ to
[A-Z\d]+ so all 214 figures are captured.
2. --scan-all mode picked up _shelved QMD files that aren't in the
actual build, causing a count mismatch. Added _shelved to skip list.
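The regex change from issue 1 can be sketched as follows (pattern wrappers and variable names are illustrative, not the parser's actual code):

```python
import re

# Old parser pattern: numeric chapter prefixes only (e.g. "1.1").
old_pattern = re.compile(r"Figure (\d+\.\d+)")
# Fixed pattern: also accept the letter prefixes appendices use (e.g. "B.1").
new_pattern = re.compile(r"Figure ([A-Z\d]+\.\d+)")

captions = ["Figure 1.1: training loop", "Figure B.1: appendix diagram"]
old_hits = [c for c in captions if old_pattern.search(c)]
new_hits = [c for c in captions if new_pattern.search(c)]
```

The appendix caption is silently dropped by the old pattern, which is exactly the kind of miss that produced the count mismatch.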
AndreaMattiaGaravagn (truncated) and AndreaMattiaGaravagno were listed
as separate contributors. Merged into a single entry with the correct
username, avatar, and combined contributions (code + doc).
Students hit "No module named 'tinytorch.core.tensor'" in notebooks because
the Jupyter kernel used a different Python than where tinytorch was installed.
- setup: install ipykernel + nbdev, register named kernel during tito setup
- health: add Notebook Readiness checks (import, kernel, Python match)
- export: verify exported file exists and has content (fail loudly)
- Windows: add get_venv_bin_dir() helper for cross-platform venv paths
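A minimal sketch of the cross-platform helper named above (the real implementation in tito may differ in signature and return type):

```python
import os
from pathlib import Path

def get_venv_bin_dir(venv_root):
    """Return the executables directory of a virtualenv.

    Windows venvs place executables under Scripts/; POSIX venvs use bin/.
    This is why hard-coded "bin" paths break kernel registration on Windows.
    """
    sub = "Scripts" if os.name == "nt" else "bin"
    return Path(venv_root) / sub
```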
On PR events, github.ref_name resolves to the merge ref (e.g.
"1159/merge") which doesn't exist on raw.githubusercontent.com,
causing a 404. Use github.head_ref (the actual source branch)
for PRs, falling back to ref_name for push events.
Also adds -f flag to curl so HTTP errors fail immediately with
a clear message instead of silently saving the 404 HTML page.
Escape unescaped & characters in references.bib (Taylor & Francis,
AI & Machine-Learning) and replace Unicode em-dashes (U+2014) with
LaTeX --- ligatures in paper.tex for T1 font compatibility.
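Both fixes are plain text substitutions; a sketch (function names are illustrative):

```python
import re

def escape_ampersands(bib_text):
    # Escape & only when it is not already escaped
    # (negative lookbehind for a preceding backslash).
    return re.sub(r"(?<!\\)&", r"\\&", bib_text)

def replace_em_dashes(tex_text):
    # U+2014 has no glyph under the T1 font encoding; the --- ligature
    # produces an em-dash and is ASCII-safe.
    return tex_text.replace("\u2014", "---")
```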
LLM now only parses username + contribution types (strict 2-field JSON).
Project detection is fully deterministic from file paths:
tinytorch/ → tinytorch, book/ → book, kits/ → kits, labs/ → labs
If project cannot be determined, the bot asks the user instead of
silently defaulting to book. Also removes @AndreaMattiaGaravagno
from book/.all-contributorsrc (was incorrectly added there by the
old workflow — their PR only touched tinytorch/ files).
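The deterministic path mapping can be sketched like this (the ask-the-user behavior for ambiguous or unmatched paths is modeled here as returning None; the bot's actual code may structure this differently):

```python
def detect_project(changed_paths):
    """Map a PR's changed file paths to a project, with no LLM involved."""
    prefixes = {"tinytorch/": "tinytorch", "book/": "book",
                "kits/": "kits", "labs/": "labs"}
    projects = set()
    for path in changed_paths:
        for prefix, project in prefixes.items():
            if path.startswith(prefix):
                projects.add(project)
    # Exactly one match is conclusive; otherwise the bot asks the user
    # rather than silently defaulting to "book".
    return projects.pop() if len(projects) == 1 else None
```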
The fresh install test script used / as the sed delimiter when
substituting the branch name, which breaks on any branch containing /
(e.g. feature/foo, fix/bar, or GitHub merge refs like 1156/merge).
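One delimiter-proof alternative is to do the substitution in Python rather than sed, since plain string replacement has no delimiter to collide with (placeholder name below is illustrative):

```python
def substitute_branch(template, branch):
    """Replace a branch placeholder safely.

    A sed call like  sed "s/@BRANCH@/$BRANCH/"  breaks whenever
    $BRANCH contains "/" (feature/foo, 1156/merge); str.replace
    is indifferent to the branch's characters.
    """
    return template.replace("@BRANCH@", branch)
```

(The equivalent sed fix is to pick a delimiter that cannot appear in a branch name, e.g. `s|@BRANCH@|$BRANCH|`.)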
Show the exact definition, tanh approximation, and sigmoid approximation
side by side so students understand where 1.702 comes from and why we
chose the sigmoid form. Avoids erf notation in favor of plain-language
description of Φ(x) appropriate for Module 2 students.
Related to harvard-edge/cs249r_book#1154
The hint claimed 1.702 comes from √(2/π) ≈ 0.798, which is incorrect.
The 1.702 constant is empirically fitted so that sigmoid(1.702x) ≈ Φ(x),
the Gaussian CDF. The √(2/π) constant appears in the separate tanh-based
GELU approximation, not the sigmoid approximation used here.
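The three forms side by side, which makes the correction concrete: 1.702 lives in the sigmoid form, and sqrt(2/pi) ~ 0.798 lives only in the tanh form.

```python
import math

def gelu_exact(x):
    # GELU(x) = x * Phi(x), where Phi is the standard normal CDF.
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_sigmoid(x):
    # Sigmoid approximation: sigmoid(1.702 * x) ~ Phi(x).
    # 1.702 is an empirically fitted constant, not sqrt(2/pi).
    return x / (1.0 + math.exp(-1.702 * x))

def gelu_tanh(x):
    # Tanh approximation: this is where sqrt(2/pi) ~ 0.798 appears.
    c = math.sqrt(2.0 / math.pi)
    return 0.5 * x * (1.0 + math.tanh(c * (x + 0.044715 * x ** 3)))
```

Evaluating all three at a few points shows both approximations track x * Phi(x) closely, with the tanh form slightly tighter.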
Fixes harvard-edge/cs249r_book#1154
- Clarify that attention time complexity is O(n²×d), not O(n²), since each
of the n² query-key pairs requires a d-dimensional dot product
- Fix Total Memory column in analyze_attention_memory_overhead() which was
duplicating the Optimizer column instead of summing all components
- Update KEY INSIGHT multiplier from 4x to 7x to match corrected total
Fixes harvard-edge/cs249r_book#1150
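The complexity point alone, as a sketch (this only models the score computation, not the chapter's full memory breakdown):

```python
def attention_score_flops(n, d):
    """Multiply-adds for Q @ K^T, up to a constant factor.

    Each of the n^2 query-key pairs costs a d-dimensional dot product,
    so time is O(n^2 * d), not O(n^2): doubling d doubles the work even
    at fixed sequence length.
    """
    return n * n * d
```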
- Add pre-render hook to clear stale LaTeX data between builds
- Add post-render hook to generate FIGURE_LIST.txt in output dir
- LaTeX captures figure numbers and pages during compilation
- Use deferred write for accurate page numbers (after float placement)
- Python merges with QMD captions and alt-text
- Output automatically appears in _build/pdf-vol1/ after each build
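The merge step can be sketched as below; the record format (label, number, page) and the output line format are hypothetical stand-ins for whatever the pipeline actually captures:

```python
def merge_figures(latex_records, qmd_captions):
    """Join LaTeX-captured figure numbers/pages with QMD captions.

    latex_records: iterable of (label, number, page) from the LaTeX pass.
    qmd_captions: dict mapping figure label -> caption text.
    """
    lines = []
    for label, number, page in latex_records:
        caption = qmd_captions.get(label, "")
        lines.append(f"Figure {number} (p. {page}): {caption}")
    return "\n".join(lines)
```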
Quarto requires #| directives to be at the start of code blocks.
Fixed 93+ code blocks across 15 files where imports came before
the echo: false directive, causing code to be visible in PDFs.
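The rule Quarto enforces, expressed as a small checker (illustrative only, not part of the repo's tooling): a cell like `import x` followed by `#| echo: false` demotes the directive to an ordinary comment, while `#| echo: false` followed by `import x` works.

```python
def directives_at_top(cell_lines):
    """True if every '#|' option comment precedes any code in the cell,
    mirroring Quarto's requirement that directives lead the block."""
    seen_code = False
    for line in cell_lines:
        stripped = line.strip()
        if not stripped:
            continue
        if stripped.startswith("#|"):
            if seen_code:
                return False  # directive after code: Quarto ignores it
        elif not stripped.startswith("#"):
            seen_code = True
    return True
```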
- Added PYTHONPATH='.' to quarto execute config
- Modified viz.setup_plot() to return (fig, ax, COLORS, plt)
- Cleaned up all plotting cells to use simple imports
- No more sys.path manipulation needed in individual cells
Replaces viz.plot_*() calls with actual plotting code while keeping
viz.setup_plot() for consistent styling. Pattern: data and plotting
logic are now visible in the QMD; style comes from the viz module.
Files updated:
- data_engineering.qmd (2 plots)
- dnn_architectures.qmd (2 plots)
- data_selection.qmd (3 plots)
- frameworks.qmd (2 plots)
- model_compression.qmd (1 plot)
- hw_acceleration.qmd (4 plots)
- dl_primer.qmd (1 plot)
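A sketch of the resulting QMD cell pattern. setup_plot() here is a stand-in for the repo's viz helper (which returns the same 4-tuple per the change above); the data and color key are made up for illustration.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

def setup_plot():
    # Stand-in: the real viz.setup_plot() applies the book's shared styling.
    COLORS = {"primary": "#A51C30"}  # hypothetical palette entry
    fig, ax = plt.subplots()
    return fig, ax, COLORS, plt

fig, ax, COLORS, plt = setup_plot()

# Data and plotting logic live in the QMD cell, visibly...
years = [2018, 2020, 2022, 2024]
params_b = [0.1, 1.5, 70, 400]
ax.plot(years, params_b, marker="o", color=COLORS["primary"])
ax.set_xlabel("Year")
ax.set_ylabel("Parameters (billions)")
# ...while styling comes from the shared helper. No sys.path hacks needed.
```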