Escape unescaped & characters in references.bib (Taylor & Francis,
AI & Machine-Learning) and replace Unicode em-dashes (U+2014) with
LaTeX --- ligatures in paper.tex for T1 font compatibility.
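Both fixes are simple text substitutions; a minimal Python sketch of the two rewrites (the inline sample strings are hypothetical stand-ins for the contents of references.bib and paper.tex):

```python
import re

# Hypothetical sample contents standing in for references.bib and paper.tex.
bib_text = r"Taylor & Francis and AI & Machine-Learning"
tex_text = "systems education\u2014from tensors to systems\u2014in one course"

# Escape bare & characters (those not already written as \&) for BibTeX/LaTeX.
escaped_bib = re.sub(r"(?<!\\)&", r"\\&", bib_text)

# Replace literal U+2014 em-dashes with the LaTeX --- ligature for T1 fonts.
fixed_tex = tex_text.replace("\u2014", "---")

print(escaped_bib)
print(fixed_tex)
```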
- Remove "Tensors to Systems" from subtitle for a cleaner title
- Fix @article entries that should be @inproceedings (kannan2022astrasim,
micikevicius2018mixed, strubell2019energy, vaswani2017attention)
- Remove duplicate booktitle field from williams2009roofline
- Standardize year fields across entries
- New title: "TinyTorch: Building Machine Learning Systems from First
Principles: Tensors to Systems"
- Add strategic mentions of AI engineering as an emerging discipline
- Update competency matrix caption to reference AI engineer competencies
- Update conclusion to position TinyTorch as training AI engineers
- Add SEI AI Engineering workshop report reference
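The entry-type fix above, sketched on the vaswani2017attention entry (NeurIPS 2017 as noted below; exact field formatting in the actual references.bib may differ):

```bibtex
@inproceedings{vaswani2017attention,
  title     = {Attention Is All You Need},
  author    = {Vaswani, Ashish and Shazeer, Noam and Parmar, Niki and
               Uszkoreit, Jakob and Jones, Llion and Gomez, Aidan N. and
               Kaiser, {\L}ukasz and Polosukhin, Illia},
  booktitle = {Advances in Neural Information Processing Systems},
  volume    = {30},
  year      = {2017}
}
```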
Trim detailed phase-by-phase validation plan to a concise summary.
Removes specific dates, sample sizes, and instrument names that would
age poorly. Keeps the Open Science Commitment and general validation
approach. Also removes two orphaned references (paas1992training,
sorva2012visual) that were only cited in the removed text.
- Switch from natbib to biblatex for better author truncation control
- Fix package structure references (tinytorch.nn.conv → tinytorch.core.spatial)
- Fix import examples to use actual tinytorch API patterns
- Fix class references (Transformer → GPT, Attention → MultiHeadAttention)
- Correct Adam coefficient from 0.001 to 0.01
- Fix 12 bibliography entries with wrong/corrupted data:
- abelson1996sicp, bruner1960process, hotz2023tinygrad
- tanenbaum1987minix, perkins1992transfer, papert1980mindstorms
- vygotsky1978mind, blank2019nbgrader, roberthalf2024talent
- keller2025ai, pytorch04release, tensorflow20
- Fix organization author names using double braces
- Configure maxbibnames=10 for "et al." truncation in bibliography
All 60 references verified via web search for arXiv submission.
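A sketch of the preamble change this describes; the `backend=biber` option and `\printbibliography` call are assumptions beyond the `maxbibnames=10` setting named above:

```latex
% Before: \usepackage{natbib} with \bibliography{references}
\usepackage[backend=biber, maxbibnames=10]{biblatex}
\addbibresource{references.bib}
% ... document body ...
\printbibliography
```

For the organization-author fix, double braces in references.bib keep biblatex from splitting the name into given/family parts, e.g. `author = {{Robert Half}}`.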
Add MIT's xv6 teaching OS to the Related Work section alongside
Nachos and Pintos. xv6's x86-to-RISC-V transition exemplifies the
strip-to-essentials philosophy that TinyTorch follows.
Also link to the research paper from the preface for readers
interested in the pedagogical foundations.
Fixed citations that pointed to wrong papers:
- papert1980mindstorms: was citing book review → actual 1980 Basic Books
- bruner1960process: was citing 2023 medical paper → 1960 Harvard book
- perkins1992transfer: was citing Nature child mortality → 1992 encyclopedia
- collins1989cognitive: had 2018 reprint DOI → original 1989 publication
- vygotsky1978mind: malformed entry → proper 1978 formatting
- abelson1996sicp: was citing book review → MIT Press SICP
- roberthalf2024talent: was citing artists' wages article → tech talent report
- kannan2022astrasim: was citing robot paper → ASTRA-sim2.0 ISPASS 2023
- hotz2023tinygrad: was citing GitHub prediction paper → tinygrad framework
- tanenbaum1987minix: was citing cancer research → MINIX OS book
- vaswani2017attention: had fake 2025 date → NeurIPS 2017
Removed unused entries:
- astrasimsim2020, chakkaravarthy2023astrasim, sergeev2018horovod
- rasley2020deepspeed, baydin2018automatic, chen2018tvm, paszke2017automatic
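Orphaned entries like these can be found mechanically by diffing cited keys against defined keys; a sketch with hypothetical inline contents standing in for paper.tex and references.bib:

```python
import re

# Hypothetical inline contents; a real script would read paper.tex and references.bib.
tex = r"\cite{vaswani2017attention} and \citep{hotz2023tinygrad}"
bib = """
@inproceedings{vaswani2017attention, year={2017}}
@misc{hotz2023tinygrad, year={2023}}
@misc{sergeev2018horovod, year={2018}}
"""

# Keys actually cited in the .tex source (handles \cite, \citep, \citet, ...).
cited = set()
for group in re.findall(r"\\cite[a-z]*\{([^}]*)\}", tex):
    cited.update(key.strip() for key in group.split(","))

# Keys defined in the .bib file.
defined = set(re.findall(r"@\w+\{([^,\s]+),", bib))

orphans = sorted(defined - cited)
print(orphans)  # entries safe to remove
```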
Presents the TinyTorch curriculum architecture, detailing learning theories, the ML Systems Competency Matrix, and the module structure.
Introduces a competency framework that operationalizes pedagogical goals into measurable outcomes, guiding the curriculum design to ensure systematic coverage of ML systems fundamentals.
Add new subsection positioning TinyTorch within the canonical tradition
of build-to-understand systems education: MINIX, SICP, Tiger compiler,
Nachos, and Pintos. This strengthens the paper by showing TinyTorch
follows a proven 50-year pedagogical pattern.
New references: tanenbaum1987minix, abelson1996sicp, appel2004tiger,
christopher1993nachos, pfaff2004pintos