Remove ungrounded claims from paper

- Remove fabricated claim about quantum ML and robotics community forks
  that don't actually exist
- Remove "quantum ML" from conclusion's list of future fork variants
- Change "validated through pilot implementations" to "designed for
  diverse institutional contexts" since validation is planned future work
Vijay Janapa Reddi
2026-01-26 18:34:22 -05:00
parent 79ff0cd44a
commit a5f8a41100


@@ -882,7 +882,7 @@ The Optimization Tier completes the systems-first integration arc: students who
\section{Course Deployment}
\label{sec:deployment}
Translating curriculum design into effective classroom practice requires addressing integration models, infrastructure accessibility, and student support structures. This section presents deployment patterns validated through pilot implementations and institutional feedback.
Translating curriculum design into effective classroom practice requires addressing integration models, infrastructure accessibility, and student support structures. This section presents deployment patterns designed for diverse institutional contexts.
\textbf{Textbook Integration.}
TinyTorch serves as the hands-on implementation companion to the \emph{Machine Learning Systems} textbook~\citep{mlsysbook2025} (\texttt{mlsysbook.ai}), creating synergy between theoretical foundations and systems engineering practice. While the textbook covers the full ML lifecycle—data engineering, training architectures, deployment monitoring, robust operations, and sustainable AI—TinyTorch provides the complementary experience of building core infrastructure from first principles. This integration enables a complete educational pathway: students study production ML systems architecture in the textbook (Chapter 4: distributed training patterns, Chapter 7: quantization strategies), then implement those same abstractions in TinyTorch (Module 06: autograd for backpropagation, Module 15: INT8 quantization). The two resources address different aspects of the same educational gap: understanding both \emph{how production systems work} (textbook's systems architecture perspective) and \emph{how to build them yourself} (TinyTorch's implementation depth).
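For readers who want a sense of what an implementation module such as the INT8 quantization one involves, the sketch below shows symmetric per-tensor INT8 quantization in plain NumPy. It is an illustrative example only and assumes nothing about TinyTorch's actual Module 15 API; the function names quantize_int8 and dequantize_int8 are hypothetical.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor INT8 quantization (illustrative, not TinyTorch's API)."""
    # Scale maps the largest-magnitude float weight onto the INT8 range [-127, 127].
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights; the rounding error is the quantization loss."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.random.randn(256, 256).astype(np.float32)
    q, scale = quantize_int8(w)
    w_hat = dequantize_int8(q, scale)
    # INT8 storage is 4x smaller than FP32; per-weight error is bounded by scale/2.
    print("bytes fp32:", w.nbytes, "bytes int8:", q.nbytes)
    print("max abs error:", np.abs(w - w_hat).max())
```

The systems lesson such an exercise surfaces is the trade-off itself: a 4x reduction in weight storage purchased with a bounded rounding error.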
@@ -1108,7 +1108,7 @@ TinyTorch deliberately focuses on single node ML systems fundamentals: understan
\textbf{Hardware Simulation Integration.}
TinyTorch's current profiling infrastructure—memory tracking (tracemalloc), FLOP counting, and performance benchmarking—provides algorithmic-level performance analysis. A natural extension would integrate architecture simulators (e.g., scale-sim~\citep{samajdar2018scale}, timeloop~\citep{parashar2019timeloop}, astra-sim~\citep{kannan2022astrasim}) to connect high-level ML operations with cycle-accurate hardware models. This layered approach mirrors real ML systems engineering: students first understand algorithmic complexity and memory patterns in TinyTorch, then trace those operations down to microarchitectural performance in simulators. Such integration would complete the educational arc from algorithmic implementation $\rightarrow$ systems profiling $\rightarrow$ hardware realization, enabling first-principle analysis of how model architecture, system configuration (memory hierarchy, compute units, interconnects), and hardware substrate jointly determine production performance. Early discussions with students and collaborators suggest strong pedagogical value in this systems-to-hardware pipeline, maintaining TinyTorch's accessibility (no GPU hardware required) while preparing students for hardware-aware optimization through measurement-driven analysis rather than black-box experimentation.
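To make the algorithmic-level profiling described above concrete, the sketch below measures peak heap allocation with Python's standard tracemalloc module and pairs it with an analytic FLOP count for a dense matrix multiply. It is a simplified stand-in for illustration, not TinyTorch's actual profiling infrastructure; the helper name profile_matmul is hypothetical.

```python
import tracemalloc
import numpy as np

def profile_matmul(m: int, k: int, n: int):
    """Profile a dense matmul: peak heap allocation plus an analytic FLOP count."""
    a = np.random.randn(m, k).astype(np.float32)
    b = np.random.randn(k, n).astype(np.float32)

    tracemalloc.start()
    c = a @ b                      # the operation under measurement
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()

    flops = 2 * m * k * n          # one multiply + one add per inner-product term
    return {"peak_bytes": peak, "flops": flops, "output_shape": c.shape}

if __name__ == "__main__":
    print(profile_matmul(512, 512, 512))
```

Starting tracemalloc only around the operation isolates the matmul's own allocations (dominated here by the output buffer) from the setup of the inputs, which is the kind of measurement-driven analysis the paragraph above argues for.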
\textbf{Architecture Extensions.} Potential additions (graph neural networks, diffusion models, reinforcement learning) must justify inclusion through systems pedagogy rather than completeness. The question is not ``Can TinyTorch implement this?'' but rather ``Does implementing this teach fundamental systems concepts unavailable through existing modules?'' Graph convolutions might teach sparse tensor operations; diffusion models might illuminate iterative refinement trade-offs. However, extensions succeed only when maintaining TinyTorch's principle: \textbf{every line of code teaches a systems concept}. Community forks demonstrate this philosophy: quantum ML variants replace tensors with quantum state vectors (teaching circuit depth versus memory); robotics forks emphasize RL simulation overhead and real-time constraints. The curriculum remains intentionally incomplete as a production framework: completeness lies in foundational systems thinking applicable across all ML architectures.
\textbf{Architecture Extensions.} Potential additions (graph neural networks, diffusion models, reinforcement learning) must justify inclusion through systems pedagogy rather than completeness. The question is not ``Can TinyTorch implement this?'' but rather ``Does implementing this teach fundamental systems concepts unavailable through existing modules?'' Graph convolutions might teach sparse tensor operations; diffusion models might illuminate iterative refinement trade-offs. However, extensions succeed only when maintaining TinyTorch's principle: \textbf{every line of code teaches a systems concept}. The curriculum remains intentionally incomplete as a production framework: completeness lies in foundational systems thinking applicable across all ML architectures.
\subsection{Community and Sustainability}
@@ -1129,7 +1129,7 @@ Three pedagogical contributions enable this transformation. \textbf{Progressive
\textbf{For educators and bootcamp instructors}: TinyTorch supports flexible integration: self-paced learning requiring zero infrastructure (students run locally on laptops), institutional courses with automated NBGrader assessment, or industry team onboarding for ML engineers transitioning from application development to systems work. The modular structure enables selective adoption: foundation tier only (Modules 01--08, teaching core concepts), architecture focus (adding CNNs and Transformers through Module 13), or complete systems coverage (all 20 modules including optimization and deployment). No GPU access required, no cloud credits needed, no infrastructure barriers.
The complete codebase, curriculum materials, and assessment infrastructure are openly available at \texttt{mlsysbook.ai/tinytorch} (or \texttt{tinytorch.ai}) under permissive open-source licensing. We invite the global ML education community to adopt TinyTorch in courses, contribute curriculum improvements, translate materials for international accessibility, fork for domain-specific variants (quantum ML, robotics, edge AI), and empirically evaluate whether implementation-based pedagogy achieves its promise.
The complete codebase, curriculum materials, and assessment infrastructure are openly available at \texttt{mlsysbook.ai/tinytorch} (or \texttt{tinytorch.ai}) under permissive open-source licensing. We invite the global ML education community to adopt TinyTorch in courses, contribute curriculum improvements, translate materials for international accessibility, fork for domain-specific variants (robotics, edge AI), and empirically evaluate whether implementation-based pedagogy achieves its promise.
Sutton's Bitter Lesson teaches that general methods leveraging computation ultimately triumph, but someone must build the systems that enable that computation to scale. TinyTorch prepares students to be those engineers: practitioners who understand not just \emph{what} ML systems do, but \emph{why} they work and \emph{how} to make them scale. The difference between knowing frameworks and understanding them begins with what's inside \texttt{loss.backward()}, and TinyTorch makes that understanding accessible to everyone.
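As a closing illustration of what sits behind loss.backward(), the sketch below is a deliberately tiny scalar reverse-mode autodiff engine: each operation records its parents and a local chain-rule step, and backward() replays those steps in reverse topological order. It is only a hint of the bookkeeping a real tensor autograd engine performs, not TinyTorch's Module 06; the Value class and its methods are illustrative.

```python
# Minimal scalar reverse-mode autodiff: a sketch of the bookkeeping behind
# loss.backward(), not TinyTorch's actual Module 06 implementation.
class Value:
    def __init__(self, data, parents=(), backward_fn=lambda: None):
        self.data = data
        self.grad = 0.0
        self._parents = parents          # nodes this value was computed from
        self._backward = backward_fn     # propagates grad to parents

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def backward_fn():
            self.grad += out.grad        # d(out)/d(self) = 1
            other.grad += out.grad       # d(out)/d(other) = 1
        out._backward = backward_fn
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def backward_fn():
            self.grad += other.data * out.grad   # chain rule
            other.grad += self.data * out.grad
        out._backward = backward_fn
        return out

    def backward(self):
        # Topologically order the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

if __name__ == "__main__":
    w, x, b = Value(2.0), Value(3.0), Value(1.0)
    loss = w * x + b
    loss.backward()
    print(w.grad, x.grad, b.grad)   # 3.0, 2.0, 1.0
```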