mirror of
https://github.com/harvard-edge/cs249r_book.git
synced 2026-03-11 17:49:25 -05:00
Rewrite abstract in formal academic style
@@ -220,7 +220,7 @@
% Abstract - REVISED: Curriculum design focus
\begin{abstract}
-Machine learning systems engineering requires understanding framework internals: why optimizers consume memory, when complexity becomes prohibitive, and how to navigate trade-offs among accuracy, latency, and memory. Yet current machine learning education separates algorithms from systems. Students learn gradient descent without measuring memory and attention mechanisms without profiling cost. This divide leaves graduates unable to debug production failures, widening the gap between research and deployment. We present TinyTorch, a twenty-module curriculum where students implement PyTorch's core components (tensors, autograd, optimizers, neural networks) entirely in pure Python. Three pedagogical patterns address this gap. First, progressive disclosure gradually reveals complexity: gradient tracking exists from Module 01 but activates only in Module 06. Second, systems-first integration embeds memory profiling from the start rather than treating it as an advanced topic. Third, historical milestone validation guides students to recreate major breakthroughs, from the Perceptron (1958) to Transformers, using exclusively student-built code. The curriculum requires no GPU, only 4 GB of RAM and a laptop, making systems education globally accessible. The goal is to prepare the next generation of \emph{AI engineers}: practitioners who understand not only what machine learning systems do, but why they work and how to make them scale. Open-source at \texttt{mlsysbook.ai/tinytorch}.
+Machine learning systems engineering requires understanding framework internals, yet current education separates algorithms from systems: students learn gradient descent without measuring memory, and attention mechanisms without profiling computational cost. This divide leaves graduates unable to debug production failures, widening the gap between ML research and reliable deployment. We present TinyTorch, a 20-module curriculum in which students implement PyTorch's core components (tensors, autograd, optimizers, neural networks) entirely in pure Python. The curriculum employs three pedagogical patterns: progressive disclosure, which gradually reveals complexity; systems-first integration, which embeds memory profiling from the start; and historical milestone validation, which guides students to recreate breakthroughs from the Perceptron (1958) to Transformers using exclusively student-built code. Requiring only a laptop with 4 GB of RAM and no GPU, TinyTorch makes ML systems education globally accessible. The goal is to prepare the next generation of AI engineers: practitioners who understand not only what ML systems do, but why they work and how to make them scale. The curriculum is available open-source at \texttt{mlsysbook.ai/tinytorch}.
\end{abstract}
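The progressive-disclosure pattern named in the abstract (a gradient-tracking flag present from Module 01 but dormant until autograd arrives in Module 06) can be sketched as follows. This is a hypothetical illustration, not TinyTorch's actual code: the `Tensor` class, its `requires_grad` flag, and the `__add__` propagation rule here are assumptions modeled on PyTorch's conventions.

```python
# Hypothetical sketch of progressive disclosure (not TinyTorch's real code):
# the requires_grad flag exists from the first module, even though autograd
# itself is only implemented in a later module.
class Tensor:
    def __init__(self, data, requires_grad=False):
        self.data = data
        self.requires_grad = requires_grad  # present early, dormant until autograd is built
        self.grad = None                    # placeholder; populated once backward() exists

    def __add__(self, other):
        # The flag propagates through operations from day one, so no API
        # change is needed when autograd activates later in the course.
        return Tensor(self.data + other.data,
                      requires_grad=self.requires_grad or other.requires_grad)

a = Tensor(2.0, requires_grad=True)
b = Tensor(3.0)
c = a + b
print(c.data)           # 5.0
print(c.requires_grad)  # True: tracking metadata flows before gradients do
```

The design point is that early modules establish the full interface, so later modules add behavior behind it rather than changing student-facing signatures.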