mirror of
https://github.com/MLSysBook/TinyTorch.git
synced 2026-03-11 18:53:37 -05:00
Remove remaining specific numbers and consolidate milestone validation
ISSUE 1: Residual specific numbers in milestone descriptions
- Line 611: '95%+ MNIST accuracy' in MLP Revival description
- Line 613: '75%+ CIFAR-10 accuracy' in CNN Revolution description

FIX: Removed specific accuracy targets, focus on conceptual achievements:
- MLP Revival: 'trains multi-layer networks end-to-end on MNIST digits'
- CNN Revolution: 'training both MLP and CNN on CIFAR-10 to measure architectural improvements through direct comparison'

ISSUE 2: 'Success Validation' subsection repeated milestone list

Lines 625-632 listed all 6 milestones again with validation criteria, creating redundancy with 'The Six Historical Milestones' (lines 606-618) just above.

ANALYSIS OF DISTINCT PURPOSES:
- 'The Six Historical Milestones' (606-618): WHAT each milestone is, WHEN it happens, WHAT students import/build (historical framing + integration)
- 'Success Validation' (622-632): HOW to validate correctness (validation approach)

FIX: Consolidated 'Success Validation' from itemized milestone list into concise validation philosophy paragraph:
- Explains validation approach: task-appropriate results, not optimization
- Gives examples across categories: simple problems converge, complex datasets show learning, generative models produce coherent outputs
- Emphasizes correctness over speed: 'implementations prove correct by solving real tasks, not by passing synthetic unit tests alone'
- Connects to professional practice: mirrors debugging approach

RESULT:
- Eliminated 6-item redundant list
- Reduced from 12 lines to 4 lines
- Clearer distinct purpose: milestone descriptions vs validation philosophy
- No loss of information, better organization

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
This commit is contained in:
@@ -608,9 +608,9 @@ Second, \textbf{implementation validation beyond unit tests}: Milestones differ
 \item \textbf{1969 XOR Solution} (after Module 07): Solve Minsky's ``impossible'' XOR problem with multi-layer perceptrons, proving critics wrong. Validates that autograd enables non-linear learning.
-\item \textbf{1986 MLP Revival} (after Module 07): Handwritten digit recognition demonstrating backpropagation's power. Requires Modules 01--07 working together (tensor operations, activations, layers, losses, autograd, optimizers, training). Students import \texttt{from tinytorch.optim import SGD; from tinytorch.nn import CrossEntropyLoss}---their framework trains multi-layer networks end-to-end targeting 95\%+ MNIST accuracy.
+\item \textbf{1986 MLP Revival} (after Module 07): Handwritten digit recognition demonstrating backpropagation's power. Requires Modules 01--07 working together (tensor operations, activations, layers, losses, autograd, optimizers, training). Students import \texttt{from tinytorch.optim import SGD; from tinytorch.nn import CrossEntropyLoss}---their framework trains multi-layer networks end-to-end on MNIST digits.
-\item \textbf{1998 CNN Revolution} (after Module 09): Image classification demonstrating convolutional architectures' advantage through targeting 75\%+ CIFAR-10 accuracy~\citep{krizhevsky2009cifar,lecun1998gradient}---the ``north star'' achievement validating framework correctness. Students import \texttt{from tinytorch.nn import Conv2d, MaxPool2d}, training both MLP and CNN on identical data to measure architectural improvements themselves.
+\item \textbf{1998 CNN Revolution} (after Module 09): Image classification demonstrating convolutional architectures' advantage~\citep{krizhevsky2009cifar,lecun1998gradient}. Students import \texttt{from tinytorch.nn import Conv2d, MaxPool2d}, training both MLP and CNN on CIFAR-10 to measure architectural improvements themselves through direct comparison.
 \item \textbf{2017 Transformer Era} (after Module 13): Language generation with attention-based architecture. Validates that attention mechanisms, positional embeddings, and autoregressive sampling function correctly through coherent text generation.
@@ -619,18 +619,8 @@ Second, \textbf{implementation validation beyond unit tests}: Milestones differ
 Each milestone: (1) recreates actual breakthroughs using exclusively student code, (2) uses \emph{only} TinyTorch implementations (no PyTorch/TensorFlow), (3) validates success through task-appropriate performance, and (4) demonstrates architectural comparisons showing why new approaches improved over predecessors.

-\noindent\textbf{Success Validation:}
-Each milestone validates implementation correctness through task-appropriate performance (not state-of-the-art results). Success criteria balance historical plausibility with pedagogical validation---implementations must be functionally correct, not just syntactically complete:
-
-\begin{itemize}[leftmargin=*, itemsep=1pt, parsep=0pt]
-\item \textbf{M03 (1958 Perceptron)}: Solves linearly separable problems (e.g., 4-point OR/AND tasks), demonstrating basic gradient descent convergence.
-\item \textbf{M06 (1969 XOR Solution)}: Solves XOR classification, proving multi-layer networks handle non-linear problems that single layers cannot.
-\item \textbf{M07 (1986 MLP Revival)}: Achieves strong MNIST digit classification accuracy, validating backpropagation through all layers of deep networks.
-\item \textbf{M10 (1998 LeNet CNN)}: Demonstrates meaningful CIFAR-10 learning (substantially better than random 10\% baseline), showing convolutional feature extraction works correctly.
-\item \textbf{M13 (2017 Transformer)}: Generates coherent multi-token text continuations on TinyTalks dataset, demonstrating functional attention mechanisms and autoregressive generation.
-\item \textbf{M20 (2024 AI Olympics)}: Student-selected challenge across Vision/Language/Speed/Compression tracks with self-defined success metrics, demonstrating production systems integration.
-\end{itemize}
-
-Performance targets differ from published state-of-the-art due to pure-Python constraints (no GPU acceleration, simplified architectures). Correctness matters more than speed: if a student's CNN learns meaningful CIFAR-10 features, their convolution, pooling, and backpropagation implementations compose correctly into a functional vision system.
+\noindent\textbf{Validation Approach:}
+Milestone success validates implementation correctness, not performance optimization. Students demonstrate functional implementations through task-appropriate results: simple problems converge (Perceptron solves linearly separable tasks, MLPs solve XOR), complex datasets show learning (MNIST/CIFAR-10 accuracy substantially exceeds random baselines), and generative models produce coherent outputs (Transformers generate meaningful text continuations). Performance differs from published state-of-the-art due to pure-Python constraints, but correctness matters more than speed---if a student's CNN learns meaningful CIFAR-10 features, their convolution, pooling, and backpropagation implementations compose correctly into a functional vision system. This approach mirrors professional debugging: implementations prove correct by solving real tasks, not by passing synthetic unit tests alone.

 \section{Progressive Disclosure}
 \label{sec:progressive}
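For context on the M03 success criterion mentioned in the diff (a perceptron solving 4-point linearly separable tasks such as OR), the classic perceptron learning rule suffices; this plain-Python sketch is an aside, not code from the TinyTorch repository, and its variable names are the sketch's own.

```python
# Single-layer perceptron trained with the classic perceptron learning rule
# on the 4-point OR task (linearly separable, so convergence is guaranteed).
w = [0.0, 0.0]
b = 0.0
or_data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

def step(x):
    # Hard-threshold activation: fire iff the weighted sum is positive.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):  # far more passes than the handful needed to converge
    for x, t in or_data:
        err = t - step(x)
        # Learning rule: nudge weights toward misclassified examples.
        w[0] += err * x[0]
        w[1] += err * x[1]
        b += err

preds = [step(x) for x, _ in or_data]
print(preds)  # OR truth table: [0, 1, 1, 1]
```

The same loop run on XOR labels would cycle forever without converging, which is exactly the contrast the M03-vs-M06 milestone pair is designed to make students observe.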