diff --git a/paper/paper.tex b/paper/paper.tex
index d5af7942..63b28d0a 100644
--- a/paper/paper.tex
+++ b/paper/paper.tex
@@ -608,9 +608,9 @@ Second, \textbf{implementation validation beyond unit tests}: Milestones differ
 
 \item \textbf{1969 XOR Solution} (after Module 07): Solve Minsky's ``impossible'' XOR problem with multi-layer perceptrons, proving critics wrong. Validates that autograd enables non-linear learning.
 
-\item \textbf{1986 MLP Revival} (after Module 07): Handwritten digit recognition demonstrating backpropagation's power. Requires Modules 01--07 working together (tensor operations, activations, layers, losses, autograd, optimizers, training). Students import \texttt{from tinytorch.optim import SGD; from tinytorch.nn import CrossEntropyLoss}---their framework trains multi-layer networks end-to-end targeting 95\%+ MNIST accuracy.
+\item \textbf{1986 MLP Revival} (after Module 07): Handwritten digit recognition demonstrating backpropagation's power. Requires Modules 01--07 working together (tensor operations, activations, layers, losses, autograd, optimizers, training). Students import \texttt{from tinytorch.optim import SGD; from tinytorch.nn import CrossEntropyLoss}---their framework trains multi-layer networks end-to-end on MNIST digits.
 
-\item \textbf{1998 CNN Revolution} (after Module 09): Image classification demonstrating convolutional architectures' advantage through targeting 75\%+ CIFAR-10 accuracy~\citep{krizhevsky2009cifar,lecun1998gradient}---the ``north star'' achievement validating framework correctness. Students import \texttt{from tinytorch.nn import Conv2d, MaxPool2d}, training both MLP and CNN on identical data to measure architectural improvements themselves.
+\item \textbf{1998 CNN Revolution} (after Module 09): Image classification demonstrating convolutional architectures' advantage~\citep{krizhevsky2009cifar,lecun1998gradient}. Students import \texttt{from tinytorch.nn import Conv2d, MaxPool2d}, training both MLP and CNN on CIFAR-10 to measure architectural improvements themselves through direct comparison.
 
 \item \textbf{2017 Transformer Era} (after Module 13): Language generation with attention-based architecture. Validates that attention mechanisms, positional embeddings, and autoregressive sampling function correctly through coherent text generation.
 
@@ -619,18 +619,8 @@ Second, \textbf{implementation validation beyond unit tests}: Milestones differ
 
 Each milestone: (1) recreates actual breakthroughs using exclusively student code, (2) uses \emph{only} TinyTorch implementations (no PyTorch/TensorFlow), (3) validates success through task-appropriate performance, and (4) demonstrates architectural comparisons showing why new approaches improved over predecessors.
 
-\noindent\textbf{Success Validation:}
-Each milestone validates implementation correctness through task-appropriate performance (not state-of-the-art results). Success criteria balance historical plausibility with pedagogical validation---implementations must be functionally correct, not just syntactically complete:
-
-\begin{itemize}[leftmargin=*, itemsep=1pt, parsep=0pt]
-  \item \textbf{M03 (1958 Perceptron)}: Solves linearly separable problems (e.g., 4-point OR/AND tasks), demonstrating basic gradient descent convergence.
-  \item \textbf{M06 (1969 XOR Solution)}: Solves XOR classification, proving multi-layer networks handle non-linear problems that single layers cannot.
-  \item \textbf{M07 (1986 MLP Revival)}: Achieves strong MNIST digit classification accuracy, validating backpropagation through all layers of deep networks.
-  \item \textbf{M10 (1998 LeNet CNN)}: Demonstrates meaningful CIFAR-10 learning (substantially better than random 10\% baseline), showing convolutional feature extraction works correctly.
-  \item \textbf{M13 (2017 Transformer)}: Generates coherent multi-token text continuations on TinyTalks dataset, demonstrating functional attention mechanisms and autoregressive generation.
-  \item \textbf{M20 (2024 AI Olympics)}: Student-selected challenge across Vision/Language/Speed/Compression tracks with self-defined success metrics, demonstrating production systems integration.
-\end{itemize}
-Performance targets differ from published state-of-the-art due to pure-Python constraints (no GPU acceleration, simplified architectures). Correctness matters more than speed: if a student's CNN learns meaningful CIFAR-10 features, their convolution, pooling, and backpropagation implementations compose correctly into a functional vision system.
+\noindent\textbf{Validation Approach:}
+Milestone success validates implementation correctness, not performance optimization. Students demonstrate functional implementations through task-appropriate results: simple problems converge (Perceptron solves linearly separable tasks, MLPs solve XOR), complex datasets show learning (MNIST/CIFAR-10 accuracy substantially exceeds random baselines), and generative models produce coherent outputs (Transformers generate meaningful text continuations). Performance differs from published state-of-the-art due to pure-Python constraints, but correctness matters more than speed---if a student's CNN learns meaningful CIFAR-10 features, their convolution, pooling, and backpropagation implementations compose correctly into a functional vision system. This approach mirrors professional debugging: implementations prove correct by solving real tasks, not by passing synthetic unit tests alone.
 
 \section{Progressive Disclosure}
 \label{sec:progressive}
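The XOR milestone that the patch describes can be made concrete with a minimal sketch. This is a hypothetical pure-Python illustration, not the TinyTorch API: a one-hidden-layer network with hand-derived backpropagation learns XOR, the task no single-layer perceptron can represent. All names and hyperparameters here are illustrative choices.

```python
# Hypothetical illustration of the "1969 XOR Solution" milestone:
# a single-layer perceptron cannot fit XOR, but one hidden layer can.
# Pure Python, no frameworks; gradients are derived by hand.
import math
import random

random.seed(0)
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [0, 1, 1, 0]  # XOR truth table

H = 8  # hidden units (over-provisioned so convergence is robust)
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
lr = 0.5

def forward(x):
    # tanh hidden layer, sigmoid output
    h = [math.tanh(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    z = sum(w2[j] * h[j] for j in range(H)) + b2
    p = 1.0 / (1.0 + math.exp(-z))
    return h, p

for epoch in range(3000):  # per-sample SGD
    for x, y in zip(X, Y):
        h, p = forward(x)
        dz = p - y  # gradient of binary cross-entropy w.r.t. pre-sigmoid output
        for j in range(H):
            dh = dz * w2[j] * (1.0 - h[j] ** 2)  # back through tanh
            w2[j] -= lr * dz * h[j]
            w1[j][0] -= lr * dh * x[0]
            w1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * dz

preds = [round(forward(x)[1]) for x in X]
print(preds)  # XOR targets are [0, 1, 1, 0]
```

Note the validation style matches the paper's argument: success is "the network solves the task," not a synthetic unit test, and the same structure scales up to the MNIST and CIFAR-10 milestones once the hand-written loop is replaced by the student-built autograd and optimizer modules.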