
TinyTorch Research Paper

Complete LaTeX source for the TinyTorch research paper.

Quick Start: Get PDF

Option 1: Overleaf

  1. Go to Overleaf.com
  2. Create free account
  3. Upload paper.tex and references.bib
  4. Click "Recompile"
  5. Download PDF

Option 2: Local Compilation

./compile_paper.sh

Requires LaTeX installation (MacTeX or BasicTeX).
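If compile_paper.sh is unavailable, the standard pdflatex/bibtex cycle is a reasonable fallback (a sketch only; the script's actual steps may differ, and it assumes pdflatex and bibtex are on your PATH):

```shell
# Manual compilation fallback for paper.tex + references.bib.
pdflatex paper.tex   # first pass: build structure and .aux file
bibtex paper         # resolve citations from references.bib
pdflatex paper.tex   # second pass: insert bibliography entries
pdflatex paper.tex   # third pass: fix remaining cross-references
```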


Paper Details

  • Format: Two-column LaTeX (conference-standard)
  • Length: ~12-15 pages
  • Sections: 7 complete sections
  • Tables: 3 (framework comparison, learning objectives, performance benchmarks)
  • Code listings: 5 (syntax-highlighted Python examples)
  • References: 22 citations

Key Contributions

  1. Progressive disclosure via monkey-patching - Novel pedagogical pattern
  2. Systems-first curriculum design - Memory/FLOPs from Module 01
  3. Historical milestone validation - 70 years of ML as learning modules
  4. Constructionist framework building - Students build complete ML system

Framed as a design contribution, with empirical validation planned for Fall 2025.
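The monkey-patching pattern behind contribution 1 can be sketched as follows. This is an illustrative example, not TinyTorch's actual API: the class, function names, and the 8-byte-float assumption are all hypothetical.

```python
class Tensor:
    """Module 01: a bare tensor that only stores data (hypothetical sketch)."""
    def __init__(self, data):
        self.data = data

def add(a, b):
    """Module 01: element-wise addition students write early on."""
    return Tensor([x + y for x, y in zip(a.data, b.data)])

def enable_memory_tracking():
    """A later module upgrades Tensor in place: the same class students
    have been using gains memory accounting, with no changes to earlier code."""
    original_init = Tensor.__init__

    def tracked_init(self, data):
        original_init(self, data)
        self.nbytes = len(self.data) * 8  # assumes 8-byte floats (illustrative)

    Tensor.__init__ = tracked_init  # the monkey-patch

enable_memory_tracking()
t = add(Tensor([1.0, 2.0]), Tensor([3.0, 4.0]))
print(t.data, t.nbytes)  # code written in Module 01 now reports memory too
```

The point of the pattern is that earlier modules' code is never edited: later modules reveal systems concerns (here, memory) by patching richer behavior onto classes students already use.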


Submission Venues

  • ArXiv - Immediate (establish priority)
  • SIGCSE 2026 - August deadline (may need 6-page condensed version)
  • ICER 2026 - After classroom data (full empirical study)

Ready for submission! Upload to Overleaf to get your PDF.