TinyTorch Research Paper
Complete LaTeX source for the TinyTorch research paper.
Files
- paper.tex - Main paper (~12-15 pages, two-column format)
- references.bib - Bibliography (22 references)
- compile_paper.sh - Build script (requires LaTeX installation)
Quick Start: Get PDF
Option 1: Overleaf (Recommended)
- Go to Overleaf.com
- Create free account
- Upload paper.tex and references.bib
- Click "Recompile"
- Download PDF
Option 2: Local Compilation
./compile_paper.sh
Requires LaTeX installation (MacTeX or BasicTeX).
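The contents of compile_paper.sh are not reproduced here; a minimal equivalent, assuming paper.tex and references.bib sit in the same directory, is the standard pdflatex/bibtex pass cycle:

```shell
#!/bin/sh
# Hypothetical minimal equivalent of compile_paper.sh (a sketch, not
# the actual script): the usual pass sequence for a BibTeX bibliography.
set -e
pdflatex paper.tex   # first pass: writes paper.aux with citation keys
bibtex paper         # resolves citations against references.bib
pdflatex paper.tex   # second pass: pulls the bibliography into the PDF
pdflatex paper.tex   # third pass: fixes remaining cross-references
```

Running pdflatex twice after bibtex is what stabilizes page numbers and `\ref`/`\cite` links.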
Paper Details
- Format: Two-column LaTeX (conference-standard)
- Length: ~12-15 pages
- Sections: 7 complete sections
- Tables: 3 (framework comparison, learning objectives, performance benchmarks)
- Code listings: 5 (syntax-highlighted Python examples)
- References: 22 citations
Key Contributions
- Progressive disclosure via monkey-patching - Novel pedagogical pattern
- Systems-first curriculum design - Memory/FLOPs from Module 01
- Historical milestone validation - 70 years of ML as learning modules
- Constructionist framework building - Students build complete ML system
The paper is framed as a design contribution, with empirical validation planned for Fall 2025.
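The "progressive disclosure via monkey-patching" pattern can be sketched in a few lines. This is a hypothetical illustration (the `Tensor` class and `_add` helper are invented for this sketch, not TinyTorch's actual API): an early module defines a minimal class, and a later module attaches new capability to it without editing the original file.

```python
# Hypothetical sketch of progressive disclosure via monkey-patching.
# Module 01 defines a bare-bones Tensor wrapping a Python list:

class Tensor:
    """Minimal tensor: just holds data, no operations yet."""
    def __init__(self, data):
        self.data = list(data)

# --- A later module adds elementwise addition by monkey-patching ---
def _add(self, other):
    # Build a new Tensor from pairwise sums of the two data lists.
    return Tensor(a + b for a, b in zip(self.data, other.data))

Tensor.__add__ = _add  # every Tensor, old or new, now supports `+`

t = Tensor([1, 2]) + Tensor([3, 4])  # t.data == [4, 6]
```

The pedagogical point is that students see each capability arrive as a small, isolated patch rather than confronting the full class at once.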
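The "Memory/FLOPs from Module 01" bullet refers to the kind of back-of-the-envelope cost accounting a systems-first curriculum introduces early. A hedged sketch (the function and its parameters are invented for illustration, not taken from the TinyTorch modules):

```python
# Hypothetical sketch: counting FLOPs and bytes for one dense layer,
# the style of resource accounting a systems-first course starts with.

def dense_cost(batch, d_in, d_out, bytes_per_float=4):
    # Each output element needs d_in multiply-accumulates = 2*d_in FLOPs.
    flops = 2 * batch * d_in * d_out
    weight_bytes = d_in * d_out * bytes_per_float       # parameters
    activation_bytes = batch * d_out * bytes_per_float  # layer output
    return flops, weight_bytes + activation_bytes

flops, mem = dense_cost(batch=32, d_in=784, d_out=256)
# flops == 12_845_056, mem == 835_584 bytes
```

Such estimates let students predict performance before running anything, which is the point of teaching memory/FLOPs alongside the first module.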
Submission Venues
- ArXiv - Immediate (establish priority)
- SIGCSE 2026 - August deadline (may need 6-page condensed version)
- ICER 2026 - After classroom data (full empirical study)
Ready for submission! Upload to Overleaf to get your PDF.