- Delete outdated site/ directory
- Rename docs/ → site/ to match original architecture intent
- Update all GitHub workflows to reference site/:
  - publish-live.yml: Update paths and build directory
  - publish-dev.yml: Update paths and build directory
  - build-pdf.yml: Update paths and artifact locations
- Update README.md:
  - Consolidate site/ documentation (website + PDF)
  - Update all docs/ links to site/
- Tested: local build succeeds with all 40 pages
The site/ directory now clearly represents the course website
and documentation, making the repository structure more intuitive.
Package exports:
- Fix tinytorch/__init__.py to export all required components for milestones
- Add Dense as alias for Linear for compatibility
- Add loss functions (MSELoss, CrossEntropyLoss, BinaryCrossEntropyLoss)
- Export spatial operations, data loaders, and transformer components
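A minimal sketch of what the re-export layer might look like; only the exported names (the Dense alias and the loss classes) come from this change, and every module path except tinytorch.core.tensor is an assumption:

```python
# Sketch of tinytorch/__init__.py re-exports. Only the names are taken from
# this change; tinytorch.core.layers and tinytorch.core.losses are assumed paths.
from tinytorch.core.tensor import Tensor
from tinytorch.core.layers import Linear                      # assumed path
from tinytorch.core.losses import (                           # assumed path
    MSELoss,
    CrossEntropyLoss,
    BinaryCrossEntropyLoss,
)

# Compatibility alias so milestone code written against Dense keeps working.
Dense = Linear

__all__ = [
    "Tensor", "Linear", "Dense",
    "MSELoss", "CrossEntropyLoss", "BinaryCrossEntropyLoss",
]
```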
Test infrastructure:
- Create tests/conftest.py to handle path setup
- Create tests/test_utils.py with shared test utilities
- Rename test_progressive_integration.py files to include module number
- Fix syntax errors in test files (spaces in class names)
- Remove stale test file referencing non-existent modules
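The conftest.py path setup might look roughly like this (a sketch, assuming tests import the tinytorch package from the repository root):

```python
# tests/conftest.py -- sketch only; the actual file may differ.
# Make the repository root (which contains tinytorch/) importable no matter
# which directory pytest is invoked from.
import sys
from pathlib import Path

REPO_ROOT = Path(__file__).resolve().parent.parent
if str(REPO_ROOT) not in sys.path:
    sys.path.insert(0, str(REPO_ROOT))
```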
Documentation:
- Update README.md with correct milestone file names
- Fix milestone requirements to match actual module dependencies
Export system:
- Run tito export --all to regenerate package from source modules
- Ensure all 20 modules are properly exported
Changes to README.md:
- Update Quick Start section to use `tito setup` instead of setup-environment.sh
- Change activation command to `source .venv/bin/activate`
- Update system check command from `tito system health` to `tito system doctor`
- Update module commands to use numbers (tito module start 01) instead of full names
- Update milestone/checkpoint commands to current CLI interface
- Change release banner to "December 2025 Pre-Release" with CLI focus
- Add user profile creation to setup features list
Changes to module ABOUT.md files:
- Update activation command from `source scripts/activate-tinytorch` to `source .venv/bin/activate`
- Update system check from `tito system health` to `tito system doctor`
Binder configuration verified:
- requirements.txt includes all necessary dependencies (numpy, rich, PyYAML, jupyter, matplotlib)
- postBuild script correctly installs TinyTorch package
- All deployment environments documented
- Clarifies this is the Harvard/MLSysBook TinyTorch
- Acknowledges 5+ other TinyTorch educational implementations
- Highlights unique features: 20-module curriculum, NBGrader, systems focus
- Links to ML Systems Book ecosystem (mlsysbook.ai/tinytorch)
- Community-positive framing
Major directory restructure to support both developer and learner workflows:
Structure Changes:
- NEW: src/ directory for Python source files (version controlled)
- Files renamed: tensor.py → 01_tensor.py (matches directory naming)
- All 20 modules moved from modules/ to src/
- CHANGED: modules/ now holds generated notebooks (gitignored)
- Generated from src/*.py using jupytext
- Learners work in notebooks, developers work in Python source
- UNCHANGED: tinytorch/ package (still auto-generated from notebooks)
Workflow: src/*.py → modules/*.ipynb → tinytorch/*.py
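The src-to-notebook step of that workflow could be scripted roughly as below; this is a sketch assuming the source files use jupytext's percent format, not the actual tito export implementation:

```python
# Sketch of the src/*.py -> modules/*.ipynb step; `tito export` may do more.
from pathlib import Path
import jupytext

SRC = Path("src")
MODULES = Path("modules")                     # gitignored, regenerated on demand
MODULES.mkdir(exist_ok=True)

for py_file in sorted(SRC.glob("*_*.py")):    # e.g. 01_tensor.py
    notebook = jupytext.read(str(py_file))    # parse percent-format source
    jupytext.write(notebook, str(MODULES / f"{py_file.stem}.ipynb"))
```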
Command Updates:
- Updated export command to read from src/ and generate to modules/
- Export flow: discovers modules in src/, converts to notebooks in modules/, exports to tinytorch/
- All 20 modules tested and working
Configuration:
- Updated .gitignore to ignore modules/ directory
- Updated README.md with new three-layer architecture explanation
- Updated export.py source mappings and paths
Benefits:
- Clean separation: developers edit Python, learners use notebooks
- Better version control: only Python source committed, notebooks generated
- Flexible learning: can work in notebooks OR Python source
- Maintains backward compatibility: tinytorch package unchanged
Tested:
- Single module export: tito export 01_tensor ✅
- All modules export: tito export --all ✅
- Package imports: from tinytorch.core.tensor import Tensor ✅
- 20/20 modules successfully converted and exported
- Fix README.md: Replace broken references to non-existent files
  - Remove STUDENT_VERSION_TOOLING.md references (file does not exist)
  - Remove .claude/ directory references (internal development files)
  - Remove book/ directory references (does not exist)
- Update instructor documentation links to point to existing files
  - Point to INSTRUCTOR.md, TA_GUIDE.md, and docs/ for resources
- Fix paper.tex: Update instructor resources list
  - Replace non-existent MAINTENANCE.md with TA_GUIDE.md
  - Maintenance commitment details remain in paragraph text
- All referenced files now exist in repository
All documentation links now point to actual files in the repository
The itemize environment parameters [leftmargin=*, itemsep=1pt, parsep=0pt]
were appearing as visible text in the PDF because the enumitem package
wasn't loaded. This fix adds \usepackage{enumitem} to the preamble.
All itemized lists now format correctly with proper spacing and margins.
- Add alpha release badge above pip install command on intro page
- Update README status badge to reflect alpha status
- Clarify that package is in active development
Documentation updates across the codebase:
Root documentation:
- README.md: Updated references from book/ to site/
- CONTRIBUTING.md: Updated build and workflow instructions
- .shared-ai-rules.md: Updated AI assistant rules for new structure
GitHub configuration:
- Issue templates updated for new module locations
- Workflow references updated from book/ to site/
docs/ updates:
- STUDENT_QUICKSTART.md: New paths and structure
- module-rules.md: Updated module development guidelines
- NBGrader documentation: Updated for module restructuring
- Archive documentation: Updated references
Module documentation:
- modules/17_memoization/README.md: Updated after reordering
All documentation now correctly references:
- site/ instead of book/
- modules/XX_name/ instead of modules/source/
Created unified setup-environment.sh script that:
- Detects Apple Silicon and creates arm64-optimized venv
- Handles all dependencies automatically
- Creates activation helper with architecture awareness
- Works across macOS (Intel/Apple Silicon), Linux, Windows
Updated all documentation to use ONE setup command:
- README.md: Updated Quick Start
- docs/STUDENT_QUICKSTART.md: Updated Getting Started
- book/quickstart-guide.md: Updated 2-Minute Setup
Enhanced tito setup command with:
- Apple Silicon detection (checks for Rosetta vs native)
- Automatic arm64 enforcement when on Apple Silicon
- Architecture verification after venv creation
- Changed venv path from tinytorch-env to standard .venv
Students now have ONE clear path: ./setup-environment.sh
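The Apple Silicon and Rosetta check described above could look roughly like this; a sketch only, not the actual tito setup code:

```python
# Sketch of Apple Silicon / Rosetta detection; the real `tito setup` may differ.
import platform
import subprocess


def apple_silicon_status() -> str:
    """Return 'native', 'rosetta', or 'other'."""
    if platform.system() != "Darwin":
        return "other"
    if platform.machine() == "arm64":
        return "native"                       # arm64 Python on Apple Silicon
    # An x86_64 Python on Apple Silicon runs under Rosetta; macOS reports
    # translation status via sysctl.
    result = subprocess.run(
        ["sysctl", "-n", "sysctl.proc_translated"],
        capture_output=True, text=True, check=False,
    )
    return "rosetta" if result.stdout.strip() == "1" else "other"


if __name__ == "__main__":
    if apple_silicon_status() == "rosetta":
        print("Running under Rosetta: recreate .venv with a native arm64 Python.")
```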
- Add last commit badge to show project is actively maintained
- Add commit activity badge to show consistent development
- Add GitHub stars badge for social proof
- Add contributors badge to highlight collaboration
- Updated main README to prominently feature historical milestones (1957-2024)
- Added new 'Journey Through ML History' section to book navigation
- Created comprehensive milestones-overview.md chapter explaining the progression
- Updated intro.md with milestone achievements section
- Enhanced quickstart-guide.md with milestone unlock information
- Reflects working milestones/ directory structure with 6 historical demonstrations
- Clear progression: Perceptron (1957) → XOR (1969) → MLP (1986) → CNN (1998) → Transformers (2017) → Systems (2024)
- Emphasizes proof-of-mastery approach with real achievements
MLOps Module Removal:
- Remove deleted Module 21 (MLOps) from all documentation
- Update TOC to end at Module 20 (Benchmarking)
- Fix references in intro.md and README.md
- Clean up learning timeline to reflect 20-module structure
TinyMLPerf Leaderboard Addition:
- Create comprehensive leaderboard placeholder page at /leaderboard
- Detail competition categories: MLP Sprint, CNN Marathon, Transformer Decathlon
- Outline benchmark specifications and fair competition guidelines
- Reference future tinytorch.org/leaderboard domain
- Add leaderboard to main navigation under Resources & Tools
- Update README to point to leaderboard page
The website now accurately represents our 20-module curriculum
without premature MLOps references and includes exciting
competition framework for student engagement.
Major improvements:
- Fix module ordering to match actual 20-module progression (01-20 + MLOps)
- Clarify DataLoader as generic batching tool (not just CIFAR-10)
- Add work-in-progress banner with compelling 'Why TinyTorch?' message
- Add TinyMLPerf competition and leaderboard section
- Remove premature industry feedback section
- Acknowledge other TinyTorch/MiniTorch projects
- Simplify additional resources section
- Update Mermaid diagram to show DataLoader correctly
- Ensure git URL points to mlsysbook/TinyTorch
The website now accurately reflects our 20-module structure with proper
categorization and professional presentation ready for Spring 2025 launch.
- Update README and website to be more professional while staying welcoming
- Remove excessive emojis from headers and tables
- Keep strategic emoji usage for emphasis (checkmarks, warnings)
- Clean up module tables and section headers
- Update Mermaid diagrams to be cleaner
- Fix module count (20 not 16) and accuracy claims (75%+ CIFAR-10)
- Strengthen ML Systems engineering messaging throughout
- Update milestone examples with correct historical references
- Maintain accessibility and professional tone
- Part I: Foundations (Modules 1-5) - Build MLPs, solve XOR
- Part II: Computer Vision (Modules 6-11) - Build CNNs, classify CIFAR-10
- Part III: Language Models (Modules 12-17) - Build transformers, generate text
Key changes:
- Renamed 05_dense to 05_networks for clarity
- Moved 08_dataloader to 07_dataloader (swap with attention)
- Moved 07_attention to 13_attention (Part III)
- Renamed 12_compression to 16_regularization
- Created placeholder dirs for new language modules (12,14,15,17)
- Moved old modules 13-16 to temp_holding for content migration
- Updated README with three-part structure
- Added comprehensive documentation in docs/three-part-structure.md
This structure gives students three natural exit points with concrete achievements at each level.
Major changes:
- Moved TinyGPT from Module 16 to examples/tinygpt (capstone demo)
- Fixed Module 10 (optimizers) and Module 11 (training) bugs
- All 16 modules now passing tests (100% health)
- Added comprehensive testing with 'tito test --comprehensive'
- Renamed example files for clarity (train_xor_network.py, etc.)
- Created working TinyGPT example structure
- Updated documentation to reflect 15 core modules + examples
- Added KISS principle and testing framework documentation
- Add MIT License with academic use notice and citation info
- Create comprehensive CONTRIBUTING.md with educational focus
- Emphasize systems thinking and pedagogical value
- Include mandatory git workflow standards from CLAUDE.md
- Restore proper file references in README.md
Repository now has complete contribution guidelines and licensing!
- Fix testing section with accurate demo/checkpoint counts (9 demos, 16 checkpoints)
- Update documentation links to point to existing files
- Remove references to missing CONTRIBUTING.md and LICENSE files
- Add reference to comprehensive test suite structure
- Point to actual documentation files in docs/ directory
- Ensure all claims match current reality
README now accurately reflects the actual TinyTorch structure!
- Update EXAMPLES mapping in tito to use new exciting names
- Add prominent examples section to main README
- Show clear progression: Module 05 → xornet, Module 11 → cifar10
- Update accuracy claims to realistic 57% (not aspirational 75%)
- Emphasize that examples are unlocked after module completion
- Connect examples to the learning journey
Students now understand when they can run exciting examples!
- Streamlined from 970 to 175 lines for clarity
- Focused on key information developers need
- Clear quick start instructions
- Concise module overview table
- Removed redundant FAQ section
- Simplified examples to essentials
- Better visual hierarchy with sections
- Professional badge presentation
- Maintained all critical information
The README is now more scannable and GitHub-friendly while
preserving the educational value and project overview.
- Corrected module dependencies based on actual YAML files
- Fixed diagram to show accurate prerequisite relationships:
  - Tensor directly enables both Activations and Autograd
  - DataLoader depends directly on Tensor (not through Spatial)
  - Training depends on Dense, Spatial, Attention, Optimizers, and DataLoader
  - TinyGPT depends on Attention, Optimizers, and Training
- Added sphinxcontrib-mermaid to requirements for diagram rendering
- Updated both intro.md and README.md with corrected diagrams
- Ensured mermaid extension is configured in _config.yml
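Expressed as data, the corrected prerequisites amount to roughly the following; the per-module YAML files remain the source of truth:

```python
# Corrected prerequisite graph as described above (module YAML files are
# authoritative; this is an illustrative summary).
MODULE_PREREQS = {
    "activations": ["tensor"],
    "autograd":    ["tensor"],
    "dataloader":  ["tensor"],    # direct dependency, not routed through spatial
    "training":    ["dense", "spatial", "attention", "optimizers", "dataloader"],
    "tinygpt":     ["attention", "optimizers", "training"],
}
```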
- Add Harvard University badge and attribution
- Document professional academic design improvements
- Update quick start with virtual environment setup
- Add Jupyter Book website information
- Include instructor grading workflow with NBGrader
- Add prerequisites and learning resources section
- Update contributing and support information
- Add citation format for academic use
- Reflect 95% component reuse for TinyGPT
- Clean title format (TinyTorch with fire emoji)
Major Educational Framework Enhancements:
• Deploy interactive NBGrader text response questions across ALL modules
• Replace passive question lists with active 150-300 word student responses
• Enable comprehensive ML Systems learning assessment and grading
TinyGPT Integration (Module 16):
• Complete TinyGPT implementation showing 70% component reuse from TinyTorch
• Demonstrates vision-to-language framework generalization principles
• Full transformer architecture with attention, tokenization, and generation
• Shakespeare demo showing autoregressive text generation capabilities
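The Shakespeare demo's autoregressive generation follows the standard greedy-decoding pattern sketched below; model and tokenizer are hypothetical stand-ins, not the actual TinyGPT API:

```python
# Generic greedy autoregressive loop; `model` and `tokenizer` are hypothetical
# stand-ins, not the actual TinyGPT interfaces.
def greedy_generate(model, tokenizer, prompt, max_new_tokens=100):
    tokens = tokenizer.encode(prompt)
    for _ in range(max_new_tokens):
        logits = model(tokens)                 # per-position next-token scores
        last = logits[-1]                      # scores for the next token
        next_token = max(range(len(last)), key=last.__getitem__)
        tokens.append(next_token)              # feed the prediction back in
    return tokenizer.decode(tokens)
```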
Module Structure Standardization:
• Fix section ordering across all modules: Tests → Questions → Summary
• Ensure Module Summary is always the final section for consistency
• Standardize comprehensive testing patterns before educational content
Interactive Question Implementation:
• 3 focused questions per module replacing 10-15 passive questions
• NBGrader integration with manual grading workflow for text responses
• Questions target ML Systems thinking: scaling, deployment, optimization
• Cumulative knowledge building across the 16-module progression
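In raw notebook JSON, a manually graded text-response cell of this kind carries nbgrader metadata along these lines (grade_id, points, and the prompt text are placeholders):

```python
# Sketch of nbgrader metadata for a manually graded written-response cell;
# grade_id, points, and the prompt are placeholders.
question_cell = {
    "cell_type": "markdown",
    "metadata": {
        "nbgrader": {
            "grade": True,         # instructor assigns points manually
            "solution": True,      # students write their answer in this cell
            "task": False,
            "locked": False,
            "grade_id": "q1-systems-scaling",   # placeholder id
            "points": 5,                        # placeholder point value
            "schema_version": 3,
        }
    },
    "source": "YOUR ANSWER HERE (150-300 words)",
}
```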
Technical Infrastructure:
• TPM agent for coordinated multi-agent development workflows
• Enhanced documentation with pedagogical design principles
• Updated book structure to include TinyGPT as capstone demonstration
• Comprehensive QA validation of all module structures
Framework Design Insights:
• Mathematical unity: Dense layers power both vision and language models
• Attention as key innovation for sequential relationship modeling
• Production-ready patterns: training loops, optimization, evaluation
• System-level thinking: memory, performance, scaling considerations
Educational Impact:
• Transform passive learning to active engagement through written responses
• Enable instructors to assess deep ML Systems understanding
• Provide clear progression from foundations to complete language models
• Demonstrate real-world framework design principles and trade-offs
* Update README.md to lead with ML Systems value proposition
- Lead with "Build ML Systems From First Principles"
- Emphasize systems understanding through implementation
- Add learning path progression to TinyGPT
- Make MLSys book connection secondary/optional
- Focus on memory analysis, compute patterns, bottlenecks
* Update CLAUDE.md agent instructions for ML Systems focus
- Module Developer: Must include ML Systems analysis in every module
- Documentation Publisher: Must add systems insights sections
- QA Agent: Must test performance characteristics, not just correctness
- Add principle: "Every module teaches systems thinking through implementation"
- Require memory profiling, complexity analysis, scaling behavior
- Mandate production context and hardware implications
* Key positioning changes:
- TinyTorch = ML SYSTEMS course, not just ML algorithms
- Understanding comes through building complete systems
- Every implementation teaches memory, performance, scaling
- Bridge academic rigor with production engineering reality
This repositions TinyTorch as the definitive hands-on ML Systems engineering course.
- Add comprehensive README section showcasing 75% accuracy goal
- Update dataloader module README with CIFAR-10 support details
- Update training module README with checkpointing features
- Create complete CIFAR-10 training guide for students
- Document all north star implementations in CLAUDE.md
Students can now train real CNNs on CIFAR-10 using 100% TinyTorch code.
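The training loop the guide walks students through has the usual shape sketched below; every name and signature here is a hypothetical stand-in rather than the actual TinyTorch API:

```python
# Shape of the CIFAR-10 training loop described in the guide; all names and
# signatures are hypothetical stand-ins, not the actual TinyTorch API.
def train_one_epoch(model, loader, loss_fn, optimizer):
    for images, labels in loader:       # batches from the CIFAR-10 DataLoader
        predictions = model(images)
        loss = loss_fn(predictions, labels)
        loss.backward()                 # backprop through the autograd engine
        optimizer.step()
        optimizer.zero_grad()
    return loss                         # last batch's loss, for logging
```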
- Update Quick Start to show clear 3-step progression: Setup → Module 0 → Module 1
- Restructure module listing to highlight "START HERE!" for Module 0
- Add explicit "Module Progression" showing 0 → 1-16 flow
- Expand Module 0 description with bullet points about what users will explore
- Make it crystal clear that everyone should begin with Module 0 (Introduction)
The introduction module builds crucial system understanding up front,
ensuring users grasp the architecture and dependencies before they start implementing.
- Add virtual environment requirements and standards to CLAUDE.md
- Update README.md with new 00_introduction module overview
- Include visual system architecture and dependency analysis features
- Document proper development environment setup requirements
- Add troubleshooting guidance for environment issues
📖 Enhanced Visual Design:
- Wrapped entire FAQ content in blockquotes (>) for consistent grey background
- All bullet points, headers, and content now have improved readability
- Code blocks within blockquotes maintain proper formatting
- Consistent visual styling across all 8 FAQ entries
✨ User Experience Benefits:
- Grey background makes content much easier to read when expanded
- Better visual separation from surrounding text
- Professional appearance with improved contrast
- Reduces eye strain and improves content scanning
🎯 Technical Implementation:
- Added > prefix to all content lines within FAQ answers
- Maintained proper markdown formatting for headers, lists, and code
- Preserved existing structure while enhancing visual presentation
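The mechanical part of the change, prefixing each content line with "> ", could be scripted as below; this is an illustration only, since the commit does not say how the prefixes were added:

```python
# Illustration: wrap FAQ answer text in a markdown blockquote by prefixing
# every line with "> " (the commit does not say how this was actually done).
def blockquote(text: str) -> str:
    return "\n".join(
        f"> {line}" if line.strip() else ">"   # keep blank lines inside the quote
        for line in text.splitlines()
    )
```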
Result: FAQ dropdowns now have beautiful, consistent grey styling
that makes expanded content significantly easier to read and scan.
📱 New Access Method:
- Added Binder badge linking to mybinder.org launch
- Users can now run TinyTorch directly in browser without local setup
- Links to main branch: mybinder.org/v2/gh/MLSysBook/TinyTorch/main
🎯 User Experience Benefits:
- Zero-installation access for quick exploration
- Perfect for workshops, demos, and trying before installing
- Complements existing Jupyter Book documentation
- Positioned logically between Python and Jupyter Book badges
Result: Users now have multiple ways to engage with TinyTorch -
local installation, online documentation, and live interactive environment.
✨ Tone Improvements:
- Removed dismissive 'build toys' language about other tutorials
- Reframed as 'isolated components vs integrated systems' approach
- Much more respectful to other educators and learning resources
🏗️ Better Systems Engineering Analogy:
- Added compiler/OS analogy to explain systems thinking
- Helps readers understand why building integrated systems matters
- Concrete example: 'like understanding how every part of a compiler interacts'
📊 Enhanced Comparison:
- Updated comparison table to be more constructive
- Focus on 'Component vs Systems Approach' rather than dismissive contrasts
- Emphasizes integration and how everything connects
🎯 Educational Value:
- Explains WHY systems engineering matters without putting down alternatives
- Shows TinyTorch's unique value through positive comparison
- Maintains respectful tone while highlighting differentiating approach
Result: FAQ now educates about systems thinking benefits without
disrespecting other valuable learning resources. Much more professional
and constructive messaging.