Major changes:
- Moved TinyGPT from Module 16 to examples/tinygpt (capstone demo)
- Fixed Module 10 (optimizers) and Module 11 (training) bugs
- All 16 modules now passing tests (100% health)
- Added comprehensive testing with `tito test --comprehensive`
- Renamed example files for clarity (train_xor_network.py, etc.)
- Created working TinyGPT example structure
- Updated documentation to reflect 15 core modules + examples
- Added KISS principle and testing framework documentation
- Update intro.md to show realistic 57.2% CIFAR-10 accuracy
- Replace aspirational 75% compression claims with actual achievements
- Highlight 100% XOR accuracy milestone
- Clean up milestone examples to match new directory structure
- Remove outdated example references from milestones
Website documentation now accurately reflects TinyTorch capabilities!
- Corrected module dependencies based on actual YAML files
- Fixed diagram to show accurate prerequisite relationships:
  - Tensor directly enables both Activations and Autograd
  - DataLoader depends directly on Tensor (not through Spatial)
  - Training depends on Dense, Spatial, Attention, Optimizers, and DataLoader
  - TinyGPT depends on Attention, Optimizers, and Training
- Added sphinxcontrib-mermaid to requirements for diagram rendering
- Updated both intro.md and README.md with corrected diagrams
- Ensured mermaid extension is configured in _config.yml
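The corrected prerequisite edges above can be sketched as a small dependency map; only the relationships named in the bullets are included, and the YAML files in the repo remain the source of truth:

```python
# Sketch of the corrected prerequisite map described above.
# Only the edges named in the commit bullets are included.
prerequisites = {
    "Activations": ["Tensor"],
    "Autograd": ["Tensor"],
    "DataLoader": ["Tensor"],
    "Training": ["Dense", "Spatial", "Attention", "Optimizers", "DataLoader"],
    "TinyGPT": ["Attention", "Optimizers", "Training"],
}

def direct_prereqs(module: str) -> list:
    """Return the direct prerequisites recorded for a module."""
    return prerequisites.get(module, [])
```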
- Remove excessive emojis while maintaining strategic usage
- Update CSS with academic typography (Source Sans Pro, Source Serif Pro)
- Professional color scheme with academic blues (#2c3e50, #3498db)
- Clean navigation without emoji decorations
- Enhanced visual hierarchy with professional spacing
- University-level styling consistent with Harvard standards
- Maintained pedagogical effectiveness and engagement
- Improved readability with clean, accessible design
- Professional tone throughout all content
- Academic credibility without sacrificing approachability
- Replace ugly gray background with clean white theme
- Add proper logo styling and configuration
- Update book chapters from module READMEs
- Add educational-ml-docs-architect agent
- Clean up custom CSS for better readability
- Configure logo.png in correct location
- Update tito book command with proper chapters
Major Educational Framework Enhancements:
• Deploy interactive NBGrader text response questions across ALL modules
• Replace passive question lists with active 150-300 word student responses
• Enable comprehensive ML Systems learning assessment and grading
TinyGPT Integration (Module 16):
• Complete TinyGPT implementation showing 70% component reuse from TinyTorch
• Demonstrates vision-to-language framework generalization principles
• Full transformer architecture with attention, tokenization, and generation
• Shakespeare demo showing autoregressive text generation capabilities
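Autoregressive generation of the kind the Shakespeare demo shows reduces to a loop like this sketch; `model` and `tokenizer` are hypothetical stand-ins, not the actual TinyGPT API:

```python
def generate(model, tokenizer, prompt: str, max_new_tokens: int = 50) -> str:
    """Autoregressive decoding sketch: encode the prompt, repeatedly
    append the model's next-token prediction, then decode.
    'model' and 'tokenizer' are stand-ins for the real components."""
    tokens = tokenizer.encode(prompt)
    for _ in range(max_new_tokens):
        next_token = model.predict_next(tokens)  # greedy pick for simplicity
        tokens.append(next_token)
    return tokenizer.decode(tokens)
```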
Module Structure Standardization:
• Fix section ordering across all modules: Tests → Questions → Summary
• Ensure Module Summary is always the final section for consistency
• Standardize comprehensive testing patterns before educational content
Interactive Question Implementation:
• 3 focused questions per module replacing 10-15 passive questions
• NBGrader integration with manual grading workflow for text responses
• Questions target ML Systems thinking: scaling, deployment, optimization
• Cumulative knowledge building across the 16-module progression
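A manually graded text-response cell of the kind described above carries NBGrader metadata roughly like the sketch below (shown as the cell dict nbformat would hold; the `grade_id` and point value are illustrative placeholders):

```python
# Sketch of an NBGrader manually graded text-response cell.
# "q1-scaling" and the point value are placeholders, not real IDs.
response_cell = {
    "cell_type": "markdown",
    "source": "YOUR ANSWER HERE (150-300 words)",
    "metadata": {
        "nbgrader": {
            "grade": True,      # instructor assigns points manually
            "solution": True,   # student edits this cell
            "locked": False,
            "grade_id": "q1-scaling",
            "points": 5,
            "schema_version": 3,
        }
    },
}
```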
Technical Infrastructure:
• TPM agent for coordinated multi-agent development workflows
• Enhanced documentation with pedagogical design principles
• Updated book structure to include TinyGPT as capstone demonstration
• Comprehensive QA validation of all module structures
Framework Design Insights:
• Mathematical unity: Dense layers power both vision and language models
• Attention as key innovation for sequential relationship modeling
• Production-ready patterns: training loops, optimization, evaluation
• System-level thinking: memory, performance, scaling considerations
Educational Impact:
• Transform passive learning to active engagement through written responses
• Enable instructors to assess deep ML Systems understanding
• Provide clear progression from foundations to complete language models
• Demonstrate real-world framework design principles and trade-offs
Major changes:
- Renamed entire system from "milestone" to "checkpoint" for academic framing
- Checkpoints are now positioned as academic progress markers in learning journey
- Implemented enhanced Rich CLI timeline with progress bars and connecting lines
- Added overall progress tracking (16/16 modules = 100%)
Enhanced timeline visualization:
- Horizontal view shows progress bar with filled/unfilled segments
- Visual connecting lines between checkpoints showing completion status
- Color-coded progress: green (complete), yellow (in-progress), dim (future)
- Percentage indicators for each checkpoint and overall progress
CLI improvements:
- `tito checkpoint status` - Shows overall and per-checkpoint progress
- `tito checkpoint timeline --horizontal` - Rich visual progress line
- `tito checkpoint timeline` - Vertical tree view with module details
- Better progress indicators with filled bars and connecting lines
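The filled/unfilled bar described above can be rendered with something as simple as this sketch (plain Python; the actual CLI uses Rich, and the glyphs and width are assumptions):

```python
def render_progress(completed: int, total: int, width: int = 16) -> str:
    """Render a checkpoint progress bar like '████████░░░░░░░░ 50%'.
    Glyphs and width are illustrative choices."""
    filled = round(width * completed / total)
    bar = "█" * filled + "░" * (width - filled)
    pct = round(100 * completed / total)
    return f"{bar} {pct}%"
```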
Documentation updates:
- Renamed milestone-system.md to checkpoint-system.md
- Updated all references from milestone to checkpoint terminology
- Emphasized academic checkpoint philosophy and progress markers
- Added descriptions of new Rich CLI visualizations
Benefits:
- More academic framing aligns with educational context
- Visual progress bars provide immediate feedback on learning journey
- Checkpoint terminology is more familiar to students
- Rich CLI visualizations make progress tracking engaging
Features implemented:
- Complete milestone tracking system with Foundation → Architecture → Training → Inference → Serving progression
- Rich CLI visualization with status, timeline (horizontal/vertical), and progress tracking
- Ticker-based granular progress within each milestone showing module completion
- Comprehensive documentation explaining the pedagogical approach and system benefits
- Integration with existing tito CLI infrastructure and module detection
Key capabilities:
- `tito milestone status` - shows current progress and capabilities unlocked
- `tito milestone timeline` - visual progress timeline with multiple views
- `tito milestone test/unlock` - placeholder for future capability testing
- Automatic module detection and progress calculation
- Clear capability statements for each milestone achievement
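"Automatic module detection and progress calculation" could work roughly like this sketch, which scans numbered module directories for a completion marker (the directory layout and the `.tests_passed` stamp file are assumptions, not the real tito internals):

```python
from pathlib import Path

def detect_progress(modules_dir: str, total: int = 16) -> float:
    """Count numbered module directories containing a completion marker
    (assumed here to be a '.tests_passed' stamp) and return a percentage."""
    root = Path(modules_dir)
    done = sum(
        1 for d in sorted(root.glob("[0-9][0-9]_*"))
        if (d / ".tests_passed").exists()
    )
    return 100 * done / total
```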
Benefits:
- Transforms learning from "completing modules" to "building capabilities"
- Provides clear motivation through visual progress and capability unlocks
- Aligns with real ML engineering workflow: Foundation → Architecture → Training → Inference → Serving
- Gives students concrete sense of progress toward complete ML framework
- Moved Introduction to "Course Orientation" section (no longer Module 0)
- Renumbered all modules: Setup becomes Module 0, course now has 16 modules
- Updated table of contents to separate orientation from formal course modules
- Updated intro.md and vision.md to reflect 16 modules instead of 17
- Course now starts immediately with hands-on implementation (Setup)
- Maintains Build→Use→Reflect philosophy by removing non-implementation module
- Introduction remains accessible as orientation material without being numbered module
- Enhanced book/intro.md with comprehensive ML systems vision sections including "Our Vision", "Systems-First Thinking", "Beyond Code: Systems Intuition", and expanded "Who This Is For"
- Created book/vision.md with complete educational philosophy explaining the problem TinyTorch solves, systems thinking approach, target audience, and learning outcomes
- Updated book/_toc.yml to include vision document in Additional Resources section
- Content emphasizes training ML systems engineers vs ML users, focusing on memory management, performance analysis, and production trade-offs
- Maintains existing structure for NBGrader compatibility while clearly communicating educational vision to students
✨ Title Formatting:
- Split title into main header and subtitle for better readability
- Enhanced visual hierarchy in book introduction
🚀 Content Updates:
- Changed 'rocket ship' to 'AI rocket ship' for more specific branding
- Added '(Harvard)' to Prof. Vijay Janapa Reddi reference for clarity
- Maintains professional attribution while being more informative
Result: Cleaner book intro formatting with improved readability and attribution.
- Address math anxiety: explain math learning approach
- Address validation fears: highlight testing and feedback
- Address flexibility concerns: explain module dependencies
- Address toy project skepticism: emphasize real data and results
- Focus on actual questions students ask vs generic course info
- 4 key questions for students already interested in the course
- Focus on practical learning concerns vs skepticism
- Shorter than the GitHub FAQ: appropriate for committed learners
- Covers time investment, skill level, support, modern relevance
- Changed from ambitious app development (computer vision, NLP, etc.) to realistic framework engineering
- New focus areas: performance optimization, algorithm extensions, systems engineering, benchmarking analysis, developer tools
- Projects now align with what students actually built: a complete ML framework
- Emphasizes systems engineering and optimization skills rather than application development
- Maintains 'no PyTorch imports' constraint to prove deep framework understanding
- Added 'Complete System Integration' section emphasizing how all 14 modules connect
- Highlighted that students build ONE cohesive ML framework, not isolated exercises
- Added capstone project section encouraging real applications using only TinyTorch
- Updated README.md 'What You'll Build' to emphasize system integration
- Added visual flow diagram showing module dependencies and connections
- Emphasized 'no PyTorch imports' constraint to prove framework completeness
Key additions:
- og:title, og:description, og:url, og:type, og:image for Open Graph
- twitter:card, twitter:title, twitter:description, twitter:image for Twitter
- Uses astronaut/rocket ship tagline for memorable social sharing
- Proper property/name attributes for platform compatibility
This will enable rich previews when sharing TinyTorch links in Slack, Twitter, etc.
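Concretely, the tags look like the fragment below. Note that Open Graph uses the `property` attribute while Twitter cards use `name` (the point of the property/name bullet above); all content values here are illustrative placeholders:

```html
<!-- Illustrative values; only the property/name keys come from the commit -->
<meta property="og:title" content="TinyTorch" />
<meta property="og:description" content="Build your own ML framework." />
<meta property="og:url" content="https://example.org/tinytorch/" />
<meta property="og:type" content="website" />
<meta property="og:image" content="https://example.org/tinytorch/card.png" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:title" content="TinyTorch" />
<meta name="twitter:description" content="Build your own ML framework." />
<meta name="twitter:image" content="https://example.org/tinytorch/card.png" />
```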
Updates the introduction with additional motivational context and a clearer explanation of TinyTorch's purpose.
Emphasizes the hands-on learning approach and the benefits of building ML frameworks from scratch.
Replaces a sentence with an analogy to enhance the message's impact.
- Added 'Prof. Vijay Janapa Reddi (Harvard University)' right after title
- Positioned prominently for proper academic/course attribution
- Matches book config author field for consistency
- Standard practice for educational materials and courses
- Removed redundant 'How This Works' section (covered by Learning Philosophy)
- Removed academic jargon sentence about educational framework
- Cleaned up all em dashes, hyphens, and arrows per user preference
- Changed 'Build → Use → Master' to 'Build, Use, Master'
- Result: Much cleaner, more direct presentation
Key improvements:
- Moved educational framework positioning up front where visitors need it
- Blended 'Science vs Engineering' into more natural 'Core Difference'
- Removed defensive 'Our unique contribution' language
- Changed 'What Makes Different' to conversational 'How This Works'
- Removed bullet points for more natural paragraph flow
- Simplified acknowledgments without academic defensiveness
Result: Much more welcoming and confident presentation
- Added complementary learning reference to mlsysbook.ai
- Positioned as comprehensive systems knowledge companion
- TinyTorch = build systems, ML Systems book = systems context
- Perfect educational pairing for complete ML systems understanding
Updated the course journey section to match the exact navigation structure:
- Foundation: Setup, Tensors, Activations
- Building Blocks: Layers, Networks, CNNs
- Training Systems: DataLoader, Autograd, Optimizers, Training
- Production & Performance: Compression, Kernels, Benchmarking, MLOps
Changes:
- Cleaner bullet format with • separators
- Concise descriptions for each section
- Exact alignment with site navigation
- More scannable and consistent layout
Result: Perfect consistency between landing page and navigation structure.
Changes:
- Replaced em dashes (—) with simpler punctuation
- Used colons (:) for explanatory clauses
- Used periods (.) for sentence breaks
- Removed unnecessary punctuation complexity
Result: Cleaner, more readable text that flows better without distracting typography.
Changed main tagline from:
'Most ML education teaches you to use frameworks. TinyTorch teaches you to understand them.'
To:
'Most ML education teaches you to use frameworks. TinyTorch teaches you to build them.'
Rationale:
- 'Understand' is vague and passive
- 'Build' is concrete and action-oriented
- Aligns perfectly with engineering focus we just established
- Reinforces the hands-on, construction-based learning approach
- More compelling for engineering-minded learners
Updated in both README.md and book/intro.md for consistency.
Key improvement:
- Replaced 'Learning Opportunity' with 'Science vs Engineering' framing
- Clearly positions TinyTorch as ML engineering education vs traditional ML science
- Uses ⚖️ emoji to reinforce the comparison concept
- Bold formatting on key terms: **science** vs **engineering**
- Creates stronger identity formation: 'I want to be an ML engineer'
- Differentiates from theory-heavy courses with concrete value proposition
Result: Transforms value prop from 'better learning' to 'different career path': much more compelling positioning for engineering-minded learners.
Key improvements:
- Added 'Learning Opportunity' section with positive framing
- Expanded 'What Makes TinyTorch Different' with concrete examples
- Enhanced learning philosophy with complete example cycle
- Moved CTA section lower after building value and understanding
- Added more substance to each section while maintaining scannability
- Improved course journey descriptions with more detail
- Better flow: Hook → Opportunity → Difference → Philosophy → Journey → CTA
- Maintained positive tone without putting other approaches down
Result: More substantial content that builds desire before asking for action.
- Add 'The Big Picture: Why Build from Scratch?' section at top
- Include 'What Makes TinyTorch Different' with 4 key differentiators
- Match the new big-picture-first structure from root README
- Maintain all existing content but improve hierarchy
- Ensure book and README stay consistent
- Updated title to match new tagline format
- Added humble educational foundation section referencing CS249r course
- Confirmed result-oriented 'What You'll Achieve' section works well
- All branding now consistent across book and documentation
- Clean author attribution without unnecessary copyright notices
- Updated all module references to start from 01 instead of 00
- Changed tagline to 'Build your own ML framework. Start small. Go deep.'
- Added educational foundation section linking to ML Systems book
- Updated README, documentation, CLI examples, and prerequisites
- Regenerated book content with consistent numbering throughout
- Maintains 14 modules total but with natural numbering (01-14)
✅ Rename all module directories: 00_setup → 01_setup, etc.
✅ Update convert_modules.py mappings for new directory names
✅ Update _toc.yml file paths and titles (1-14 instead of 0-13)
✅ Regenerate all overview pages with new numbering
✅ Fix all broken references in usage-paths and intro
✅ Update chapter references to use natural numbering
Benefits:
- More intuitive course progression starting from 1
- Matches academic course numbering conventions
- Eliminates confusion about 'Module 0' concept
- Cleaner mental model for students and instructors
- All references and links properly updated
Complete transformation: 14 modules now numbered 01-14
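The renumbering amounts to a shift-by-one rename of the numbered directories; a sketch of the idea (the real work lives in convert_modules.py, whose details are not shown here):

```python
from pathlib import Path

def renumbered(name: str) -> str:
    """Map '00_setup' -> '01_setup', '13_mlops' -> '14_mlops'."""
    num, _, rest = name.partition("_")
    return f"{int(num) + 1:02d}_{rest}"

def renumber_modules(root: str) -> None:
    """Rename numbered module directories, highest number first so a
    rename target never collides with a directory not yet renamed."""
    for d in sorted(Path(root).glob("[0-9][0-9]_*"), reverse=True):
        d.rename(d.with_name(renumbered(d.name)))
```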
✅ Show actual implementation code instead of vague descriptions
✅ Contrast 'import torch' with 'class Tensor:' implementations
✅ Display real function definitions students will write
✅ Make clear students build every component from scratch
Changes:
- Replace vague 'Build your own tensors' with 'class Tensor:'
- Show actual method signatures: __add__, backward, forward
- Include concrete loss function: mse_loss implementation
- Display real optimizer logic: param.data -= lr * param.grad
- Change ending: 'I built this!' → 'I implemented every line!'
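Taken together, the snippets named above sketch out as follows; this is a minimal scalar-valued illustration, not the course's full implementations:

```python
class Tensor:
    """Minimal sketch of the Tensor class students implement (scalar-valued)."""
    def __init__(self, data: float, grad: float = 0.0):
        self.data = data
        self.grad = grad

    def __add__(self, other: "Tensor") -> "Tensor":
        return Tensor(self.data + other.data)

def mse_loss(preds: list, targets: list) -> float:
    """Mean squared error over scalar Tensors."""
    errs = [(p.data - t.data) ** 2 for p, t in zip(preds, targets)]
    return sum(errs) / len(errs)

def sgd_step(params: list, lr: float = 0.1) -> None:
    """The optimizer logic quoted above: param.data -= lr * param.grad."""
    for param in params:
        param.data -= lr * param.grad
```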
✅ Remove unnecessary nesting: book/tinytorch-course/ → book/
✅ Update all path references in scripts and workflows
✅ Cleaner development experience with shorter paths
✅ Book builds successfully with simplified structure
Changes:
- Move all book files up one directory level
- Update convert_modules.py paths
- Update GitHub Actions workflow paths
- Update book configuration paths
- Test confirms everything works correctly