Comprehensive summary of all changes made:
- Module reorganization complete
- Chapter updates complete
- All commits made in logical pieces
- Ready for testing and review
Cleanup of renamed files:
- Deleted old module source files (14_kvcaching, 15_profiling, 16_acceleration, etc.)
- Deleted old chapter markdown files
- These have been replaced by reorganized versions in previous commits
- Shows O(n²) latency growth in transformer generation
- Demonstrates problem before teaching solution
- Prepares module for reorganization to Module 15
- Add quick_profile() for simplified profiling interface
- Add analyze_weight_distribution() for compression module
- Both functions will be used by modules 15-18
Changed tier name from 'Intelligence' to 'Architecture' for clarity:
- Updated TOC: 🏛️ Architecture Tier (08-14)
- Updated all tier badges in modules 08-14
- Changed emoji from 🧠 to 🏛️ (building/columns)
Rationale:
- 'Intelligence' was vague and didn't describe content
- 'Architecture' accurately describes what students learn
- Professional terminology used in ML industry
- Clearer pedagogical narrative: different neural architectures
(data infrastructure, CNNs for vision, Transformers for language)
Content remains unchanged, only naming improved.
CLI improvements for better UX:
- Renamed 'tito community submit' to 'tito community share'
- Removed tito/commands/submit.py (moved to module workflow)
- Updated tito/main.py with cleaner command structure
- Removed module workflow commands (start/complete/resume)
- Updated __init__.py exports for CommunityCommand
- Updated _modidx.py with new module exports
Result: Cleaner CLI focused on essential daily workflows and
clear distinction between casual sharing vs formal competition.
Complete capstone competition implementation:
- Two division tracks: Closed (optimize) and Open (innovate)
- Baseline CNN model for CIFAR-10
- Validation and submission generation system
- Integration with Module 19 normalized scoring
- Honor code and GitHub repo submission workflow
- Worked examples and student templates
Module 20 is now a pedagogically sound capstone that applies
all Optimization Tier techniques in a fair competition format.
Enhancements to benchmarking module:
- Added calculate_normalized_scores() for fair hardware comparison
- Implemented speedup, compression ratio, accuracy delta metrics
- Added MLPerf principles section to educational content
- Updated module to support competition fairness
These changes enable Module 20 competition to work across different hardware.
Changes:
- Module 09: 'Spatial' → 'Spatial (CNNs)' in TOC for clarity
- Module 20: 'TinyMLPerf' → 'MLPerfEdu' to avoid confusion
* TinyMLPerf is a real benchmark for edge devices
* MLPerfEdu clearly indicates educational competition
* More accurate descriptor for this capstone
- Fixed 'Performance Tier' → 'Optimization Tier' in Module 20 objectives
Better naming makes the course structure clearer for students.
Changed tier badge text for modules 15-19 to match TOC naming:
- Was: **⚡ PERFORMANCE TIER**
- Now: **⚡ OPTIMIZATION TIER**
Ensures consistency between TOC and chapter badges.
All Foundation Tier modules (01-07) now use consistent formatting:
- Standard tier badge: **🏗️ FOUNDATION TIER** | Difficulty | Time
- Removed HTML divs and Module Info sections
- Clean Overview sections
- Consistent structure across all modules
Fixed Module 04 (Losses) which had wrong content (was about Networks)
KV Caching (Module 14) is about how transformers work efficiently,
not pure performance optimization. Moving it to Intelligence Tier.
Changes:
- Updated TOC: Intelligence Tier now 08-14 (was 08-13)
- Updated TOC: Optimization Tier now 15-19 (was 14-19)
- Changed Module 14 badge from PERFORMANCE to INTELLIGENCE
Cleaned up temporary files created during website standardization work:
- FINAL_STATUS.md, WEBSITE_USER_FEEDBACK.md, WORK_COMPLETE_README.md
- book/CONTENT_IMPROVEMENTS.md
- Tier overview placeholder files (content integrated into TOC structure)
These were working documents and are no longer needed.
Foundation Tier modules updated to final standardized version:
- Consistent YAML frontmatter with all metadata
- FOUNDATION tier badges throughout
- Professional tone with minimal emojis
- Complete learning objectives and systems thinking questions
- Real-world connections to production systems
TOC structure improvements:
- Clean 3-tier organization (Foundation, Intelligence, Performance)
- Proper tier captions and ordering
- All 20 modules properly integrated
- Capstone section clearly marked
- Add complete YAML frontmatter with metadata
- Add CAPSTONE badge with 5-star (Ninja) difficulty
- Standardize to exactly 5 learning objectives
- Implement competition structure with Closed/Open divisions
- Add comprehensive submission guidelines and validation
- Include normalized metrics for fair hardware comparison
- Add honor code and GitHub repo requirements
- Provide example optimizations at different skill levels
- Add Systems Thinking Questions on optimization priorities
- Connect to real MLPerf and industry applications
- Professional tone throughout
- Mark completion of all 20 modules!
- Add complete YAML frontmatter with metadata
- Add PERFORMANCE tier badge
- Standardize to exactly 5 learning objectives
- Implement Build → Use → Optimize pedagogical pattern
- Add Why This Matters with Google/OpenAI production context
- Add comprehensive Implementation Guide with Timer, MemoryProfiler, FLOPCounter
- Add Systems Thinking Questions on Amdahls Law and bottlenecks
- Add Real-World Connections to TPU optimization and inference serving
- Reduce emoji usage for professional tone
- Add clear What's Next navigation to Module 16
- Add complete YAML frontmatter with metadata
- Add PERFORMANCE tier badge (first Performance Tier module)
- Standardize to exactly 5 learning objectives
- Implement Build → Use → Optimize pedagogical pattern
- Add Why This Matters with ChatGPT/Claude production context
- Add historical evolution of caching in transformers
- Add comprehensive Implementation Guide with cache structures and cached attention
- Add Systems Thinking Questions on memory-speed trade-offs
- Add Real-World Connections to conversational AI and code completion
- Reduce emoji usage for professional tone
- Add clear What's Next navigation to Module 15
- Add complete YAML frontmatter with metadata
- Add INTELLIGENCE tier badge
- Standardize to exactly 5 learning objectives
- Implement Build → Use → Analyze pedagogical pattern
- Add Why This Matters with GPT-4/BERT/AlphaFold production context
- Add historical context from RNNs to Transformers revolution
- Add comprehensive Implementation Guide with scaled dot-product and multi-head attention code
- Add Systems Thinking Questions on O(n²) complexity and multi-head benefits
- Add Real-World Connections to LLMs, translation, and vision transformers
- Reduce emoji usage for professional tone
- Add clear What's Next navigation to Module 13
- Add complete YAML frontmatter with metadata
- Add INTELLIGENCE tier badge
- Standardize to exactly 5 learning objectives
- Implement Build → Use → Analyze pedagogical pattern
- Add Why This Matters with GPT-3/BERT production context
- Add historical evolution from Word2Vec to contextual embeddings
- Add comprehensive Implementation Guide with lookup tables and positional encodings
- Add Systems Thinking Questions on memory scaling and sparse gradients
- Add Real-World Connections to LLMs and recommendation systems
- Reduce emoji usage for professional tone
- Add clear What's Next navigation to Module 12
- Add complete YAML frontmatter with metadata
- Add INTELLIGENCE tier badge
- Standardize to exactly 5 learning objectives
- Implement Build → Use → Analyze pedagogical pattern
- Add Why This Matters with OpenAI/Google production context
- Add historical evolution from word-level to BPE
- Add comprehensive Implementation Guide with CharTokenizer and BPE code
- Add Systems Thinking Questions on vocab size vs sequence length trade-offs
- Add Real-World Connections to GPT, BERT, and code models
- Reduce emoji usage for professional tone
- Add clear What's Next navigation to Module 11
- Add complete YAML frontmatter with metadata
- Add INTELLIGENCE tier badge
- Standardize to exactly 5 learning objectives (systems/implementation/patterns/framework/optimization)
- Implement Build → Use → Analyze pedagogical pattern
- Add Why This Matters with production context (Tesla, Meta, medical imaging)
- Add historical context (LeNet to ResNet evolution)
- Add detailed Implementation Guide with Conv2D and pooling code
- Add Systems Thinking Questions on parameter efficiency and hierarchical features
- Add Real-World Connections to autonomous vehicles and medical imaging
- Reduce emoji usage for professional tone
- Add clear What's Next navigation to Module 10
- Complete task breakdown and statistics
- Review checklist for user
- Clear next steps and options
- Quick start commands for review
- Time investment summary
- Comprehensive summary of all improvements
- Quick quality check commands
- Clear next steps and options
- Explanation of design decisions
- Success metrics and statistics
- Analyze all improvements from user perspective
- Assess quality, consistency, and best practices
- Provide recommendations for next steps
- Review emoji reduction and professionalism
- Evaluate commit quality and structure
- Rate overall quality as Excellent (9/10)
- Add tier overview pages at start of each tier
- Update tier captions to be descriptive and professional
- Remove excessive emoji usage from captions
- Fix Performance Tier naming (was Optimization)
- Fix Module 20 title (TinyMLPerf Competition)
- Add leaderboard to Community section
- Create tier-2-intelligence.md (Modules 08-13)
- Create tier-3-performance.md (Modules 14-19)
- Professional tone with clear module roadmaps
- Link to tier milestones and prerequisites
- Consistent structure across all three tier pages
- Add Foundation Tier badge and complete metadata
- Implement complete training loops with validation
- Add checkpointing and metrics tracking
- Explain training dynamics and debugging
- Mark Foundation Tier completion with milestone unlock
- Link to Intelligence Tier (Module 08)