Module 02_tensor now follows the correct pattern learned from layers_dev:
1. ## 🧪 Module Testing (explanation)
2. Standardized testing cell with run_module_tests_auto
3. Actual test functions (test_unit_tensor_creation, test_unit_tensor_properties, test_unit_tensor_arithmetic)
4. ## 🎯 Module Summary
✅ Moved test functions from end of file to proper location after standardized testing
✅ Removed duplicate test functions
✅ Students now see actual test implementations before the summary
✅ run_module_tests_auto will auto-discover and run all tests
Cleaned up duplicate/redundant nbgrader cells that were just comments referencing test functions. The actual test functions remain in their proper location after the standardized testing section.
Removed:
- Duplicate test-personal-info nbgrader cell (just a comment)
- Duplicate test-system-info nbgrader cell (just a comment)
- Redundant 'Inline Test Functions' section
This eliminates confusion and follows the clean pattern established by other modules.
Module 01_setup now follows correct pattern:
1. ## 🧪 Module Testing (explanation)
2. Standardized testing cell with run_module_tests_auto
3. Actual test functions (test_unit_personal_info_basic, test_unit_system_info_basic)
4. ## 🎯 Module Summary
This ensures students see actual test implementations before the summary.
Ensures a consistent testing framework across all TinyTorch modules:
✅ Added standardized testing sections to modules that were missing them:
- 01_setup: Added complete testing section + module summary
- 02_tensor: Added testing section + comprehensive module summary
- 15_mlops: Standardized existing testing section to match convention
✅ All modules now follow the consistent pattern:
1. ## 🧪 Module Testing (markdown explanation)
2. Locked nbgrader cell with standardized-testing ID
3. run_module_tests_auto call to discover and run all tests
4. ## 🎯 Module Summary (educational wrap-up)
✅ Benefits:
- Consistent testing experience across all 16 modules
- Automatic test discovery and execution before module completion
- Standardized educational flow: learn → implement → test → reflect
- Professional testing practices with locked testing framework
✅ Verification: All 16 modules now have both:
- '## 🧪 Module Testing' section ✓
- 'run_module_tests_auto' call ✓
This ensures students always verify their implementations work correctly
before moving to the next module, following TinyTorch's educational philosophy.
- Remove loose test code from nbgrader cells that ran automatically on import
- Keep only proper test_unit_personal_info_basic() and test_unit_system_info_basic() functions
- Prevents tests from running when the module is imported as a package
- Follows established test naming conventions (test_unit_*)
- Improves module reliability and reduces side effects
Fixed issues:
- NBGrader cells now reference test functions instead of running test code directly
- All assertions and test logic properly contained in named test functions
- Module can be imported without automatically executing tests
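The refactor described above — moving loose assertions into named functions so that importing the module has no side effects — can be sketched as follows (function names come from the commit; the bodies are illustrative placeholders, not the real checks):

```python
# Before: assertions sat at module top level and ran on every import.
# After: logic lives in named test_unit_* functions that only run
# when explicitly invoked (e.g. by the auto-discovery test runner).

import platform
import sys

def test_unit_personal_info_basic():
    """Illustrative body; the real test validates student-provided info."""
    info = {"name": "Ada", "email": "ada@example.com"}  # placeholder data
    assert info["name"], "name should be non-empty"
    assert "@" in info["email"], "email should look like an address"

def test_unit_system_info_basic():
    """Illustrative body; the real test inspects the environment."""
    assert sys.version_info >= (3, 8), "Python 3.8+ expected"
    assert platform.system() in {"Linux", "Darwin", "Windows"}

if __name__ == "__main__":
    # Tests run only when the file is executed directly, never on import.
    test_unit_personal_info_basic()
    test_unit_system_info_basic()
    print("all setup tests passed")
```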
- Delete all 15 .ipynb files from modules/source directories
- Align with TinyTorch's Python-first development philosophy
- .py files are the source of truth, .ipynb files are temporary outputs
- Prevents version control conflicts with notebook metadata
- Students work directly with .py files using Jupytext format
- Notebooks can be regenerated when needed via 'tito nbdev generate'
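For context, a `.py` source file in Jupytext's percent format is plain, diff-friendly Python in which comment markers delimit notebook cells. A minimal sketch (the exact header fields Jupytext writes may vary by version):

```python
# ---
# jupyter:
#   jupytext:
#     formats: py:percent
# ---

# %% [markdown]
# # Tensor Module
# Markdown cells live in comments, so the file stays valid Python.

# %%
# Code cell: ordinary Python, version-control friendly.
def add(a, b):
    return a + b

# %%
# Inline test cell: runs when the notebook (or script) is executed.
assert add(2, 3) == 5
```

Since the file is executable Python either way, students can work in it directly and regenerate the `.ipynb` view on demand.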
Removed files:
- All *_dev.ipynb files across modules 01-15
- Keeps repository clean and focused on source code
- Updated module.yaml files for 05_dense and 06_spatial to reference correct dev file names
- Fixed #| default_exp directives in dense_dev.py and spatial_dev.py to export to correct module names
- Fixed tensor assignment issues in 12_compression module by creating new Tensor objects instead of assigning to the read-only .data property
- Removed missing function imports from autograd integration test
- All individual module tests now pass (01_setup through 14_benchmarking)
- Generated correct module files: dense.py, spatial.py, attention.py
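The compression fix above can be sketched with a minimal stand-in class (this is not the real TinyTorch `Tensor`; it only models a read-only `.data` property and a hypothetical `prune_weights` helper to show the pattern):

```python
import numpy as np

class Tensor:
    """Minimal stand-in for TinyTorch's Tensor; .data is read-only here."""
    def __init__(self, data):
        self._data = np.asarray(data, dtype=np.float32)

    @property
    def data(self):
        return self._data

def prune_weights(t, threshold):
    """Zero out small weights. Writing `t.data = ...` would raise
    AttributeError, so we construct and return a new Tensor instead."""
    pruned = np.where(np.abs(t.data) < threshold, 0.0, t.data)
    return Tensor(pruned)
```

The same pattern applies anywhere a compression step produces modified weights: build a fresh Tensor from the new array rather than mutating the old one in place.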
✅ NBGrader solution/test structure: ### BEGIN/END SOLUTION blocks
✅ Educational TODO sections: STEP-BY-STEP, HINTS, EXAMPLES, LEARNING CONNECTIONS
✅ Immediate unit tests: proper assertions after each solution
✅ TinyTorch consistency: same patterns as tensor, layers, activations modules
✅ All tests passing: 100% success rate with comprehensive coverage
Module now follows established TinyTorch educational format:
- Detailed TODO instructions for student implementation
- Solution blocks wrapped in NBGrader tags
- Immediate feedback with unit tests after each piece
- Progress tracking with emojis and clear status messages
Ready for NBGrader processing and student use.
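The solution/test structure listed above follows NBGrader's standard markers. A generic sketch (the function here is illustrative, not taken from the module; in the released student version, everything between the markers is stripped and replaced with a stub):

```python
import numpy as np

def relu(x):
    """TODO: return x where x > 0, else 0.
    HINTS: NumPy's maximum handles this elementwise.
    """
    ### BEGIN SOLUTION
    return np.maximum(x, 0)
    ### END SOLUTION

# Immediate unit test: feedback right after the implementation.
assert np.array_equal(relu(np.array([-1.0, 0.0, 2.0])),
                      np.array([0.0, 0.0, 2.0]))
print("✅ relu works")
```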
- Implement scaled dot-product attention with masking support
- Build multi-head attention with learnable projections
- Create sinusoidal positional encoding for sequence understanding
- Add layer normalization for training stability
- Complete transformer block with residual connections
- Include self-attention wrapper and utility functions
- Full inline testing with 100% pass rate
- Educational content explaining attention mechanisms
- Foundation for modern AI architectures (GPT, BERT, etc.)
This module bridges classical ML (tensors, layers, networks) with
modern transformer architectures that power ChatGPT and contemporary AI.
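The core of the module, scaled dot-product attention with masking, computes softmax(QKᵀ/√d_k)V. A self-contained NumPy sketch (single-head, unbatched for clarity; the module's actual implementation may organize this differently):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    Q, K: (seq_len, d_k); V: (seq_len, d_v).
    mask: optional boolean array, True where attention is blocked.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # (seq_q, seq_k)
    if mask is not None:
        scores = np.where(mask, -1e9, scores)  # blocked positions ≈ -inf
    # Numerically stable softmax over the key dimension.
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights
```

A causal mask (upper-triangular True) reproduces the autoregressive attention used in GPT-style models.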
- Added educational metadata (difficulty, time_estimate) to all module.yaml files
- Updated convert_readmes.py to read from YAML instead of hardcoded mappings
- Standardized difficulty progression: ⭐ → ⭐⭐ → ⭐⭐⭐ → ⭐⭐⭐⭐ → ⭐⭐⭐⭐⭐🥷
- Fixed path resolution for YAML reading in book build process
- Eliminated duplication: single source of truth for educational metadata
- Capstone gets special ninja treatment (⭐⭐⭐⭐⭐🥷) as beyond-expert level
- Updated book generation to include 15_capstone with 5-star difficulty rating
- Changed time estimate from '20-40 hours' to 'Capstone Project' for better visitor experience
- Removed specific week references from project phases for more encouraging presentation
- Maintained detailed project structure while making timeline more flexible
- Ensures consistent 5-star rating for expert-level modules across the framework
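The educational metadata added to each `module.yaml` might look like the following (the `difficulty` and `time_estimate` fields come from the commit; the remaining field names and values are illustrative assumptions, not the actual schema):

```yaml
# modules/source/02_tensor/module.yaml (illustrative)
name: tensor
title: "Tensor"
difficulty: "⭐⭐"
time_estimate: "4-6 hours"
dependencies:
  - setup
```

With `convert_readmes.py` reading these fields directly, the book build no longer needs hardcoded per-module mappings.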
- Added bold formatting to match other modules' style
- Enhanced clarity with more specific descriptors
- Added 'efficiently' and 'with proper broadcasting' for precision
- Now consistent with activations and other modules formatting
- Improves visual hierarchy and readability in built book
Standardize module endings with motivational section + grid cards:
Added to 4 key modules:
- 01_setup: Foundation workflow mastery message
- 03_activations: Neural networks come alive message
- 06_cnn: Computer vision implementation message
- 09_optimizers: Learning algorithms message
Standard Format:
## 🎉 Ready to Build?
[Module-specific motivational content about what they're building]
Take your time, test thoroughly, and enjoy building something that really works! 🔥
[Grid cards automatically follow via converter]
Progress: 6/14 modules now have consistent endings
- ✅ 01_setup, 02_tensor, 03_activations, 06_cnn, 07_dataloader, 09_optimizers
- 🔄 8 more modules to standardize
Result: Better user experience with consistent motivation + clear next steps
Key Improvements:
1. **Meaningful titles**: Keep 'Module: CNN' format instead of just 'CNN'
2. **Clean breadcrumbs**: 'Home → CNN' instead of 'Home → Module 3: 03 Activations'
3. **Remove duplicate info**: Stop generating redundant Module Info boxes
4. **Use source formatting**: Let READMEs control their own presentation
5. **Enhanced README**: Added Jupyter Book admonition formatting to CNN module info
Results:
- More logical navigation and titles
- Single source of truth for module information
- Better formatted content boxes (CNN example with admonitions)
- Eliminated confusing duplicate content
- Cleaner, more professional presentation
README Updates:
- All modules now use consistent '🔥 Module: [Name]' format
- Removed inconsistent emojis (🧠, 🚀, 📊, 🧱, 🏋️)
- Removed module numbers and descriptive subtitles
- Clean, consistent branding across all 14 modules
Converter Updates:
- Added header cleaning logic to strip module prefixes from chapter titles
- Chapters now show clean names: 'CNN', 'Tensor', 'Setup', etc.
- No emoji or module numbers in final website headers
- Maintains clean, professional appearance
Result: Consistent source files + clean website presentation
- Updated all module references to start from 01 instead of 00
- Changed tagline to 'Build your own ML framework. Start small. Go deep.'
- Added educational foundation section linking to ML Systems book
- Updated README, documentation, CLI examples, and prerequisites
- Regenerated book content with consistent numbering throughout
- Maintains 14 modules total but with natural numbering (01-14)
✅ Rename all module directories: 00_setup → 01_setup, etc.
✅ Update convert_modules.py mappings for new directory names
✅ Update _toc.yml file paths and titles (1-14 instead of 0-13)
✅ Regenerate all overview pages with new numbering
✅ Fix all broken references in usage-paths and intro
✅ Update chapter references to use natural numbering
Benefits:
- More intuitive course progression starting from 1
- Matches academic course numbering conventions
- Eliminates confusion about 'Module 0' concept
- Cleaner mental model for students and instructors
- All references and links properly updated
Complete transformation: 14 modules now numbered 01-14
✅ Clean source file headers: 'Module X:' → clean descriptive titles
✅ Regenerate overview pages with clean headers
✅ More flexible content that works in any context
✅ Numbers still provided by book TOC structure
Changes:
- Remove 'Module X: ' prefix from all source file headers
- Headers now focus on descriptive content titles
- Book maintains proper chapter ordering via _toc.yml
- Content is more reusable across different presentations
- Flattened tests/ directory structure (removed integration/ and system/ subdirectories)
- Renamed all integration tests with _integration.py suffix for clarity
- Created test_utils.py with setup_integration_test() function
- Updated integration tests to use ONLY tinytorch package imports
- Ensured all modules are exported before running tests via tito export --all
- Optimized module test timing for fast execution (under 5 seconds each)
- Fixed MLOps test reliability and reduced timing parameters across modules
- Exported all modules (compression, kernels, benchmarking, mlops) to tinytorch package
- Shortened the verbose 119-line summary to a focused 32-line format
- Removed redundant sections and excessive congratulatory language
- Added standard Next Steps with actionable tito commands
- Now consistent with other module endings (tensor, layers, optimizers, etc.)
- Maintains essential accomplishments and real-world connections
- ✅ tito system info/doctor: Full system health check working
- ✅ tito module status: Shows all 14 modules with proper status
- ✅ tito export --all: Successfully exports all modules to tinytorch package
- ✅ tito test --all: Runs all inline tests (65/66 tests passing)
- ✅ tito nbgrader: All assignment management commands available
- ✅ tito package nbdev: NBDev integration working
- ✅ Global PATH: Added bin/ to PATH for global tito access
Only minor issue: 1 MLOps test failing due to script execution
All core functionality working perfectly for educational use
- Update MLOps module ending to match standard TinyTorch module format
- Remove verbose ending text, use concise professional summary
- Add comprehensive benchmarking integration tests
- Test benchmarking framework with real TinyTorch components
- Include tests for kernels, networks, and statistical validation
- Follow established integration test patterns
- Replace overly celebratory ending with standard progress indicator
- Use same format as other modules: 'Final Progress: [module] ready for [next step]!'
- Maintain professional, educational tone consistent with project
- Standardize module.yaml files (11-13) to match concise format of early modules
- Remove verbose sections, keep essential metadata only
- Update kernels README to match TinyTorch module style standards
- Add comprehensive integration tests for kernels module
- Test hardware-optimized operations with real TinyTorch components
- Prepare for systematic integration testing across all modules
- Complete MLOps pipeline with 4 core components:
1. ModelMonitor: Tracks performance over time, detects degradation
2. DriftDetector: Statistical tests for data distribution changes
3. RetrainingTrigger: Automated retraining based on thresholds
4. MLOpsPipeline: Orchestrates complete workflow integration
- Follows TinyTorch educational pattern exactly:
- Concept explanations before implementation
- Guided TODOs with step-by-step instructions
- Immediate testing after each component
- Progressive complexity building on previous modules
- Comprehensive summary with career applications
- Integrates all previous TinyTorch components:
- Uses training pipeline from Module 09
- Uses benchmarking from Module 12
- Uses compression from Module 10
- Demonstrates complete ecosystem integration
- Production-ready MLOps concepts:
- Performance monitoring and alerting
- Drift detection with statistical validation
- Automated retraining triggers
- Model lifecycle management
- Complete deployment workflows
- Educational value:
- Real-world MLOps applications (Netflix, Uber, Google)
- Industry connections (MLflow, Kubeflow, SageMaker)
- Career preparation for ML Engineer roles
- Complete capstone bringing together all 13 modules
- Technical implementation:
- 1700+ lines of educational content and code
- NBGrader integration for assessment
- Comprehensive test suite with 100+ points
- Auto-discovery testing framework
- Professional documentation and examples
This completes the TinyTorch ecosystem with production-ready MLOps capabilities.
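The DriftDetector's "statistical tests for data distribution changes" can be sketched with a simple two-sample mean check (a deliberate simplification: the class name comes from the commit, but the test statistic, threshold, and API here are illustrative, not the module's actual design):

```python
import numpy as np

class DriftDetector:
    """Illustrative drift detector: flags drift when a batch's mean moves
    more than z_threshold standard errors from the reference data's mean."""

    def __init__(self, reference_data, z_threshold=3.0):
        self.ref = np.asarray(reference_data, dtype=float)
        self.z_threshold = z_threshold

    def check(self, batch):
        batch = np.asarray(batch, dtype=float)
        # Standard error of the difference between the two sample means.
        se = np.sqrt(self.ref.var() / len(self.ref)
                     + batch.var() / len(batch))
        z = abs(batch.mean() - self.ref.mean()) / max(se, 1e-12)
        return bool(z > self.z_threshold)
```

A production detector would typically use distribution-level tests (e.g. Kolmogorov-Smirnov) rather than a mean shift, but the monitoring loop — compare incoming data against a reference, trigger retraining past a threshold — is the same.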
- Update kernels_dev.py with any modifications made during testing
- Add test_report.md generated by benchmarking module
- Ensure all changes from comprehensive testing are committed
- Simplify testing section to match kernels module convention
- Replace verbose summary with concise pattern matching other modules
- Fix type annotation for BenchmarkResult.metadata field
- Remove excessive detail from module summary (200+ lines → 30 lines)
- Maintain clean, professional educational structure
✅ **Generalized Language:**
- Changed 'capstone project' → 'ML project' throughout
- Renamed generate_capstone_report() → generate_project_report()
- Updated README.md to remove capstone assumptions
- Made module universally applicable
✅ **Maintained Functionality:**
- All 5 test functions still passing (100% success rate)
- Complete benchmarking workflow unchanged
- Professional reporting still generates high-quality outputs
- Statistical validation working correctly
✅ **Improved Focus:**
- Module now teaches systematic ML evaluation skills
- Applicable to research projects, industry work, personal projects
- Removed assumption of specific capstone context
- Enhanced universal applicability
✅ **Test Results:**
- All benchmarking tests passing
- Performance reporter generating professional reports
- Statistical validation working with confidence intervals
- Framework ready for any ML project evaluation
✅ **Full Module Implementation:**
- module.yaml: Proper metadata and dependencies
- README.md: Comprehensive documentation with learning objectives
- benchmarking_dev.py: Complete implementation with educational pattern
✅ **MLPerf-Inspired Architecture:**
- BenchmarkScenarios: Single-stream, server, and offline scenarios
- StatisticalValidator: Proper statistical validation and significance testing
- TinyTorchPerf: Complete framework integrating all components
- PerformanceReporter: Professional report generation for capstone projects
✅ **Educational Excellence:**
- Same structure as layers_dev.py with Build → Use → Analyze framework
- Comprehensive TODO guidance with step-by-step implementation
- Unit tests for each component with immediate feedback
- Integration testing with realistic TinyTorch models
- Professional module summary with career connections
✅ **Test Results:**
- All 5 test functions passing (100% success rate)
- Complete benchmarking workflow validated
- Statistical validation working correctly
- Professional reporting generating capstone-ready outputs
- Framework ready for student use
✅ **Capstone Preparation:**
- Students can now systematically evaluate their final projects
- Professional reporting suitable for academic presentations
- Statistical validation ensures meaningful results
- Industry-standard methodology following MLPerf patterns
🎓 **Perfect Bridge to Module 13 (MLOps):**
- Benchmarking establishes performance baselines
- MLOps will monitor production systems against these baselines
- Statistical validation transfers to production monitoring
- Professional reporting becomes production dashboards
✅ **Pedagogical Improvements:**
- Removed complex SimpleProfiler dependency
- Added simple time_kernel() function using time.perf_counter()
- Displays timing in microseconds (realistic for kernel operations)
- Focused learning on kernel optimization vs profiling complexity
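A minimal sketch of what `time_kernel()` might look like (the function name and use of `time.perf_counter()` come from the commit; the signature and best-of-N strategy here are assumptions):

```python
import time

def time_kernel(fn, *args, repeats=100):
    """Time fn(*args) and report the best of `repeats` runs in microseconds.

    Taking the minimum rather than the mean reduces the influence of
    scheduler noise, which matters at microsecond scales.
    """
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    us = best * 1e6
    print(f"{fn.__name__}: {us:.1f} µs")
    return us
```

This gives students the "Can I make this faster?" feedback loop without any profiler machinery: time the naive kernel, time the optimized one, compare.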
✅ **Clean Learning Progression:**
- Module 11 (Kernels): Simple timing - 'Can I make this faster?'
- Module 12 (Benchmarking): Professional profiling - 'How do I measure systematically?'
- Module 13 (MLOps): Production monitoring - 'How do I track in production?'
✅ **Implementation Details:**
- Fixed imports to use matmul_naive from TinyTorch layers
- Simplified baseline implementation using NumPy dot product
- Reduced cognitive load by removing measurement complexity
- Maintained all kernel optimization concepts
⚠️ **Note:** Cache-friendly implementation needs debugging but core timing functionality works
🎯 **Impact:** Students can now focus on building optimized kernels with immediate microsecond-level performance feedback, setting up perfect progression to comprehensive benchmarking in Module 12.