Commit Graph

110 Commits

Vijay Janapa Reddi
767a4dc2e5 Fix 08_dataloader: Move STANDARDIZED MODULE TESTING before Module Summary
CORRECTED ORDER:
- BEFORE: Module Summary (line 979) → STANDARDIZED MODULE TESTING (line 1137)
- AFTER: STANDARDIZED MODULE TESTING → Module Summary

Changes:
- Moved complete testing section (Module Testing + standardized cell + integration tests + run_module_tests_auto) to line 979
- Moved Module Summary section to follow after testing
- Removed duplicate testing sections
- Now follows correct pattern: Testing → Summary

Module 08_dataloader now has proper ordering
2025-07-20 09:16:12 -04:00
Vijay Janapa Reddi
23b4c4353b Fix 12_compression: Add missing Module Summary section
Module 12_compression now follows the complete standardized pattern:
1. ## 🧪 Module Testing (explanation)
2. Standardized testing cell with run_module_tests_auto
3. Integration test functions
4. ## 🎯 Module Summary (educational wrap-up) ← ADDED

Added comprehensive Module Summary covering:
- Model compression techniques (pruning, quantization)
- Production deployment skills
- Mathematical foundations
- Real-world applications and industry connections
- Professional development outcomes

All 16 modules now follow the complete standardized testing pattern
2025-07-20 09:09:47 -04:00
Vijay Janapa Reddi
39ec4a725d Fix 02_tensor: Move test code to correct location
Module 02_tensor now follows the correct pattern learned from layers_dev:
1. ## 🧪 Module Testing (explanation)
2. Standardized testing cell with run_module_tests_auto
3. Actual test functions (test_unit_tensor_creation, test_unit_tensor_properties, test_unit_tensor_arithmetic)
4. ## 🎯 Module Summary

- Moved test functions from end of file to proper location after standardized testing
- Removed duplicate test functions
- Students now see actual test implementations before the summary
- run_module_tests_auto will auto-discover and run all tests
2025-07-20 09:07:32 -04:00
Vijay Janapa Reddi
49b0a78b99 🧹 Remove duplicate nbgrader cells from 01_setup
Cleaned up duplicate/redundant nbgrader cells that were just comments referencing test functions. The actual test functions remain in their proper location after the standardized testing section.

Removed:
- Duplicate test-personal-info nbgrader cell (just a comment)
- Duplicate test-system-info nbgrader cell (just a comment)
- Redundant 'Inline Test Functions' section

This eliminates confusion and follows the clean pattern established by other modules.
2025-07-20 09:05:38 -04:00
Vijay Janapa Reddi
dd71fca2c9 Fix 01_setup: Add test code between testing section and summary
Module 01_setup now follows correct pattern:
1. ## 🧪 Module Testing (explanation)
2. Standardized testing cell with run_module_tests_auto
3. Actual test functions (test_unit_personal_info_basic, test_unit_system_info_basic)
4. ## 🎯 Module Summary

This ensures students see actual test implementations before the summary.
2025-07-20 09:03:13 -04:00
Vijay Janapa Reddi
48f7b2dea7 🧪 Add standardized module testing to all modules
Ensures a consistent testing framework across all TinyTorch modules.

Added standardized testing sections to modules that were missing them:
- 01_setup: Added complete testing section + module summary
- 02_tensor: Added testing section + comprehensive module summary
- 15_mlops: Standardized existing testing section to match convention

All modules now follow the consistent pattern:
1. ## 🧪 Module Testing (markdown explanation)
2. Locked nbgrader cell with standardized-testing ID
3. run_module_tests_auto call to discover and run all tests
4. ## 🎯 Module Summary (educational wrap-up)

Benefits:
- Consistent testing experience across all 16 modules
- Automatic test discovery and execution before module completion
- Standardized educational flow: learn → implement → test → reflect
- Professional testing practices with locked testing framework

Verification: All 16 modules now have both:
- '## 🧪 Module Testing' section ✓
- 'run_module_tests_auto' call ✓

This ensures students always verify their implementations work correctly
before moving to the next module, following TinyTorch's educational philosophy.
2025-07-20 09:00:17 -04:00
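The auto-discovery behavior these commits rely on can be sketched minimally as follows. This is an illustrative stand-in for `run_module_tests_auto`, not the actual TinyTorch helper; it only shows the discovery pattern the commits describe (find every `test_*` function, run it, report results):

```python
def test_unit_example_addition():
    assert 1 + 1 == 2

def test_unit_example_string():
    assert "tiny" + "torch" == "tinytorch"

def run_module_tests_auto(namespace):
    """Discover every callable named test_* in a namespace and run it."""
    results = {}
    for name in sorted(namespace):
        fn = namespace[name]
        if name.startswith("test_") and callable(fn):
            try:
                fn()
                results[name] = "passed"
            except AssertionError:
                results[name] = "failed"
    return results

results = run_module_tests_auto(globals())
```

Because discovery is by name, any function a student defines with a `test_` prefix is picked up automatically before the module summary.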
Vijay Janapa Reddi
6d30438748 🧪 Fix setup module: Wrap all test code in test_ functions
- Remove loose test code from nbgrader cells that ran automatically on import
- Keep only proper test_unit_personal_info_basic() and test_unit_system_info_basic() functions
- Prevents tests from running when module is imported as package
- Follows established test naming conventions (test_unit_*)
- Improves module reliability and reduces side effects

Fixed issues:
- NBGrader cells now reference test functions instead of running test code directly
- All assertions and test logic properly contained in named test functions
- Module can be imported without automatically executing tests
2025-07-20 08:56:18 -04:00
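The before/after this commit describes can be sketched like this. The function names match the commit; their bodies are illustrative, not the real setup module's checks:

```python
import sys

# BEFORE (problematic): loose assertions at module top level ran on import:
#   assert sys.version_info >= (3, 8), "Python 3.8+ required"
#
# AFTER: the same checks live inside named test_ functions, so importing
# the module as a package has no side effects.

def test_unit_system_info_basic():
    # Illustrative body: verify basic interpreter facts.
    assert sys.version_info.major == 3
    assert isinstance(sys.platform, str)

def test_unit_personal_info_basic():
    # Illustrative body: a stand-in for the student's personal-info cell.
    info = {"name": "Ada Lovelace", "email": "ada@example.com"}
    assert info["name"] and "@" in info["email"]

# Tests execute only when called explicitly (e.g. by run_module_tests_auto).
test_unit_system_info_basic()
test_unit_personal_info_basic()
```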
Vijay Janapa Reddi
35bf079749 🧹 Remove backup files - Clean repository maintenance
- Delete 8 *_backup.py files from modules/source directories
- Remove tito/commands/test.py.backup file
- Eliminates obsolete backup files from version control
- Keeps repository clean and focused on current implementations
- Reduces repository size and improves maintainability

Removed files:
- modules/source/02_tensor/tensor_dev_backup.py
- modules/source/03_activations/activations_dev_backup.py
- modules/source/04_layers/layers_dev_backup.py
- modules/source/05_dense/dense_dev_backup.py
- modules/source/06_spatial/spatial_dev_backup.py
- modules/source/08_dataloader/dataloader_dev_backup.py
- modules/source/09_autograd/autograd_dev_backup.py
- modules/source/13_kernels/kernels_dev_backup.py
- tito/commands/test.py.backup
2025-07-20 08:42:59 -04:00
Vijay Janapa Reddi
771ed98a80 🧹 Remove Jupyter notebooks from modules/source - Python-first workflow
- Delete all 15 .ipynb files from modules/source directories
- Align with TinyTorch's Python-first development philosophy
- .py files are the source of truth, .ipynb files are temporary outputs
- Prevents version control conflicts with notebook metadata
- Students work directly with .py files using Jupytext format
- Notebooks can be regenerated when needed via 'tito nbdev generate'

Removed files:
- All *_dev.ipynb files across modules 01-15
- Keeps repository clean and focused on source code
2025-07-20 08:41:26 -04:00
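In the Jupytext workflow described above, a `*_dev.py` file carries notebook structure as plain Python, so `.ipynb` files can always be regenerated from source. A minimal sketch of the percent format (the cell contents are illustrative):

```python
# A *_dev.py file in Jupytext "percent" format: each `# %%` marker opens a
# new notebook cell, and `# %% [markdown]` marks a markdown cell.
dev_source = '''\
# %% [markdown]
# # Tensor Module
# Build the core Tensor class step by step.

# %%
import numpy as np

# %%
x = np.zeros((2, 3))
print(x.shape)
'''

# Jupytext (invoked here via 'tito nbdev generate') turns this text into an
# .ipynb; counting the cell markers shows the structure.
num_cells = dev_source.count("# %%")
```

Since the `.py` file is the source of truth, notebook metadata never enters version control.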
Vijay Janapa Reddi
f77db43975 Production: Standardize test naming in optimization and deployment modules
- Compression: test_compression_metrics → test_unit_compression_metrics
- Compression: test_magnitude_pruning → test_unit_magnitude_pruning
- Compression: test_quantization → test_unit_quantization
- Compression: test_distillation → test_unit_distillation
- Compression: test_structured_pruning → test_unit_structured_pruning
- Compression: test_comprehensive_comparison → test_unit_comprehensive_comparison
- Kernels: All test_* → test_unit_* except test_kernel_integration_* → test_module_*
- Benchmarking: All test_* → test_unit_* except test_comprehensive_* → test_module_*
- MLOps: All test_* → test_unit_* except test_comprehensive_integration → test_module_*
- Finalizes test naming standardization across production-ready modules
2025-07-20 08:39:27 -04:00
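The unit-vs-module naming split applied in this and the surrounding commits can be illustrated with a tiny renamed pair. `test_unit_quantization` is a name from the commit; the module-test name and both bodies are illustrative:

```python
# Unit test: exercises one component in isolation.
# (was: test_quantization)
def test_unit_quantization():
    scale = 0.5
    quantized = [round(v / scale) for v in [0.4, 1.1]]
    assert quantized == [1, 2]

# Module test: exercises behavior that crosses module boundaries.
# (illustrative name following the test_module_* convention)
def test_module_kernel_matmul():
    a = [[1, 2], [3, 4]]
    b = [[1, 0], [0, 1]]  # identity
    product = [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
               for row in a]
    assert product == a

test_unit_quantization()
test_module_kernel_matmul()
```

The prefix makes it trivial for tooling (and readers) to tell isolated checks from cross-module dependency checks.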
Vijay Janapa Reddi
53abd2a1e9 🚀 Training System: Standardize test naming in ML training pipeline
- DataLoader: test_integration_* → test_module_* (module dependency tests)
- Autograd: test_variable_class → test_unit_variable_class
- Autograd: test_add_operation → test_unit_add_operation
- Autograd: test_multiply_operation → test_unit_multiply_operation
- Autograd: test_subtract_operation → test_unit_subtract_operation
- Autograd: test_chain_rule → test_unit_chain_rule
- Autograd: test_neural_network_training → test_module_neural_network_training
- Optimizers: test_integration_* → test_module_* (module dependency tests)
- Training: All test_* → test_unit_* except test_training → test_module_training
- Completes test standardization for complete training pipeline
2025-07-20 08:39:13 -04:00
Vijay Janapa Reddi
dfad756278 🧠 Core ML: Standardize test naming in neural network building blocks
- Activations: test_integration_* → test_module_* (module dependency tests)
- Layers: test_matrix_multiplication → test_unit_matrix_multiplication
- Layers: test_dense_layer → test_unit_dense_layer
- Layers: test_layer_activation → test_unit_layer_activation
- Dense: test_integration_* → test_module_* (module dependency tests)
- Spatial: test_integration_* → test_module_* (module dependency tests)
- Attention: test_integration_* → test_module_* (module dependency tests)
- Establishes unit vs module test distinction for neural network components
2025-07-20 08:39:00 -04:00
Vijay Janapa Reddi
82e18761fe Foundation: Standardize test naming in setup and tensor modules
- Rename test functions to follow test_unit_<name> convention
- Setup module: test_personal_info → test_unit_personal_info_basic
- Setup module: test_system_info → test_unit_system_info_basic
- Tensor module: test_tensor_* → test_unit_tensor_*
- Establishes consistent unit test naming for core foundation modules
2025-07-20 08:38:46 -04:00
Vijay Janapa Reddi
d4d6277604 🔧 Complete module restructuring and integration fixes
📦 Module File Organization:
- Renamed networks_dev.py → dense_dev.py in 05_dense module
- Renamed cnn_dev.py → spatial_dev.py in 06_spatial module
- Added new 07_attention module with attention_dev.py
- Updated module.yaml files to reference correct filenames
- Updated #| default_exp directives for proper package exports

🔄 Core Package Updates:
- Added tinytorch.core.dense (Sequential, MLP architectures)
- Added tinytorch.core.spatial (Conv2D, pooling operations)
- Added tinytorch.core.attention (self-attention mechanisms)
- Updated all core modules with latest implementations
- Fixed tensor assignment issues in compression module

🧪 Test Integration Fixes:
- Updated integration tests to use correct module imports
- Fixed tensor activation tests for new module structure
- Ensured compatibility with renamed components
- Maintained 100% individual module test success rate

Result: Complete 14-module TinyTorch framework with proper organization,
working integrations, and comprehensive test coverage ready for production use.
2025-07-18 02:10:49 -04:00
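The `#| default_exp` directives mentioned above are nbdev export headers: the first cell of a dev file names the package module its exported cells land in. A sketch of the convention (the directive and target path follow the commit; the class body is an illustrative skeleton, not the real implementation):

```python
# First cell of modules/source/06_spatial/spatial_dev.py tells nbdev where
# exported code goes in the package:
#
#   #| default_exp core.spatial
#
# Cells tagged for export then become part of tinytorch.core.spatial:
#
#   #| export
class Conv2D:  # illustrative skeleton only
    def __init__(self, in_channels, out_channels, kernel_size):
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.kernel_size = kernel_size

layer = Conv2D(3, 16, 3)
```

Mismatched `default_exp` targets are exactly what the rename commits below had to fix after `networks_dev.py` and `cnn_dev.py` moved.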
Vijay Janapa Reddi
13ac7ee885 Complete comprehensive capstone README rewrite
🎯 Major improvements to 16_capstone module documentation:

📚 Enhanced Structure:
- Updated to reflect actual 14-module progression (not 15)
- Celebrates complete ML framework students built
- Shows concrete working code examples using TinyTorch components

🚀 5 Specialized Tracks:
1. Performance Ninja - Speed/memory optimization, GPU acceleration
2. Algorithm Architect - Modern ML algorithms, Vision Transformers
3. Systems Engineer - Production infrastructure, distributed training
4. Benchmarking Scientist - Scientific framework comparison
5. Developer Experience Master - Debugging tools, visualization

Professional Framework:
- 4-phase timeline: Analysis → Implementation → Optimization → Evaluation
- Concrete project examples with code samples for each track
- Clear success criteria and measurable goals
- Comprehensive deliverables structure (Technical Report, Code, Analysis, Demo)
- Pro tips for framework engineering success

🎓 Outcome: Transforms basic optimization into comprehensive framework
engineering specialization that demonstrates production ML systems mastery
2025-07-18 02:07:30 -04:00
Vijay Janapa Reddi
442e860d5f Fix module file naming and tensor assignment issues
- Updated module.yaml files for 05_dense and 06_spatial to reference correct dev file names
- Fixed #| default_exp directives in dense_dev.py and spatial_dev.py to export to correct module names
- Fixed tensor assignment issues in 12_compression module by creating new Tensor objects instead of trying to assign to .data property
- Removed missing function imports from autograd integration test
- All individual module tests now pass (01_setup through 14_benchmarking)
- Generated correct module files: dense.py, spatial.py, attention.py
2025-07-18 01:56:07 -04:00
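The tensor-assignment fix can be sketched as follows: when `.data` is a read-only property, compression code must build a fresh Tensor rather than mutate in place. The Tensor class here is a minimal stand-in, not the real TinyTorch one:

```python
import numpy as np

class Tensor:
    """Minimal stand-in: `.data` is a read-only property."""
    def __init__(self, data):
        self._data = np.asarray(data, dtype=float)

    @property
    def data(self):
        return self._data

def prune_by_magnitude(t, threshold):
    # BROKEN pattern the commit removed:  t.data = pruned
    # (AttributeError: `data` has no setter). Instead, build a new Tensor.
    pruned = np.where(np.abs(t.data) < threshold, 0.0, t.data)
    return Tensor(pruned)

w = Tensor([0.05, -0.8, 0.3])
pruned_w = prune_by_magnitude(w, threshold=0.1)
```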
Vijay Janapa Reddi
59d58718f9 refactor: Implement learner-focused module progression with better naming
Renamed modules for clearer pedagogical flow:
- 05_networks → 05_dense (multi-layer dense/fully connected networks)
- 06_cnn → 06_spatial (convolutional networks for spatial patterns)
- 06_attention → 07_attention (attention mechanisms for sequences)

Shifted remaining modules down by 1:
- 07_dataloader → 08_dataloader
- 08_autograd → 09_autograd
- 09_optimizers → 10_optimizers
- 10_training → 11_training
- 11_compression → 12_compression
- 12_kernels → 13_kernels
- 13_benchmarking → 14_benchmarking
- 14_mlops → 15_mlops
- 15_capstone → 16_capstone

Updated module metadata (module.yaml files):
- Updated names, descriptions, dependencies
- Fixed prerequisite chains and enables relationships
- Updated export paths to match new names

New learner progression:
Foundation → Individual Layers → Dense Networks → Spatial Networks → Attention Networks → Training Pipeline

Perfect pedagogical flow: Build one layer → Stack dense layers → Add spatial patterns → Add attention mechanisms → Learn to train them all.
2025-07-18 00:12:50 -04:00
Vijay Janapa Reddi
7b85000c18 refactor: Remove '_comprehensive' suffixes from test function names
- test_attention_mechanism_comprehensive() → test_attention_mechanism()
- test_self_attention_wrapper_comprehensive() → test_self_attention_wrapper()
- test_attention_masking_comprehensive() → test_masking_utilities()

Follows standard TinyTorch naming conventions without unnecessary suffixes.
2025-07-18 00:03:40 -04:00
Vijay Janapa Reddi
190181306d feat: Complete attention module with auto testing and comprehensive summary
- Added standardized auto testing section with run_module_tests_auto()
- Added comprehensive module summary with detailed explanations
- Added test functions for comprehensive validation
- All core attention functionality working perfectly (100% success rate)

Module now complete with:
- Scaled dot-product attention implementation
- Self-attention wrapper class
- Complete masking utilities (causal, padding, bidirectional)
- Integration tests and behavior analysis
- Standardized TinyTorch testing framework integration
- Comprehensive educational summary covering:
  * Mathematical foundations (Attention formula)
  * Real-world applications (ChatGPT, BERT, GPT-4)
  * Architecture patterns and performance characteristics
  * Next steps and transformer building blocks

Ready for student use and NBGrader processing. Foundation for advanced transformer modules.
2025-07-18 00:01:59 -04:00
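The core mechanism this module implements follows the standard formula Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. A compact NumPy sketch of that formula (not the module's actual code, which students implement themselves):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Standard attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # (seq_q, seq_k)
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # block masked positions
    # Numerically stable softmax over the key axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

Q = np.eye(2)
K = np.eye(2)
V = np.array([[1.0, 0.0], [0.0, 1.0]])
out, attn = scaled_dot_product_attention(Q, K, V)
```

Causal and padding masks (as in the module's masking utilities) are just different boolean `mask` arguments to the same function.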
Vijay Janapa Reddi
b3b02eb07f refactor: Restructure attention module to match TinyTorch NBGrader patterns
- NBGrader solution/test structure: ### BEGIN/END SOLUTION blocks
- Educational TODO sections: STEP-BY-STEP, HINTS, EXAMPLES, LEARNING CONNECTIONS
- Immediate unit tests: proper assertions after each solution
- TinyTorch consistency: same patterns as tensor, layers, activations modules
- All tests passing: 100% success rate with comprehensive coverage

Module now follows established TinyTorch educational format:
- Detailed TODO instructions for student implementation
- Solution blocks wrapped in NBGrader tags
- Immediate feedback with unit tests after each piece
- Progress tracking with emojis and clear status messages

Ready for NBGrader processing and student use.
2025-07-17 23:17:06 -04:00
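The NBGrader structure referenced here wraps instructor solutions in `### BEGIN SOLUTION` / `### END SOLUTION` markers, which NBGrader strips when generating the student version, followed by an immediate unit test. A schematic cell (the function and its body are illustrative):

```python
def attention_scale(d_k):
    """Return the 1/sqrt(d_k) scaling factor for attention scores.

    TODO: implement the scaling factor.
    HINT: use d_k ** 0.5.
    """
    ### BEGIN SOLUTION
    return 1.0 / (d_k ** 0.5)
    ### END SOLUTION

# Immediate unit test right after the solution cell, per the module pattern:
# students get feedback on each piece before moving on.
assert abs(attention_scale(4) - 0.5) < 1e-12
```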
Vijay Janapa Reddi
05f59ca56a refactor: Simplify attention module to follow TinyTorch patterns
CHANGED: Simplified attention module to focus on core concepts
- Remove multi-head attention, positional encoding, layer norm, transformer block
- Keep only: scaled_dot_product_attention, SelfAttention, masking utilities
- Reduce complexity from  to  (matches CNN level)
- Cut from 885 lines to ~440 lines (aligned with other modules)
- Update dependencies: only requires tensor (not layers/activations/networks)
- Change pedagogical framework: 'Build → Use → Understand' (not Reflect)
- Focus on single concept per module (following established TinyTorch pattern)

RESULT: Clean, focused attention module teaching core mechanism
- Students master fundamental attention before advanced concepts
- Consistent with TinyTorch's one-concept-per-module approach
- Foundation for future multi-head attention and transformer modules
- All tests passing (100% success rate)
2025-07-17 23:11:33 -04:00
Vijay Janapa Reddi
25e9c2e74b feat: Add comprehensive attention module (06_attention)
- Implement scaled dot-product attention with masking support
- Build multi-head attention with learnable projections
- Create sinusoidal positional encoding for sequence understanding
- Add layer normalization for training stability
- Complete transformer block with residual connections
- Include self-attention wrapper and utility functions
- Full inline testing with 100% pass rate
- Educational content explaining attention mechanisms
- Foundation for modern AI architectures (GPT, BERT, etc.)

This module bridges classical ML (tensors, layers, networks) with
modern transformer architectures that power ChatGPT and contemporary AI.
2025-07-17 22:58:19 -04:00
Vijay Janapa Reddi
dc77c3de0e docs: Clean up whitespace and formatting in module READMEs
- Fixed trailing whitespace in several module README files
- Ensures consistent formatting across all documentation
2025-07-16 11:50:23 -04:00
Vijay Janapa Reddi
7b620d98aa refactor: Replace "Master" with "Reflect" in learning framework
- Updated learning philosophy from "Build, Use, Master" to "Build, Use, Reflect"
- Changed setup module: "Build → Use → Reflect"
- Changed capstone module: "Build → Optimize → Reflect"
- Promotes inclusive language and emphasizes metacognition over dominance
- Better pedagogical approach focusing on thoughtful analysis and system thinking
2025-07-16 11:48:28 -04:00
Vijay Janapa Reddi
507cdf50f5 refactor: Implement YAML-based difficulty and time system
- Added educational metadata (difficulty, time_estimate) to all module.yaml files
- Updated convert_readmes.py to read from YAML instead of hardcoded mappings
- Standardized difficulty progression: 🥷
- Fixed path resolution for YAML reading in book build process
- Eliminated duplication: single source of truth for educational metadata
- Capstone gets special ninja treatment (🥷) as beyond-expert level
2025-07-16 11:48:09 -04:00
Vijay Janapa Reddi
c294a8be66 Fix capstone difficulty rating and improve timeline messaging
- Updated book generation to include 15_capstone with 5-star difficulty rating
- Changed time estimate from '20-40 hours' to 'Capstone Project' for better visitor experience
- Removed specific week references from project phases for more encouraging presentation
- Maintained detailed project structure while making timeline more flexible
- Ensures consistent 5-star rating for expert-level modules across the framework
2025-07-16 11:11:58 -04:00
Vijay Janapa Reddi
566c2d512f Add Module 15: Capstone Framework Optimization
- Created comprehensive capstone module focused on framework engineering
- 5 optimization tracks: performance, algorithms, systems, analysis, developer tools
- Detailed example project: matrix operation optimization with 70x speedup
- Project structure: 4 phases with concrete deliverables and success criteria
- Updated table of contents and course navigation to include capstone
- README reflects complete 15-module course structure
- Realistic framework-focused projects instead of disconnected applications
2025-07-16 10:30:01 -04:00
Vijay Janapa Reddi
6fa4bde3d1 Fix tensor module learning objectives formatting
- Added bold formatting to match other modules' style
- Enhanced clarity with more specific descriptors
- Added 'efficiently' and 'with proper broadcasting' for precision
- Now consistent with activations and other modules formatting
- Improves visual hierarchy and readability in built book
2025-07-16 08:32:25 -04:00
Vijay Janapa Reddi
19a8123333 Standardize all 14 module READMEs with consistent structure
Complete standardization of all TinyTorch module READMEs:

📊 **Module Info**: Consistent difficulty, time, prerequisites, next steps
🎯 **Learning Objectives**: Clear, measurable, action-oriented outcomes
🧠 **Pedagogical Framework**: Build → Use → [Context-specific verb]
📚 **What You'll Build**: Concrete code examples and implementations
🚀 **Getting Started**: Prerequisites check + development workflow
🧪 **Testing**: Comprehensive test coverage + inline feedback
🎯 **Key Concepts**: Real-world applications + technical foundations
🎉 **Ready to Build**: Motivational + grid cards for all modules

All 14 modules now follow identical structure:
- 01_setup: Foundation workflow mastery
- 02_tensor: Core data structures
- 03_activations: Neural network fundamentals
- 04_layers: Building blocks
- 05_networks: Architecture design
- 06_cnn: Computer vision foundations
- 07_dataloader: Data pipeline engineering
- 08_autograd: Automatic differentiation
- 09_optimizers: Learning algorithms
- 10_training: End-to-end orchestration
- 11_compression: Model optimization
- 12_kernels: Performance optimization
- 13_benchmarking: Systematic evaluation
- 14_mlops: Production deployment (capstone)

🎓 **Student Experience**: Predictable navigation, clear expectations, motivational flow
👨‍🏫 **Instructor Experience**: Professional consistency, easy maintenance, coherent course

This establishes the single source of truth that will automatically convert to
clean website chapters via book/convert_readmes.py
2025-07-16 01:44:49 -04:00
Vijay Janapa Reddi
9f8a5a8aa3 PILOT: Implement standardized module README structure (Tensor module)
New Standard Structure Applied:
- 📊 Module Info - Consistent difficulty, time, prerequisites
- 🎯 Learning Objectives - Clear, measurable outcomes
- 🧠 Build → Use → Understand - Pedagogical framework
- 📚 What You'll Build - Concrete code examples
- 🚀 Getting Started - Prerequisites check + workflow
- 🧪 Testing Your Implementation - Inline + module + manual tests
- 🎯 Key Concepts - Real-world connections + core ideas
- 🎉 Ready to Build? - Motivational ending + grid cards

Benefits for Students:
- Predictable navigation structure
- Clear learning outcomes upfront
- Concrete examples of what they'll build
- Multiple testing approaches for confidence
- Real-world context for motivation

Benefits for Instructors:
- Professional consistency across modules
- Clear pedagogical progression
- Easy to maintain and update
- Coherent course experience

Next: Review this pilot, then apply to remaining 13 modules
2025-07-16 01:31:00 -04:00
Vijay Janapa Reddi
647b5677b5 Add consistent 'Ready to Build?' endings to README modules
Standardize module endings with motivational section + grid cards:

Added to 4 key modules:
- 01_setup: Foundation workflow mastery message
- 03_activations: Neural networks come alive message
- 06_cnn: Computer vision implementation message
- 09_optimizers: Learning algorithms message

Standard Format:
## 🎉 Ready to Build?
[Module-specific motivational content about what they're building]
Take your time, test thoroughly, and enjoy building something that really works! 🔥

[Grid cards automatically follow via converter]

Progress: 6/14 modules now have consistent endings
- 01_setup, 02_tensor, 03_activations, 06_cnn, 07_dataloader, 09_optimizers
- 🔄 8 more modules to standardize

Result: Better user experience with consistent motivation + clear next steps
2025-07-16 01:29:00 -04:00
Vijay Janapa Reddi
672f959dde Achieve complete emoji consistency across all modules for student materials
Decision: Keep emojis in section headers for better student experience

Rationale:
- 📊 🎯 🧠 📚 emojis provide visual scanning and semantic meaning
- More engaging and approachable for students
- Clear information architecture (Info, Objectives, Concepts, Implementation)
- 13/14 modules already used this pattern - now 14/14 consistent
- Maintains TOC navigation (vs hidden admonition boxes)

Changes:
- Fixed 01_setup: Added 🎯 Learning Objectives, 🧠 Overview, 📚 What You'll Build
- Fixed 11_compression: Added 📊 Module Info, 🎯 Learning Objectives, 🧠 Overview
- Reverted 06_cnn: Back to ## headers (from admonition boxes) for TOC visibility
- All modules now follow: 📊 Module Info → 🎯 Learning Objectives → 🧠 Build/Overview → 📚 What You'll Build

Result: Consistent, student-friendly visual hierarchy across all 14 modules
2025-07-16 01:25:32 -04:00
Vijay Janapa Reddi
00b11a6bf6 Improve module formatting and navigation consistency
Key Improvements:
1. **Meaningful titles**: Keep 'Module: CNN' format instead of just 'CNN'
2. **Clean breadcrumbs**: 'Home → CNN' instead of 'Home → Module 3: 03 Activations'
3. **Remove duplicate info**: Stop generating redundant Module Info boxes
4. **Use source formatting**: Let READMEs control their own presentation
5. **Enhanced README**: Added Jupyter Book admonition formatting to CNN module info

Results:
- More logical navigation and titles
- Single source of truth for module information
- Better formatted content boxes (CNN example with admonitions)
- Eliminated confusing duplicate content
- Cleaner, more professional presentation
2025-07-16 01:22:09 -04:00
Vijay Janapa Reddi
e1fd90af2f Standardize module headers - consistent 🔥 emoji and clean chapter titles
README Updates:
- All modules now use consistent '🔥 Module: [Name]' format
- Removed inconsistent emojis (🧠, 🚀, 📊, 🧱, 🏋️)
- Removed module numbers and descriptive subtitles
- Clean, consistent branding across all 14 modules

Converter Updates:
- Added header cleaning logic to strip module prefixes from chapter titles
- Chapters now show clean names: 'CNN', 'Tensor', 'Setup', etc.
- No emoji or module numbers in final website headers
- Maintains clean, professional appearance

Result: Consistent source files + clean website presentation
2025-07-16 01:18:07 -04:00
Vijay Janapa Reddi
074f695fb3 Generate notebook files from Python modules for direct access 2025-07-15 23:51:56 -04:00
Vijay Janapa Reddi
01e4aec62b Update module numbering from 00-13 to 01-14 and refresh tagline
- Updated all module references to start from 01 instead of 00
- Changed tagline to 'Build your own ML framework. Start small. Go deep.'
- Added educational foundation section linking to ML Systems book
- Updated README, documentation, CLI examples, and prerequisites
- Regenerated book content with consistent numbering throughout
- Maintains 14 modules total but with natural numbering (01-14)
2025-07-15 21:11:07 -04:00
Vijay Janapa Reddi
d82c75f9dc Renumber modules from 00-13 to 01-14 for natural numbering
- Rename all module directories: 00_setup → 01_setup, etc.
- Update convert_modules.py mappings for new directory names
- Update _toc.yml file paths and titles (1-14 instead of 0-13)
- Regenerate all overview pages with new numbering
- Fix all broken references in usage-paths and intro
- Update chapter references to use natural numbering

Benefits:
- More intuitive course progression starting from 1
- Matches academic course numbering conventions
- Eliminates confusion about 'Module 0' concept
- Cleaner mental model for students and instructors
- All references and links properly updated

Complete transformation: 14 modules now numbered 01-14
2025-07-15 18:51:36 -04:00
Vijay Janapa Reddi
76225baa42 Remove module numbers from headers for cleaner presentation
- Clean source file headers: 'Module X:' → clean descriptive titles
- Regenerate overview pages with clean headers
- More flexible content that works in any context
- Numbers still provided by book TOC structure

Changes:
- Remove 'Module X: ' prefix from all source file headers
- Headers now focus on descriptive content titles
- Book maintains proper chapter ordering via _toc.yml
- Content is more reusable across different presentations
2025-07-15 18:23:18 -04:00
Vijay Janapa Reddi
05391eb550 feat: Restructure integration tests and optimize module timing
- Flattened tests/ directory structure (removed integration/ and system/ subdirectories)
- Renamed all integration tests with _integration.py suffix for clarity
- Created test_utils.py with setup_integration_test() function
- Updated integration tests to use ONLY tinytorch package imports
- Ensured all modules are exported before running tests via tito export --all
- Optimized module test timing for fast execution (under 5 seconds each)
- Fixed MLOps test reliability and reduced timing parameters across modules
- Exported all modules (compression, kernels, benchmarking, mlops) to tinytorch package
2025-07-14 23:37:50 -04:00
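The integration-test convention above (package imports only, shared setup helper, `_integration.py` suffix) can be sketched schematically. The real `tinytorch` import is shown as a comment; the stand-in class and the helper's body are assumptions for illustration:

```python
# test_tensor_integration.py (schematic): integration tests import ONLY from
# the exported tinytorch package, never from modules/source/*_dev.py files.
#
#   from tinytorch.core.tensor import Tensor   # the real import in the repo
#
# Stand-in so this sketch runs on its own:
class Tensor:
    def __init__(self, data):
        self.data = list(data)

def setup_integration_test():
    """Illustrative stand-in for the shared helper in test_utils.py."""
    return {"exported": True}

def test_module_tensor_roundtrip():
    state = setup_integration_test()
    assert state["exported"]
    t = Tensor([1, 2, 3])
    assert t.data == [1, 2, 3]

test_module_tensor_roundtrip()
```

Keeping integration tests on package imports is why `tito export --all` must run first: the tests exercise what students will actually import.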
Vijay Janapa Reddi
604cb2ac36 Fix MLOps module summary to match concise TinyTorch style
- Shortened verbose 119-line summary to focused 32-line format
- Removed redundant sections and excessive congratulatory language
- Added standard Next Steps with actionable tito commands
- Now consistent with other module endings (tensor, layers, optimizers, etc.)
- Maintains essential accomplishments and real-world connections
2025-07-14 21:11:08 -04:00
Vijay Janapa Reddi
025869fb6d Verify tito CLI functionality - all commands working correctly
- tito system info/doctor: Full system health check working
- tito module status: Shows all 14 modules with proper status
- tito export --all: Successfully exports all modules to tinytorch package
- tito test --all: Runs all inline tests (65/66 tests passing)
- tito nbgrader: All assignment management commands available
- tito package nbdev: NBDev integration working
- Global PATH: Added bin/ to PATH for global tito access

Only minor issue: 1 MLOps test failing due to script execution
All core functionality working perfectly for educational use
2025-07-14 19:45:36 -04:00
Vijay Janapa Reddi
1c81bfbec1 Fix MLOps module ending and add benchmarking integration tests
- Update MLOps module ending to match standard TinyTorch module format
- Remove verbose ending text, use concise professional summary
- Add comprehensive benchmarking integration tests
- Test benchmarking framework with real TinyTorch components
- Include tests for kernels, networks, and statistical validation
- Follow established integration test patterns
2025-07-14 19:19:28 -04:00
Vijay Janapa Reddi
3531a44c5f Fix MLOps module ending to match consistent TinyTorch style
- Replace overly celebratory ending with standard progress indicator
- Use same format as other modules: 'Final Progress: [module] ready for [next step]!'
- Maintain professional, educational tone consistent with project
2025-07-14 19:14:09 -04:00
Vijay Janapa Reddi
1f58841e65 Clean up module configurations and add kernels integration tests
- Standardize module.yaml files (11-13) to match concise format of early modules
- Remove verbose sections, keep essential metadata only
- Update kernels README to match TinyTorch module style standards
- Add comprehensive integration tests for kernels module
- Test hardware-optimized operations with real TinyTorch components
- Prepare for systematic integration testing across all modules
2025-07-14 19:12:20 -04:00
Vijay Janapa Reddi
d60821892f Implement complete MLOps module (13_mlops) with production ML system lifecycle
- Complete MLOps pipeline with 4 core components:
  1. ModelMonitor: Tracks performance over time, detects degradation
  2. DriftDetector: Statistical tests for data distribution changes
  3. RetrainingTrigger: Automated retraining based on thresholds
  4. MLOpsPipeline: Orchestrates complete workflow integration

- Follows TinyTorch educational pattern exactly:
  - Concept explanations before implementation
  - Guided TODOs with step-by-step instructions
  - Immediate testing after each component
  - Progressive complexity building on previous modules
  - Comprehensive summary with career applications

- Integrates all previous TinyTorch components:
  - Uses training pipeline from Module 09
  - Uses benchmarking from Module 12
  - Uses compression from Module 10
  - Demonstrates complete ecosystem integration

- Production-ready MLOps concepts:
  - Performance monitoring and alerting
  - Drift detection with statistical validation
  - Automated retraining triggers
  - Model lifecycle management
  - Complete deployment workflows

- Educational value:
  - Real-world MLOps applications (Netflix, Uber, Google)
  - Industry connections (MLflow, Kubeflow, SageMaker)
  - Career preparation for ML Engineer roles
  - Complete capstone bringing together all 13 modules

- Technical implementation:
  - 1700+ lines of educational content and code
  - NBGrader integration for assessment
  - Comprehensive test suite with 100+ points
  - Auto-discovery testing framework
  - Professional documentation and examples

This completes the TinyTorch ecosystem with production-ready MLOps
2025-07-14 18:05:31 -04:00
Vijay Janapa Reddi
5bbb78f42a Add pending changes from module testing
- Update kernels_dev.py with any modifications made during testing
- Add test_report.md generated by benchmarking module
- Ensure all changes from comprehensive testing are committed
2025-07-14 17:23:16 -04:00
Vijay Janapa Reddi
833bf7eaa4 Fix Module 12 benchmarking to follow standardized patterns
- Simplify testing section to match kernels module convention
- Replace verbose summary with concise pattern matching other modules
- Fix type annotation for BenchmarkResult.metadata field
- Remove excessive detail from module summary (200+ lines → 30 lines)
- Maintain clean, professional educational structure
2025-07-14 16:45:03 -04:00
Vijay Janapa Reddi
b5678cb8c9 🔄 Remove Capstone-Specific Language from Benchmarking Module
**Generalized Language:**
- Changed 'capstone project' → 'ML project' throughout
- Renamed generate_capstone_report() → generate_project_report()
- Updated README.md to remove capstone assumptions
- Made module universally applicable

**Maintained Functionality:**
- All 5 test functions still passing (100% success rate)
- Complete benchmarking workflow unchanged
- Professional reporting still generates high-quality outputs
- Statistical validation working correctly

**Improved Focus:**
- Module now teaches systematic ML evaluation skills
- Applicable to research projects, industry work, personal projects
- Removed assumption of specific capstone context
- Enhanced universal applicability

**Test Results:**
- All benchmarking tests passing
- Performance reporter generating professional reports
- Statistical validation working with confidence intervals
- Framework ready for any ML project evaluation
2025-07-14 16:03:35 -04:00
Vijay Janapa Reddi
b6f4081338 🎯 Complete Module 12: Benchmarking - MLPerf-Inspired Performance Evaluation
**Full Module Implementation:**
- module.yaml: Proper metadata and dependencies
- README.md: Comprehensive documentation with learning objectives
- benchmarking_dev.py: Complete implementation with educational pattern

**MLPerf-Inspired Architecture:**
- BenchmarkScenarios: Single-stream, server, and offline scenarios
- StatisticalValidator: Proper statistical validation and significance testing
- TinyTorchPerf: Complete framework integrating all components
- PerformanceReporter: Professional report generation for capstone projects
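(As a rough sketch of the kind of check a StatisticalValidator might perform: a mean with a normal-approximation 95% confidence interval. The class name is from this commit; the function below and its 1.96 z-score formula are generic assumptions, not the module's actual API.)

```python
import statistics

def confidence_interval_95(samples):
    """Normal-approximation 95% CI for the mean of timing samples.
    Generic sketch; the module's StatisticalValidator may instead use
    t-scores or nonparametric bounds."""
    mean = statistics.fmean(samples)
    sem = statistics.stdev(samples) / len(samples) ** 0.5  # standard error of the mean
    return mean - 1.96 * sem, mean + 1.96 * sem

latencies_ms = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0]
low, high = confidence_interval_95(latencies_ms)
print(f"mean latency 95% CI: [{low:.2f}, {high:.2f}] ms")
```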

**Educational Excellence:**
- Same structure as layers_dev.py with Build → Use → Analyze framework
- Comprehensive TODO guidance with step-by-step implementation
- Unit tests for each component with immediate feedback
- Integration testing with realistic TinyTorch models
- Professional module summary with career connections

**Test Results:**
- All 5 test functions passing (100% success rate)
- Complete benchmarking workflow validated
- Statistical validation working correctly
- Professional reporting generating capstone-ready outputs
- Framework ready for student use

**Capstone Preparation:**
- Students can now systematically evaluate their final projects
- Professional reporting suitable for academic presentations
- Statistical validation ensures meaningful results
- Industry-standard methodology following MLPerf patterns

🎓 **Perfect Bridge to Module 13 (MLOps):**
- Benchmarking establishes performance baselines
- MLOps will monitor production systems against these baselines
- Statistical validation transfers to production monitoring
- Professional reporting becomes production dashboards
2025-07-14 16:00:18 -04:00
Vijay Janapa Reddi
2849677fd8 🔥 Simplify Kernels Module: Replace Complex Profiler with Simple Timing
**Pedagogical Improvements:**
- Removed complex SimpleProfiler dependency
- Added simple time_kernel() function using time.perf_counter()
- Displays timing in microseconds (realistic for kernel operations)
- Focused learning on kernel optimization vs profiling complexity
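(A helper along these lines could look like the sketch below. The `time_kernel` name and the use of `time.perf_counter()` come from this commit; the signature, the repeat loop, and reporting the minimum are assumptions about the implementation.)

```python
import time

def time_kernel(fn, *args, repeats=100):
    """Time a kernel by running it `repeats` times and returning the
    best (minimum) wall-clock duration in microseconds.
    Sketch only -- the real TinyTorch helper may differ."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best * 1e6  # convert seconds to microseconds

# Example: time a pure-Python reduction as a stand-in for a kernel
print(f"sum(range(10_000)): {time_kernel(sum, range(10_000)):.1f} us")
```

Reporting the minimum rather than the mean is a common choice for micro-benchmarks, since the fastest run is least affected by scheduler noise.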

**Clean Learning Progression:**
- Module 11 (Kernels): Simple timing - 'Can I make this faster?'
- Module 12 (Benchmarking): Professional profiling - 'How do I measure systematically?'
- Module 13 (MLOps): Production monitoring - 'How do I track in production?'

**Implementation Details:**
- Fixed imports to use matmul_naive from TinyTorch layers
- Simplified baseline implementation using NumPy dot product
- Reduced cognitive load by removing measurement complexity
- Maintained all kernel optimization concepts

⚠️ **Note:** Cache-friendly implementation needs debugging but core timing functionality works

🎯 **Impact:** Students can now focus on building optimized kernels with immediate microsecond-level performance feedback, setting up perfect progression to comprehensive benchmarking in Module 12.
2025-07-14 14:51:28 -04:00