- Added ## 🔧 DEVELOPMENT section before Step 1 where development begins
- Added ## 🤖 AUTO TESTING section before nbgrader block
- Updated to ## 🎯 MODULE SUMMARY: Hardware-Optimized Operations
Improves notebook organization without changing any code logic or content.
- Added ## 🔧 DEVELOPMENT section before Step 1 where development begins
- Added ## 🤖 AUTO TESTING section before nbgrader block
- Updated to ## 🎯 MODULE SUMMARY: Model Compression
Improves notebook organization without changing any code logic or content.
- Added ## 🔧 DEVELOPMENT section before Step 1 where development begins
- Added ## 🤖 AUTO TESTING section before nbgrader block
- Updated to ## 🎯 MODULE SUMMARY: Neural Network Training
Improves notebook organization without changing any code logic or content.
- Added ## 🔧 DEVELOPMENT section before Step 1 where development begins
- Added ## 🤖 AUTO TESTING section before nbgrader block
- Updated to ## 🎯 MODULE SUMMARY: Optimization Algorithms
Improves notebook organization without changing any code logic or content.
- Added ## 🔧 DEVELOPMENT section before Step 1 where development begins
- Added ## 🤖 AUTO TESTING section before nbgrader block
- Updated to ## 🎯 MODULE SUMMARY: Automatic Differentiation
Improves notebook organization without changing any code logic or content.
- Added ## 🔧 DEVELOPMENT section before Step 1 where development begins
- Added ## 🤖 AUTO TESTING section before auto testing block
- Updated to ## 🎯 MODULE SUMMARY: Data Loading Systems
Improves notebook organization without changing any code logic or content.
- Added ## 🔧 DEVELOPMENT section before Step 1 where development begins
- Added ## 🤖 AUTO TESTING section before auto testing block
- Updated to ## 🎯 MODULE SUMMARY: Attention Mechanisms
Improves notebook organization without changing any code logic or content.
- Added ## 🔧 DEVELOPMENT section before Step 1 where development begins
- Added ## 🤖 AUTO TESTING section before auto testing block
- Updated to ## 🎯 MODULE SUMMARY: Convolutional Networks
Improves notebook organization without changing any code logic or content.
- Added ## 🔧 DEVELOPMENT section before Step 1 where development begins
- Added ## 🤖 AUTO TESTING section before auto testing block
- Updated to ## 🎯 MODULE SUMMARY: Neural Network Architectures
Improves notebook organization without changing any code logic or content.
- Added ## 🔧 DEVELOPMENT section before Step 1 where development begins
- Added ## 🤖 AUTO TESTING section before nbgrader block
- Updated to ## 🎯 MODULE SUMMARY: Neural Network Layers
Improves notebook organization without changing any code logic or content.
- Added ## 🔧 DEVELOPMENT section before Step 1 where development begins
- Added ## 🤖 AUTO TESTING section before auto testing block
- Updated to ## 🎯 MODULE SUMMARY: Activation Functions
Improves notebook organization without changing any code logic or content.
- Added ## 🔧 DEVELOPMENT section before Step 1 where development begins
- Added ## 🤖 AUTO TESTING section before nbgrader block
- Updated to ## 🎯 MODULE SUMMARY: Tensor Foundation
Improves notebook organization without changing any code logic or content.
- Moved ## 🔧 DEVELOPMENT to proper location at start of Step 2 where actual development begins
- Removed misplaced header from test function area
- Headers now correctly organize: Development → Auto Testing → Module Summary
- Added ## 🔧 DEVELOPMENT section before test functions
- Added ## 🤖 AUTO TESTING section before nbgrader block
- Updated to ## 🎯 MODULE SUMMARY: Setup Configuration
Improves notebook organization without changing any code logic or content.
CORRECTED PATTERN NOW:
1. ✅ Integration test (test_module_kernel_sequential_model) - BEFORE ## 🧪 Module Testing
2. ✅ ## 🧪 Module Testing (markdown section)
3. ✅ STANDARDIZED MODULE TESTING (nbgrader cell)
4. ✅ if __name__ == '__main__' block with run_module_tests_auto
5. ✅ ## 🎯 Module Summary (immediately after, no code between)
FIXES APPLIED:
✅ Moved integration test function from AFTER testing section to BEFORE it
✅ Removed duplicate integration test function and markdown section
✅ Added integration test to the if __name__ == '__main__' block
✅ Clean STANDARDIZED MODULE TESTING structure
Module 13_kernels now follows the exact pattern
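The five-step layout above can be sketched in code. Only the names `test_module_kernel_sequential_model` and `run_module_tests_auto` come from the pattern description; the function bodies and the stand-in implementation of `run_module_tests_auto` are illustrative assumptions, not the real TinyTorch framework code.

```python
# Illustrative sketch of the corrected module layout described above.
# `run_module_tests_auto` is a real TinyTorch helper, but this stand-in
# body and its signature are assumptions for the sake of the example.

def test_module_kernel_sequential_model():
    """Integration test -- defined BEFORE the '## Module Testing' section."""
    assert callable(run_module_tests_auto)  # placeholder assertion

# --- ## 🧪 Module Testing (markdown cell) ---
# --- STANDARDIZED MODULE TESTING (locked nbgrader cell) ---

def run_module_tests_auto(module_name):
    """Stand-in: discover and run every test_* function in this module."""
    tests = [fn for name, fn in sorted(globals().items())
             if name.startswith("test_") and callable(fn)]
    for test in tests:
        test()
    print(f"{module_name}: {len(tests)} tests passed")

if __name__ == "__main__":
    run_module_tests_auto("13_kernels")

# --- ## 🎯 Module Summary follows immediately; no code in between ---
```

The key property is that the integration test is defined before the locked testing cell, so auto-discovery finds it, and nothing executes between `run_module_tests_auto` and the summary.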
CORRECTED PATTERN NOW:
1. ✅ Integration tests (test_compression_integration, test_comprehensive_compression_integration) - BEFORE ## 🧪 Module Testing
2. ✅ ## 🧪 Module Testing (markdown section)
3. ✅ STANDARDIZED MODULE TESTING (nbgrader cell)
4. ✅ if __name__ == '__main__' block with run_module_tests_auto
5. ✅ ## 🎯 Module Summary (immediately after, no code between)
FIXES APPLIED:
✅ Moved both integration test functions from AFTER testing section to BEFORE it
✅ Removed duplicate integration test functions and markdown sections
✅ Cleaned up multiple run_module_tests_auto calls - now only one clean call
✅ Proper STANDARDIZED MODULE TESTING structure
Module 12_compression now follows the exact pattern
CORRECTED PATTERN NOW:
1. ✅ Integration test (test_module_optimizer_autograd_compatibility) - BEFORE ## 🧪 Module Testing
2. ✅ ## 🧪 Module Testing (markdown section)
3. ✅ STANDARDIZED MODULE TESTING (nbgrader cell with proper structure)
4. ✅ if __name__ == '__main__' block with run_module_tests_auto
5. ✅ ## 🎯 Module Summary (immediately after, no code between)
FIXES APPLIED:
✅ Moved integration test function from AFTER testing section to BEFORE it
✅ Removed duplicate integration test function
✅ Clean STANDARDIZED MODULE TESTING structure with proper nbgrader cell
✅ No extra code between run_module_tests_auto and Module Summary
Module 10_optimizers now follows the EXACT pattern the user specified
Module 02_tensor now follows the correct pattern learned from layers_dev:
1. ## 🧪 Module Testing (explanation)
2. Standardized testing cell with run_module_tests_auto
3. Actual test functions (test_unit_tensor_creation, test_unit_tensor_properties, test_unit_tensor_arithmetic)
4. ## 🎯 Module Summary
✅ Moved test functions from end of file to proper location after standardized testing
✅ Removed duplicate test functions
✅ Students now see actual test implementations before the summary
✅ run_module_tests_auto will auto-discover and run all tests
Cleaned up duplicate/redundant nbgrader cells that were just comments referencing test functions. The actual test functions remain in their proper location after the standardized testing section.
Removed:
- Duplicate test-personal-info nbgrader cell (just a comment)
- Duplicate test-system-info nbgrader cell (just a comment)
- Redundant 'Inline Test Functions' section
This eliminates confusion and follows the clean pattern established by other modules.
Module 01_setup now follows correct pattern:
1. ## 🧪 Module Testing (explanation)
2. Standardized testing cell with run_module_tests_auto
3. Actual test functions (test_unit_personal_info_basic, test_unit_system_info_basic)
4. ## 🎯 Module Summary
This ensures students see actual test implementations before the summary.
Ensures a consistent testing framework across all TinyTorch modules:
✅ Added standardized testing sections to modules that were missing them:
- 01_setup: Added complete testing section + module summary
- 02_tensor: Added testing section + comprehensive module summary
- 15_mlops: Standardized existing testing section to match convention
✅ All modules now follow the consistent pattern:
1. ## 🧪 Module Testing (markdown explanation)
2. Locked nbgrader cell with standardized-testing ID
3. run_module_tests_auto call to discover and run all tests
4. ## 🎯 Module Summary (educational wrap-up)
✅ Benefits:
- Consistent testing experience across all 16 modules
- Automatic test discovery and execution before module completion
- Standardized educational flow: learn → implement → test → reflect
- Professional testing practices with locked testing framework
✅ Verification: All 16 modules now have both:
- '## 🧪 Module Testing' section ✓
- 'run_module_tests_auto' call ✓
This ensures students always verify their implementations work correctly
before moving to the next module, following TinyTorch's educational philosophy.
- Remove loose test code from nbgrader cells that ran automatically on import
- Keep only proper test_unit_personal_info_basic() and test_unit_system_info_basic() functions
- Prevents tests from running when module is imported as package
- Follows established test naming conventions (test_unit_*)
- Improves module reliability and reduces side effects
Fixed issues:
- NBGrader cells now reference test functions instead of running test code directly
- All assertions and test logic properly contained in named test functions
- Module can be imported without automatically executing tests
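The fix described above amounts to wrapping the loose assertions in named `test_unit_*` functions and guarding execution, so importing the module has no side effects. The function names come from the commit; the bodies and the assumed data shapes are illustrative.

```python
# Before: assertions sat loose in an nbgrader cell and ran on import.
# After: the same checks live in named test_unit_* functions, so importing
# the module is side-effect free. Bodies here are illustrative placeholders.

def test_unit_personal_info_basic():
    info = {"name": "Ada", "email": "ada@example.com"}  # assumed shape
    assert info["name"], "name must be non-empty"
    assert "@" in info["email"], "email must look like an address"

def test_unit_system_info_basic():
    import sys
    assert sys.version_info >= (3, 8), "assumes a modern Python"

if __name__ == "__main__":
    # Tests run only when executed directly, never on `import`.
    test_unit_personal_info_basic()
    test_unit_system_info_basic()
    print("setup tests passed")
```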
- Delete all 15 .ipynb files from modules/source directories
- Align with TinyTorch's Python-first development philosophy
- .py files are the source of truth, .ipynb files are temporary outputs
- Prevents version control conflicts with notebook metadata
- Students work directly with .py files using Jupytext format
- Notebooks can be regenerated when needed via 'tito nbdev generate'
Removed files:
- All *_dev.ipynb files across modules 01-15
- Keeps repository clean and focused on source code
- Replace hardcoded module names array with dynamic reading from module.yaml files
- Add get_module_names() function to read actual module structure
- Fix IndexError in get_prev_module_name() and get_next_module_name() functions
- Update navigation logic to use actual module count instead of hardcoded assumptions
- Successfully converts all 16 modules to chapters with proper navigation
- Book build now completes without errors
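The dynamic-discovery fix can be sketched as follows. The `modules/source/<NN_name>/module.yaml` layout is inferred from the commit message; only the three function names are taken from it, and the bounds checks show the kind of guard that removes the `IndexError`.

```python
from pathlib import Path

# Sketch of the dynamic module discovery described above. The directory
# layout (modules/source/<NN_name>/module.yaml) is an assumption; the
# function names come from the commit message.

def get_module_names(source_dir="modules/source"):
    """Return module directory names that contain a module.yaml, sorted."""
    root = Path(source_dir)
    return sorted(p.parent.name for p in root.glob("*/module.yaml"))

def get_prev_module_name(modules, current):
    """Previous module, or None at the start (avoids the old IndexError)."""
    i = modules.index(current)
    return modules[i - 1] if i > 0 else None

def get_next_module_name(modules, current):
    """Next module, or None at the end (avoids the old IndexError)."""
    i = modules.index(current)
    return modules[i + 1] if i < len(modules) - 1 else None
```

Because the list is read from disk, adding or renumbering a module updates prev/next navigation without touching the build script.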
- Remove generic learning communities section
- Remove vague 'next steps' career advice
- Remove fluffy usage instructions
- Keep focused: academic courses, books, alternative implementations, production internals
- Result: curated reference for students who built ML systems from scratch
✂️ Reduced MLOps Focus:
- Renamed 'MLOps & Production' → 'Development Tools'
- Removed redundant 'MLOps Community' link
- Focuses on practical development tools instead
🎯 Made Framework Differentiations Distinct:
- Micrograd: 'shows you the math, TinyTorch shows you the systems'
- Tinygrad: 'optimizes for speed, TinyTorch optimizes for learning'
- NNFS: 'focuses on algorithms, TinyTorch focuses on complete systems engineering'
💡 Benefits:
- Each differentiation now highlights specific strengths vs repetitive vehicle analogy
- Less MLOps emphasis (appears in course already)
- More concise and memorable comparisons
Result: Cleaner resource organization with unique, specific differentiations
that avoid repetition and over-emphasis on any single topic.
🔥 Major Improvements:
- Removed research papers section (belongs in specific labs as context)
- Added clear differentiation for alternative implementations with vehicle analogy
- Moved ML Systems book to books section with prominent positioning
- Added actual book links (O'Reilly, deeplearningbook.org) where available
- Focused on maintainable, stable resources
🎯 Key Differentiations Added:
- 'Micrograd teaches engine parts, TinyTorch teaches you to design the whole vehicle'
- 'NNFS teaches engine parts, TinyTorch teaches you to build the whole vehicle and drive it'
- 'Tinygrad optimizes for speed, TinyTorch optimizes for learning systems thinking'
🏭 Production Focus:
- Added industrial tools: W&B, MLOps Community, Papers with Code
- Reorganized into: Courses, Books, Alternative Implementations, Production Tools
- Removed quickly-outdated content, kept stable educational resources
📖 ML Systems Book Positioning:
- Moved Vijay's book from courses to books section
- Positioned as 'the perfect companion to TinyTorch'
- Added proper book links for maintainability
Result: Much more focused, maintainable resource page that complements
TinyTorch without duplicating content that belongs in specific labs.
🎓 Course Additions:
- Added CS 249r: Tiny Machine Learning (Harvard) to course list
- Covers TinyML systems, edge AI, and resource-constrained machine learning
- Complements existing MIT TinyML course with Harvard perspective
📖 Section Naming Fix:
- Changed 'Essential Books' → 'Recommended Books'
- Avoids prescriptive language and duplication issues
- More inclusive and less hierarchical phrasing
🔄 Organization Benefits:
- Eliminates potential confusion with ML Systems book already in courses
- Creates clearer separation between course materials and supplementary books
- Better reflects that these are helpful additions, not requirements
Result: More thoughtful resource organization with key Harvard tinyML
course addition and improved section naming.
🔧 Title Configuration Fix:
- Changed book/_config.yml title from long form to simple 'Tiny🔥Torch'
- Eliminates duplicate title in browser tab (was showing 'Tiny🔥Torch — Tiny🔥Torch')
- Now Chrome tab displays clean 'Tiny🔥Torch' once
Result: Clean, professional browser tab title without duplication.
🔄 Chapter File Reorganization:
- Renamed 05-networks.md → 05-dense.md
- Renamed 06-cnn.md → 06-spatial.md
- Created 07-attention.md with transformer-focused content
- Renumbered all subsequent chapters (7→8, 8→9, 9→10, etc.)
- Updated final module: 15-capstone.md → 16-capstone.md
📚 Attention Chapter Content:
- Added comprehensive attention module introduction
- Covers self-attention, multi-head attention, transformer foundations
- Explains Query-Key-Value mechanism and scaled dot-product attention
- Connects to previous modules (tensors, activations, layers, dense)
- Positions attention as foundation for modern AI (GPT, BERT, ViTs)
✅ Build Verification:
- Jupyter Book builds successfully with no missing file errors
- All 16 chapters now properly indexed in table of contents
- New structure: Foundation (1-3), Building Blocks (4-7), Training (8-11), Inference & Serving (12-16)
Result: Complete alignment between repository structure, book chapters,
and table of contents. Students can now navigate the full 16-module course
with proper attention coverage and updated section organization.
📖 New Resources Page:
- Created book/resources.md with curated external learning materials
- Academic courses: Stanford CS329S, Harvard ML Systems, MIT TinyML
- Essential books: Chip Huyen, Andriy Burkov, Deep Learning textbook
- Framework deep dives: PyTorch/TensorFlow internals and architecture
- Research papers: Autograd, Adam, Attention, TensorFlow/PyTorch papers
- Implementation guides: micrograd, tinygrad, Neural Networks from Scratch
- Communities: MLOps, r/MachineLearning, technical blogs
- Next steps: Post-TinyTorch learning paths and advanced specializations
🔄 Updated Table of Contents:
- Fixed module names: networks → dense, cnn → spatial
- Added 07_attention to Building Blocks section
- Updated all numbering to reflect 16-module structure
- Renamed 'Production & Performance' → 'Inference & Serving'
- Added new 'Additional Resources' section with 📚 Learning Resources
🎯 Educational Value:
- Provides context for TinyTorch implementations
- Bridges from educational framework to production systems
- Offers multiple learning paths for different interests
- Connects TinyTorch concepts to broader ML systems ecosystem
Result: Students now have comprehensive resources to deepen their
understanding and apply TinyTorch knowledge to real-world systems.
📖 Enhanced Visual Design:
- Wrapped entire FAQ content in blockquotes (>) for consistent grey background
- All bullet points, headers, and content now have improved readability
- Code blocks within blockquotes maintain proper formatting
- Consistent visual styling across all 8 FAQ entries
✨ User Experience Benefits:
- Grey background makes content much easier to read when expanded
- Better visual separation from surrounding text
- Professional appearance with improved contrast
- Reduces eye strain and improves content scanning
🎯 Technical Implementation:
- Added > prefix to all content lines within FAQ answers
- Maintained proper markdown formatting for headers, lists, and code
- Preserved existing structure while enhancing visual presentation
Result: FAQ dropdowns now have beautiful, consistent grey styling
that makes expanded content significantly easier to read and scan.
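The `>`-prefixing step can be expressed as a small helper. This function is hypothetical (the commit describes a manual edit, not a script); it shows the transformation: content lines get a `> ` prefix, and blank lines become a bare `>` so Markdown keeps the blockquote in one piece.

```python
# Hypothetical helper illustrating the blockquote wrapping described above:
# every content line gets a '> ' prefix so Markdown renders it with the grey
# blockquote background; blank lines become a bare '>' so the quote block
# is not split into separate pieces.

def blockquote(markdown_text):
    lines = markdown_text.splitlines()
    return "\n".join("> " + line if line.strip() else ">" for line in lines)
```

For example, `blockquote("### Why?\n\nBecause.")` yields `"> ### Why?\n>\n> Because."`, which renders as a single shaded block.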
✨ Title Formatting:
- Split title into main header and subtitle for better readability
- Enhanced visual hierarchy in book introduction
🚀 Content Updates:
- Changed 'rocket ship' to 'AI rocket ship' for more specific branding
- Added '(Harvard)' to Prof. Vijay Janapa Reddi reference for clarity
- Maintains professional attribution while being more informative
Result: Cleaner book intro formatting with improved readability and attribution.