Updates demo implementations across modules and enhances progressive test configuration for better educational flow.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Each module now includes a self-contained demo function that:
- Uses the 🎯 emoji for consistency with MODULE SUMMARY
- Explains what was built and why it matters
- Provides a quick, visual demonstration
- Runs automatically after test_module() in __main__
Format: demo_[module_name]() with markdown explanation before it.
All demos are self-contained with no cross-module imports.
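The demo pattern described above might look like this minimal sketch (the module name, wording, and return value are illustrative, not the actual TinyTorch code):

```python
def demo_tensor():
    """🎯 MODULE DEMO: a quick, visual proof of what this module built."""
    print("🎯 Tensor Demo: element-wise math without any external imports")
    a = [1.0, 2.0, 3.0]
    b = [4.0, 5.0, 6.0]
    result = [x + y for x, y in zip(a, b)]  # self-contained: no cross-module imports
    print(f"  {a} + {b} = {result}")
    return result

if __name__ == "__main__":
    # test_module() would run first; the demo follows automatically.
    demo_tensor()
```

A markdown cell explaining what was built and why it matters would precede this function in the notebook source.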
- Added create_causal_mask() helper function to src/13_transformers
- Updated tinytorch/__init__.py to import from core.transformer
- Deleted stale tinytorch/models/transformer.py (now in core/)
- Updated TinyTalks to use the new import path
The create_causal_mask function is essential for autoregressive
generation: it ensures each position attends only to itself and earlier tokens.
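A causal mask of this kind is commonly built as a lower-triangular matrix; the sketch below shows one way to do it with NumPy (the actual signature and mask convention in src/13_transformers may differ, e.g. some implementations use additive -inf masks instead of 0/1):

```python
import numpy as np

def create_causal_mask(seq_len):
    # Lower-triangular matrix: position i may attend to positions j <= i.
    # 1.0 = attend, 0.0 = masked (future token), shape (seq_len, seq_len).
    return np.tril(np.ones((seq_len, seq_len), dtype=np.float32))

mask = create_causal_mask(4)
# Row 0 can see only position 0; row 3 can see all four positions.
```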
Removed from src/:
- 4 .ipynb files (auto-generated, belong in modules/)
- autograd_systems_analysis.py (supplementary content without export directives)
- validate_fixes.py (temporary validation script)
Source directory now contains only:
- One .py file per module (01_tensor.py through 20_capstone.py)
- ABOUT.md files (module documentation)
- No temporary or auto-generated files
This ensures src/ is the clean source of truth for all 20 modules.
Refines diagrams across multiple modules for readability and a
consistent visual style, improving the clarity of the documentation
and in-code explanations.
- Update paper/paper.tex to reflect Module 20 submission infrastructure
- Add nbdev export integration to paper build system section
- Integrate community submission workflow into paper
- Enhance Module 20 with ~4,500 words of pedagogical content
- Add 15+ ASCII diagrams for visual learning
- Include comprehensive benchmarking foundations
- Add module summary celebrating 20-module journey
- Complete pre-release review (96/100 - ready for release)
Module 20 now demonstrates the complete benchmarking workflow:
- SimpleMLP toy model for demonstration (no milestone dependencies)
- BenchmarkReport class for measuring performance metrics
- generate_submission() function for creating JSON submissions
- Complete example workflow students can modify
- All tests pass
This launch-ready module shows students how to:
1. Benchmark a model using Module 19 tools
2. Generate standardized JSON submissions
3. Share results with the TinyTorch community
Exports to: tinytorch.capstone
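A standardized JSON submission generator like the one named above could be sketched as follows; the field names and signature here are assumptions for illustration, not the actual Module 20 schema:

```python
import json

def generate_submission(model_name, metrics, author="anonymous"):
    # Hypothetical payload shape: the real Module 20 schema may add
    # fields such as hardware info or a benchmark-suite version.
    payload = {
        "model": model_name,
        "author": author,
        "metrics": metrics,  # e.g. latency_ms, accuracy, memory_mb
        "framework": "tinytorch",
    }
    return json.dumps(payload, indent=2, sort_keys=True)

submission = generate_submission("SimpleMLP", {"latency_ms": 3.2, "accuracy": 0.91})
```

Students could then modify the metrics dict with numbers produced by the Module 19 benchmarking tools before sharing the JSON.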
Restored the original competition-focused Module 20 from git history.
The previous TinyGPT-focused version was replaced with the intended
competition and submission generation module.
Original Module 20 purpose:
- TinyTorch Olympics competition framework
- Uses benchmarking harness from Module 19
- Generates MLPerf-style JSON submissions
- Olympic events: Latency Sprint, Memory Challenge, Accuracy Contest, etc.
- Exports to tinytorch.competition.submit
Fixed imports to match current Module 19:
- Changed from BenchmarkResult to Benchmark, BenchmarkSuite, TinyMLPerf
- Added missing time import
Note: Module still needs additional fixes to pass tests (validation logic).
This commit restores the correct architectural direction for Module 20.
Module 20 (Capstone) had an unused matplotlib.pyplot import that was
causing tests to fail when matplotlib wasn't installed.
The import was a leftover from early development but matplotlib is
never actually used in the module (no plt.* calls anywhere).
Module 20 is a capstone integration module that:
- Imports and integrates all 19 previous TinyTorch modules
- Exports TinyGPT, TinyGPTTrainer, and CompleteTinyGPTPipeline
- Demonstrates the complete framework working together
- Should have zero external dependencies beyond numpy
Removing this dependency ensures Module 20 can run in minimal
environments with only numpy and the TinyTorch modules.
Module 09's main block was calling analyze_convolution_complexity() and
analyze_pooling_effects() before test_module(). These analysis functions
are educational demonstrations that:
- Run computational benchmarks with timing
- Test multiple configurations for performance analysis
- Take significant time to execute
During 'tito module test', we only want to run test_module() to verify
correctness, not performance benchmarks. This cuts Module 09 test time
from over 30 seconds to roughly 12 seconds.
Analysis functions remain in the module for educational purposes but
are not exported and not called during standard testing.
All other modules (01-20) already follow this pattern correctly.
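The pattern the other modules follow might look like this sketch (function names echo the commit, but the bodies are placeholders):

```python
def test_module():
    # Fast correctness checks only: what 'tito module test' should run.
    assert 2 + 2 == 4
    return "tests passed"

def analyze_convolution_complexity():
    # Slow educational benchmark: kept in the module for learners,
    # but deliberately NOT called from __main__ below.
    ...

if __name__ == "__main__":
    test_module()  # analysis functions remain opt-in, never auto-run
```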
- Enhanced CIFAR-10 CNN with BatchNorm2d for stable training
- Added RandomHorizontalFlip and RandomCrop augmentation transforms
- Improved training accuracy from roughly 65% to over 70% with the modernized architecture
- Updated demo tapes with opening comments for clarity
- Regenerated welcome GIF, removed outdated demo GIFs
- Silent return when autograd is already enabled
- Cleaner REPL experience without redundant warnings
- First import still shows helpful ✅ message
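The idempotent behavior above can be sketched with a module-level flag; the actual tinytorch function may not return a value, so the booleans here are purely for illustration:

```python
_AUTOGRAD_ENABLED = False

def enable_autograd():
    """First call prints a helpful message; repeat calls return silently."""
    global _AUTOGRAD_ENABLED
    if _AUTOGRAD_ENABLED:
        return False          # already on: stay quiet, no redundant warning
    _AUTOGRAD_ENABLED = True
    print("✅ Autograd enabled")
    return True

first = enable_autograd()    # prints the ✅ message
second = enable_autograd()   # silent no-op
```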
- Remove hardcoded command list in welcome screen
- Dynamically build help from self.commands registry
- Categorize commands: Essential, Student Workflow, Community, Developer, Shortcuts
- Ensures welcome screen always shows only registered commands
- No more stale command references
Benefits:
- Single source of truth (commands registry)
- Adding/removing commands automatically updates welcome
- Clear categorization for different user roles
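Building the welcome screen from the registry might look like the sketch below; the registry shape (name mapped to category and help text) is an assumption about tito's internals:

```python
commands = {
    "test":   ("Essential", "Run a module's tests"),
    "export": ("Essential", "Export notebooks to the package"),
    "submit": ("Community", "Share benchmark results"),
}

def build_welcome(registry):
    # Group registered commands by category so the welcome screen can
    # never drift out of sync with what is actually available.
    by_category = {}
    for name, (category, help_text) in sorted(registry.items()):
        by_category.setdefault(category, []).append(f"  {name:<8} {help_text}")
    out = []
    for category in sorted(by_category):
        out.append(f"{category}:")
        out.extend(by_category[category])
    return "\n".join(out)

print(build_welcome(commands))
```

Adding or removing an entry in `commands` automatically updates the printed help, which is the single-source-of-truth benefit described above.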
- Add quiet=False parameter to enable_autograd()
- Suppress print statements when quiet=True
- Check TINYTORCH_QUIET env var on module import
- Allows CLI tools to import tinytorch silently
- Students still see helpful messages in notebooks
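The quiet-mode logic could be sketched like this (a hypothetical simplification: the real enable_autograd does more than print, and may not return anything):

```python
import os

def enable_autograd(quiet=False):
    # Suppress chatter when quiet=True or when TINYTORCH_QUIET is set,
    # so CLI tools can import tinytorch silently.
    quiet = quiet or bool(os.environ.get("TINYTORCH_QUIET"))
    if not quiet:
        print("✅ Autograd enabled")
    return not quiet  # True if the helpful message was shown

os.environ["TINYTORCH_QUIET"] = "1"
shown = enable_autograd()            # env var wins: silent
os.environ.pop("TINYTORCH_QUIET")
shown_again = enable_autograd()      # notebook default: message prints
```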
## Symlink Updates (modules/ → src/)
- Update all 20 site/modules/*_ABOUT.md symlinks to point to src/
- Update all 20 src/*/ABOUT.md internal references
## Infrastructure Changes
- Remove bin/ directory scripts (moved to scripts/ in previous commit)
- Update .envrc: Reference new scripts/ directory structure
- Update pyproject.toml: Reflect src/ as primary source location
- Update docs/development/MODULE_ABOUT_TEMPLATE.md: src/ paths
- Update site/requirements.txt: Documentation dependencies
## Restructuring Complete
The repository now has clean separation:
- `src/`: Developer source code (graded notebooks with solutions)
- `modules/`: Student workspace (generated from src/)
- `scripts/`: Build and utility scripts
- `site/`: Documentation and Jupyter Book website
This enables the intended workflow:
1. Developers work in src/
2. Students receive generated notebooks in modules/
3. Both can coexist without conflicts
Major directory restructure to support both developer and learner workflows:
Structure Changes:
- NEW: src/ directory for Python source files (version controlled)
- Files renamed: tensor.py → 01_tensor.py (matches directory naming)
- All 20 modules moved from modules/ to src/
- CHANGED: modules/ now holds generated notebooks (gitignored)
- Generated from src/*.py using jupytext
- Learners work in notebooks, developers work in Python source
- UNCHANGED: tinytorch/ package (still auto-generated from notebooks)
Workflow: src/*.py → modules/*.ipynb → tinytorch/*.py
Command Updates:
- Updated export command to read from src/ and generate to modules/
- Export flow: discovers modules in src/, converts to notebooks in modules/, exports to tinytorch/
- All 20 modules tested and working
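The src → modules → tinytorch mapping that export discovers can be sketched with a small planning helper; the tinytorch/core/ target layout is inferred from the package imports mentioned later, and the helper itself is hypothetical:

```python
from pathlib import Path

def export_plan(src_files):
    # Map each src/*.py module to its generated notebook and package
    # target, mirroring the src -> modules -> tinytorch flow.
    plan = []
    for src in src_files:
        stem = Path(src).stem            # e.g. "01_tensor"
        name = stem.split("_", 1)[1]     # strip the numeric prefix
        plan.append({
            "source": src,
            "notebook": f"modules/{stem}.ipynb",
            "package": f"tinytorch/core/{name}.py",
        })
    return plan

plan = export_plan(["src/01_tensor.py", "src/13_transformers.py"])
```

The actual conversion step (src/*.py to modules/*.ipynb) is done with jupytext, as noted above.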
Configuration:
- Updated .gitignore to ignore modules/ directory
- Updated README.md with new three-layer architecture explanation
- Updated export.py source mappings and paths
Benefits:
- Clean separation: developers edit Python, learners use notebooks
- Better version control: only Python source committed, notebooks generated
- Flexible learning: can work in notebooks OR Python source
- Maintains backward compatibility: tinytorch package unchanged
Tested:
- Single module export: tito export 01_tensor ✅
- All modules export: tito export --all ✅
- Package imports: from tinytorch.core.tensor import Tensor ✅
- 20/20 modules successfully converted and exported