mirror of
https://github.com/MLSysBook/TinyTorch.git
synced 2026-04-29 22:47:56 -05:00
Restructure site navigation: modules-first, separate capstone, streamline sections
- Reorganized navigation to prioritize modules (Foundation/Architecture/Optimization tiers)
- Separated Capstone Competition from Optimization tier
- Removed Visual Learning Map from Course Orientation (broken mermaid diagrams)
- Streamlined Using TinyTorch section
- Redesigned landing page for professional, student-focused experience
- Reduced emojis in navigation captions
- Fixed build error by excluding modules directory patterns
- Created symlinks for all module ABOUT files in site/modules/
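The symlink step mentioned in the commit message could be scripted along these lines. This is a hypothetical sketch, not the script used in the commit: the paths (`modules/*/ABOUT.md`, `site/modules/`) are inferred from the commit message and the diff below, and the function name is invented.

```python
from pathlib import Path

def link_about_files(repo_root: Path) -> list[Path]:
    """Expose each module's ABOUT.md inside site/modules/ so the site build
    can include it without pulling in the rest of modules/ (assumed layout)."""
    site_modules = repo_root / "site" / "modules"
    site_modules.mkdir(parents=True, exist_ok=True)
    created = []
    for about in sorted((repo_root / "modules").glob("*/ABOUT.md")):
        # e.g. modules/01_tensor/ABOUT.md -> site/modules/01_tensor_ABOUT.md
        link = site_modules / f"{about.parent.name}_ABOUT.md"
        if not link.exists():
            # Relative target: from site/modules/ back up to repo root.
            link.symlink_to(Path("..") / ".." / "modules" / about.parent.name / "ABOUT.md")
        created.append(link)
    return created
```

Relative symlink targets keep the links valid when the repository is cloned to a different path.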
```diff
@@ -26,6 +26,8 @@ exclude_patterns:
   - "**/.venv/**"
   - "**/__pycache__/**"
   - "**/.DS_Store"
+  - "modules/**/*.md"
+  - "!modules/*_ABOUT.md"
 
 # GitHub repository configuration for GitHub Pages
 repository:
```
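The intent of the two new patterns — exclude everything under `modules/`, then re-include the `*_ABOUT.md` entries — can be illustrated with glob matching. This is only an illustration: Jupyter Book's actual matcher may differ in detail, and treating the `!` prefix as a re-include list is an assumption about the config's semantics.

```python
from fnmatch import fnmatch

# Patterns from the config change above; the "!"-prefixed pattern is
# modeled as a re-include that wins over the excludes (assumption).
EXCLUDE = ["**/.venv/**", "**/__pycache__/**", "**/.DS_Store", "modules/**/*.md"]
REINCLUDE = ["modules/*_ABOUT.md"]

def is_excluded(path: str) -> bool:
    if any(fnmatch(path, pat) for pat in REINCLUDE):
        return False
    return any(fnmatch(path, pat) for pat in EXCLUDE)
```

Under this model, `modules/01_tensor/notes.md` is excluded while `modules/01_tensor_ABOUT.md` (one of the symlinks the commit creates) still gets built.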
site/_toc.yml · 133 changed lines
```diff
@@ -6,99 +6,90 @@ root: intro
 title: "TinyTorch Course"
 
 parts:
-  - caption: 🚀 Getting Started
+  - caption: Getting Started
     chapters:
       - file: quickstart-guide
        title: "Quick Start Guide"
+      - file: student-workflow
+        title: "Student Workflow"
       - file: usage-paths/classroom-use
        title: "For Instructors"
 
-  - caption: 🛠️ Using TinyTorch
+  - caption: Foundation Tier (01-07)
     chapters:
-      - file: tito-essentials
-        title: "Essential Commands"
-      - file: student-workflow
-        title: "Student Workflow"
-      - file: learning-progress
-        title: "Track Your Progress"
-      - file: datasets
-        title: "Datasets Guide"
+      - file: modules/01_tensor_ABOUT
+        title: "01. Tensor"
+      - file: modules/02_activations_ABOUT
+        title: "02. Activations"
+      - file: modules/03_layers_ABOUT
+        title: "03. Layers"
+      - file: modules/04_losses_ABOUT
+        title: "04. Losses"
+      - file: modules/05_autograd_ABOUT
+        title: "05. Autograd"
+      - file: modules/06_optimizers_ABOUT
+        title: "06. Optimizers"
+      - file: modules/07_training_ABOUT
+        title: "07. Training"
 
-  - caption: 🧭 Course Orientation
+  - caption: Architecture Tier (08-13)
     chapters:
+      - file: modules/08_dataloader_ABOUT
+        title: "08. DataLoader"
+      - file: modules/09_spatial_ABOUT
+        title: "09. Convolutions"
+      - file: modules/10_tokenization_ABOUT
+        title: "10. Tokenization"
+      - file: modules/11_embeddings_ABOUT
+        title: "11. Embeddings"
+      - file: modules/12_attention_ABOUT
+        title: "12. Attention"
+      - file: modules/13_transformers_ABOUT
+        title: "13. Transformers"
+
+  - caption: Optimization Tier (14-19)
+    chapters:
+      - file: modules/14_profiling_ABOUT
+        title: "14. Profiling"
+      - file: modules/15_quantization_ABOUT
+        title: "15. Quantization"
+      - file: modules/16_compression_ABOUT
+        title: "16. Compression"
+      - file: modules/17_memoization_ABOUT
+        title: "17. Memoization"
+      - file: modules/18_acceleration_ABOUT
+        title: "18. Acceleration"
+      - file: modules/19_benchmarking_ABOUT
+        title: "19. Benchmarking"
+
+  - caption: Capstone Competition
+    chapters:
+      - file: modules/20_capstone_ABOUT
+        title: "20. MLPerf® Edu Competition"
+
+  - caption: Course Orientation
+    chapters:
       - file: chapters/00-introduction
        title: "Course Structure"
       - file: chapters/learning-journey
        title: "Learning Journey"
-      - file: learning-journey-visual
-        title: "Visual Learning Map"
       - file: chapters/milestones
        title: "Historical Milestones"
       - file: faq
        title: "FAQ"
 
-  - caption: 🏗️ Foundation Tier (01-07)
+  - caption: Using TinyTorch
     chapters:
-      - file: ../modules/01_tensor/ABOUT
-        title: "01. Tensor"
-      - file: ../modules/02_activations/ABOUT
-        title: "02. Activations"
-      - file: ../modules/03_layers/ABOUT
-        title: "03. Layers"
-      - file: ../modules/04_losses/ABOUT
-        title: "04. Losses"
-      - file: ../modules/05_autograd/ABOUT
-        title: "05. Autograd"
-      - file: ../modules/06_optimizers/ABOUT
-        title: "06. Optimizers"
-      - file: ../modules/07_training/ABOUT
-        title: "07. Training"
+      - file: tito-essentials
+        title: "Essential Commands"
+      - file: datasets
+        title: "Datasets Guide"
+      - file: testing-framework
+        title: "Testing Guide"
 
-  - caption: 🏛️ Architecture Tier (08-13)
-    chapters:
-      - file: ../modules/08_dataloader/ABOUT
-        title: "08. DataLoader"
-      - file: ../modules/09_spatial/ABOUT
-        title: "09. Convolutions"
-      - file: ../modules/10_tokenization/ABOUT
-        title: "10. Tokenization"
-      - file: ../modules/11_embeddings/ABOUT
-        title: "11. Embeddings"
-      - file: ../modules/12_attention/ABOUT
-        title: "12. Attention"
-      - file: ../modules/13_transformers/ABOUT
-        title: "13. Transformers"
-
-  - caption: ⚡ Optimization Tier (14-20)
-    chapters:
-      - file: ../modules/14_profiling/ABOUT
-        title: "14. Profiling"
-      - file: ../modules/15_quantization/ABOUT
-        title: "15. Quantization"
-      - file: ../modules/16_compression/ABOUT
-        title: "16. Compression"
-      - file: ../modules/17_memoization/ABOUT
-        title: "17. Memoization"
-      - file: ../modules/18_acceleration/ABOUT
-        title: "18. Acceleration"
-      - file: ../modules/19_benchmarking/ABOUT
-        title: "19. Benchmarking"
-
-  - caption: 🏅 Capstone Project
-    chapters:
-      - file: ../modules/20_capstone/ABOUT
-        title: "20. MLPerf® Edu Competition"
 
-  - caption: 🌍 Community
+  - caption: Community
     chapters:
       - file: community
        title: "Ecosystem"
-
-  - caption: 🛠️ Resources & Tools
-    chapters:
-      - file: checkpoint-system
-        title: "Progress Tracking"
-      - file: testing-framework
-        title: "Testing Guide"
-      - file: resources
-        title: "Additional Resources"
```
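With the TOC restructured like this, one quick sanity check is to walk the parts/chapters tree and list every referenced file, e.g. to confirm each target exists before building the site. The sketch below is hypothetical, not part of the commit: it assumes the TOC has been loaded into a Python dict (for instance via PyYAML), and the function name is invented.

```python
def toc_files(toc: dict) -> list[str]:
    """Collect every `file:` target referenced by the TOC parts."""
    files = []
    for part in toc.get("parts", []):
        for chapter in part.get("chapters", []):
            if "file" in chapter:
                files.append(chapter["file"])
    return files

# Miniature TOC mirroring two entries from the diff above:
toc = {
    "root": "intro",
    "parts": [
        {"caption": "Getting Started",
         "chapters": [{"file": "quickstart-guide", "title": "Quick Start Guide"}]},
        {"caption": "Foundation Tier (01-07)",
         "chapters": [{"file": "modules/01_tensor_ABOUT", "title": "01. Tensor"}]},
    ],
}
```

Each collected path could then be checked against the `site/` directory before running the build.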
site/intro.md · 144 changed lines
```diff
@@ -27,10 +27,46 @@ Don't just import it. Build it.
 
 ## What is TinyTorch?
 
-TinyTorch is an educational ML systems course where you **build complete neural networks from scratch**. Instead of blindly using PyTorch or TensorFlow as black boxes, you implement every component yourself—from tensors and gradients to optimizers and attention mechanisms—gaining deep understanding of how modern ML frameworks actually work.
+TinyTorch is an educational ML systems course where you **build complete neural networks from scratch**. Instead of using PyTorch or TensorFlow as black boxes, you implement every component yourself—from tensors and gradients to optimizers and attention mechanisms—gaining deep understanding of how modern ML frameworks actually work.
 
 **Core Learning Approach**: Build → Profile → Optimize. You'll implement each system component, measure its performance characteristics, and understand the engineering trade-offs that shape production ML systems.
 
+## Your Learning Journey
+
+TinyTorch organizes 20 modules through three tiers: **Foundation** (build mathematical infrastructure), **Architecture** (implement modern AI), and **Optimization** (deploy production systems).
+
+**Browse all modules in the sidebar navigation** — organized by tier with clear learning objectives, time estimates, and implementation guides for each module.
+
+### Foundation Tier (Modules 01-07)
+Build the mathematical infrastructure: tensors, activations, layers, losses, autograd, optimizers, and training loops. By the end, you'll train neural networks achieving 95%+ accuracy on MNIST using your own implementations.
+
+### Architecture Tier (Modules 08-13)
+Implement modern AI architectures: data loading, convolutions for vision, tokenization, embeddings, attention, and transformers for language. Achieve 75%+ accuracy on CIFAR-10 with CNNs and generate coherent text with transformers.
+
+### Optimization Tier (Modules 14-19)
+Deploy production systems: profiling, quantization, compression, memoization, acceleration, and benchmarking. Transform research models into production-ready systems.
+
+### Capstone Competition (Module 20)
+Apply all optimizations in the MLPerf® Edu Competition—a standardized benchmark where you optimize models and compete fairly across different hardware platforms.
+
+## Getting Started
+
+Ready to build ML systems from scratch? Here's your path:
+
+**Quick Setup** (15 minutes):
+1. Clone the repository: `git clone https://github.com/mlsysbook/TinyTorch.git`
+2. Run setup: `./setup-environment.sh`
+3. Activate environment: `source activate.sh`
+4. Verify: `tito system doctor`
+
+**Your First Module**:
+1. Start with Module 01 (Tensor) in `modules/source/01_tensor/`
+2. Implement the required functionality
+3. Export: `tito module complete 01`
+4. Validate: Run milestone scripts to prove your implementation works
+
+See the [Quick Start Guide](quickstart-guide.md) for detailed setup instructions and the [Student Workflow](student-workflow.md) for the complete development cycle.
+
 ## The Simple Workflow
 
 TinyTorch follows a simple three-step cycle:
```
````diff
@@ -39,47 +75,11 @@ TinyTorch follows a simple three-step cycle:
 1. Edit modules → 2. Export to package → 3. Validate with milestones
 ```
 
-**📖 See [Student Workflow](student-workflow.html)** for the complete development cycle, best practices, and troubleshooting.
+**Edit**: Work on module source files in `modules/source/XX_name/`
+**Export**: Run `tito module complete XX` to make your code importable
+**Validate**: Run milestone scripts to prove your implementations work
 
-## Three-Tier Learning Pathway
-
-TinyTorch organizes 20 modules through three pedagogically-motivated tiers: **Foundation** (build mathematical infrastructure), **Architecture** (implement modern AI), and **Optimization** (deploy production systems).
-
-**📖 See [Three-Tier Learning Structure](chapters/00-introduction.html#three-tier-learning-pathway-build-complete-ml-systems)** for detailed tier breakdown, module lists, time estimates, and learning outcomes.
-
-## 🗺️ Understanding Your Complete Learning Journey
-
-TinyTorch's 20 modules aren't arbitrary - they tell a carefully crafted story from building mathematical atoms to deploying production AI systems. Each module builds on previous foundations while setting up future capabilities.
-
-<div style="display: grid; grid-template-columns: repeat(3, 1fr); gap: 1.5rem; margin: 2rem 0;">
-
-<div style="background: #f0f9ff; border: 1px solid #7dd3fc; padding: 1.5rem; border-radius: 0.5rem;">
-<h4 style="margin: 0 0 1rem 0; color: #0284c7;">🏗️ Three-Tier Structure</h4>
-<p style="margin: 0; font-size: 0.9rem;">Organized navigation through Foundation → Architecture → Optimization</p>
-<p style="margin: 0.5rem 0 0 0; font-size: 0.85rem;"><a href="chapters/00-introduction.html">View Course Structure →</a></p>
-</div>
-
-<div style="background: #fdf4ff; border: 1px solid #e9d5ff; padding: 1.5rem; border-radius: 0.5rem;">
-<h4 style="margin: 0 0 1rem 0; color: #7c3aed;">📖 Six-Act Narrative</h4>
-<p style="margin: 0; font-size: 0.9rem;">The learning story: Why modules flow from atomic components to intelligence</p>
-<p style="margin: 0.5rem 0 0 0; font-size: 0.85rem;"><a href="chapters/learning-journey.html">Read The Story →</a></p>
-</div>
-
-<div style="background: #fef3c7; border: 1px solid #fde047; padding: 1.5rem; border-radius: 0.5rem;">
-<h4 style="margin: 0 0 1rem 0; color: #a16207;">🏆 Historical Milestones</h4>
-<p style="margin: 0; font-size: 0.9rem;">Prove mastery by recreating ML history with YOUR implementations</p>
-<p style="margin: 0.5rem 0 0 0; font-size: 0.85rem;"><a href="chapters/milestones.html">See Timeline →</a></p>
-</div>
-
-</div>
-
-**New to TinyTorch?** Start with the [Three-Tier Structure](chapters/00-introduction.html) to see what you'll build, then read [The Learning Journey](chapters/learning-journey.html) to understand the pedagogical progression that makes it all click.
-
-## 🏆 Prove Your Mastery Through History
-
-As you complete modules, unlock **historical milestone demonstrations** that prove what you've built works! Each milestone recreates a breakthrough using YOUR implementations—from Rosenblatt's 1957 perceptron to modern transformers and production optimization.
-
-**📖 See [Journey Through ML History](chapters/milestones.html)** for complete timeline, requirements, and expected results.
+
+See [Student Workflow](student-workflow.md) for the complete development cycle, best practices, and troubleshooting.
 
 ## Why Build Instead of Use?
````
```diff
@@ -87,7 +87,7 @@ The difference between using a library and understanding a system is the differe
 
 When you just use PyTorch or TensorFlow, you're stuck when things break—OOM errors, NaN losses, slow training. When you build TinyTorch from scratch, you understand exactly why these issues happen and how to fix them. You know the memory layouts, gradient flows, and performance bottlenecks because you implemented them yourself.
 
-**📖 See [FAQ](faq.html)** for detailed comparisons with PyTorch, TensorFlow, micrograd, and nanoGPT, including code examples and architectural differences.
+See [FAQ](faq.md) for detailed comparisons with PyTorch, TensorFlow, micrograd, and nanoGPT, including code examples and architectural differences.
 
 ## Who Is This For?
```
```diff
@@ -103,59 +103,25 @@ When you just use PyTorch or TensorFlow, you're stuck when things break—OOM er
 
 **ML Practitioners**: "Why does training slow down after epoch 10? How do I debug gradient explosions? When should I use mixed precision?" Even experienced engineers often treat frameworks as black boxes. By understanding the systems underneath, you'll debug faster, optimize better, and make informed architectural decisions.
 
-## How to Choose Your Learning Path
+## Learning Paths
 
-**Three Learning Approaches**: You can **build complete tiers** (implement all 20 modules), **focus on specific tiers** (target your skill gaps), or **explore selectively** (study key concepts). Each tier builds complete, working systems.
-
-<div style="display: grid; grid-template-columns: repeat(2, 1fr); gap: 1.5rem; margin: 3rem 0;">
+**Quick Exploration** (2-4 weeks): Focus on Foundation Tier (Modules 01-07) to understand core ML systems
+**Complete Course** (14-18 weeks): Implement all three tiers for complete ML systems mastery
+**Focused Learning** (4-8 weeks): Pick specific tiers based on your goals
 
-<!-- Top Row -->
-<div style="background: #f8f9fa; border: 1px solid #dee2e6; padding: 2rem; border-radius: 0.5rem; text-align: center;">
-<h3 style="margin: 0 0 1rem 0; font-size: 1.2rem; color: #495057;">🔬 Quick Start</h3>
-<p style="margin: 0 0 1.5rem 0; font-size: 0.95rem; color: #6c757d;">15 minutes setup • Try foundational modules • Hands-on experience</p>
-<a href="quickstart-guide.html" style="display: inline-block; background: #007bff; color: white; padding: 0.75rem 1.5rem; border-radius: 0.25rem; text-decoration: none; font-weight: 500; font-size: 1rem;">Start Building →</a>
-</div>
+## Prove Your Mastery Through History
 
-<div style="background: #f0fff4; border: 1px solid #9ae6b4; padding: 2rem; border-radius: 0.5rem; text-align: center;">
-<h3 style="margin: 0 0 1rem 0; font-size: 1.2rem; color: #495057;">📚 Full Course</h3>
-<p style="margin: 0 0 1.5rem 0; font-size: 0.95rem; color: #6c757d;">8+ weeks study • Complete ML framework • Systems understanding</p>
-<a href="chapters/00-introduction.html" style="display: inline-block; background: #28a745; color: white; padding: 0.75rem 1.5rem; border-radius: 0.25rem; text-decoration: none; font-weight: 500; font-size: 1rem;">Course Overview →</a>
-</div>
+As you complete modules, unlock **historical milestone demonstrations** that prove what you've built works. Each milestone recreates a breakthrough using YOUR implementations—from Rosenblatt's 1957 perceptron to modern transformers and production optimization.
 
-<!-- Bottom Row -->
-<div style="background: #faf5ff; border: 1px solid #b794f6; padding: 2rem; border-radius: 0.5rem; text-align: center;">
-<h3 style="margin: 0 0 1rem 0; font-size: 1.2rem; color: #495057;">🎓 Instructors</h3>
-<p style="margin: 0 0 1.5rem 0; font-size: 0.95rem; color: #6c757d;">Classroom-ready • NBGrader integration (coming soon)</p>
-<a href="usage-paths/classroom-use.html" style="display: inline-block; background: #6f42c1; color: white; padding: 0.75rem 1.5rem; border-radius: 0.25rem; text-decoration: none; font-weight: 500; font-size: 1rem;">Teaching Guide →</a>
-</div>
+See [Historical Milestones](chapters/milestones.md) for complete timeline, requirements, and expected results.
 
-<div style="background: #fff8dc; border: 1px solid #daa520; padding: 2rem; border-radius: 0.5rem; text-align: center;">
-<h3 style="margin: 0 0 1rem 0; font-size: 1.2rem; color: #495057;">📊 Learning Community</h3>
-<p style="margin: 0 0 1.5rem 0; font-size: 0.95rem; color: #6c757d;">Track progress • Join competitions • Student leaderboard</p>
-<a href="leaderboard.html" style="display: inline-block; background: #b8860b; color: white; padding: 0.75rem 1.5rem; border-radius: 0.25rem; text-decoration: none; font-weight: 500; font-size: 1rem;">View Progress →</a>
-</div>
+## Next Steps
 
-</div>
+- **New to TinyTorch**: Start with the [Quick Start Guide](quickstart-guide.md) for immediate hands-on experience
+- **Ready to Commit**: Begin Module 01: Tensor (see sidebar navigation) to start building
+- **Understand the Structure**: Read [Course Structure](chapters/00-introduction.md) for detailed tier breakdown and learning outcomes
+- **Teaching a Course**: Review [Instructor Guide](usage-paths/classroom-use.html) for classroom integration
 
-## Getting Started
-
-Ready to build ML systems from scratch? Here's how to start:
-
-**Quick Setup** (15 minutes):
-1. Clone the repository
-2. Run `./setup-environment.sh`
-3. Start with Module 01 (Tensors)
-4. Export with `tito module complete 01`
-5. Validate by running milestone scripts
-
-**📖 See [Quick Start Guide](quickstart-guide.html)** for detailed setup instructions.
-
-**Understanding the Workflow**:
-- **📖 See [Student Workflow](student-workflow.html)** - The essential edit → export → validate cycle
-- **📖 See [Essential Commands](tito-essentials.html)** - Complete TITO command reference
-- **📖 See [Three-Tier Learning Structure](chapters/00-introduction.html)** - Detailed course structure
-
-**Optional Progress Tracking**:
-- **[Progress Tracking](learning-progress.html)** - Monitor your journey with capability checkpoints (optional)
-
-TinyTorch is more than a course—it's a community of learners building together. Join thousands exploring ML systems from the ground up.
+TinyTorch is more than a course—it's a community of learners building together. Join thousands exploring ML systems from the ground up.
```