From 2acd428cf62829b1a4763ea062a75d5b61fb0c96 Mon Sep 17 00:00:00 2001 From: Vijay Janapa Reddi Date: Wed, 16 Jul 2025 12:10:37 -0400 Subject: [PATCH] Add focused FAQ to website intro - 4 key questions for students already interested in the course - Focus on practical learning concerns vs skepticism - Shorter than GitHub FAQ - appropriate for committed learners - Covers time investment, skill level, support, modern relevance --- book/intro.md | 51 +++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 51 insertions(+) diff --git a/book/intro.md b/book/intro.md index 92d9e66b..f4e337eb 100644 --- a/book/intro.md +++ b/book/intro.md @@ -198,6 +198,57 @@ Want to see what TinyTorch feels like? **[Launch the Setup chapter](chapters/01- --- +## ❓ **Common Questions** + +
<details>
<summary>⏰ "How much time should I plan for this course?"</summary>

**Time investment:** ~40-60 hours for the complete framework

**Flexible pacing options:**
- **Quick exploration:** 1-2 modules to understand the approach
- **Focused learning:** Core modules (01-08) for solid foundations
- **Complete mastery:** All 15 modules for full framework expertise

Each module is self-contained, so you can stop and start as needed.
</details>
+ +
<details>
<summary>🤔 "I'm already experienced with ML. Will this be too basic?"</summary>

**Quick self-assessment:**
- Can you implement the Adam optimizer from the original paper?
- Do you know why ReLU causes dying neurons and how to prevent it?
- Could you debug a mysterious 50% accuracy drop after deployment?

**Experienced engineers often find TinyTorch fills the "implementation gap"** that most ML education skips: the deep understanding of how frameworks actually work under the hood.
</details>
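As a concrete self-check for the first question above, here is a minimal sketch of the Adam update rule from the original paper in plain NumPy. The function name and signature are illustrative only, not TinyTorch's actual optimizer API:

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; returns the new parameter and moment estimates.

    t is the 1-based step count, needed for bias correction.
    """
    m = beta1 * m + (1 - beta1) * grad        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1 ** t)              # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)              # bias-corrected second moment
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```

If you can explain why the bias correction terms matter on the first few steps (without them, `m` and `v` are heavily biased toward zero), you already have some of the depth this course builds.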
+ +
<details>
<summary>🔄 "What if I get stuck on a module?"</summary>

**Built-in support system:**
- **Progressive scaffolding:** Each implementation is broken into guided steps
- **Comprehensive testing:** 200+ tests with educational error messages
- **Rich documentation:** Visual explanations and debugging tips
- **Modular design:** Skip ahead or go back without breaking progress

**Philosophy:** You should feel challenged but never lost.
</details>
+ +
<details>
<summary>🚀 "How does this connect to modern architectures like Transformers?"</summary>

**Transformers use the same foundations you'll build:**
- **Attention mechanism:** Matrix operations using your tensor implementations
- **LayerNorm:** Built on your activation and layer components
- **Training:** Powered by your Adam optimizer and autograd system

**Understanding the foundations makes you the engineer who can optimize and extend modern architectures,** not just use them through APIs.
</details>
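To make the attention point concrete, here is a minimal sketch of scaled dot-product attention in plain NumPy. In the course you would express the same math with your own tensor implementation; these helper names are illustrative, not part of TinyTorch's API:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)   # pairwise query-key similarities
    return softmax(scores) @ V      # weighted average of the values
```

Every operation here (matrix multiply, max, exp, sum) is one you implement yourself in the tensor modules, which is exactly why the foundations transfer.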
+ +--- + ## 🙏 **Acknowledgments** TinyTorch originated from CS249r: Tiny Machine Learning Systems at Harvard University. We're inspired by projects like [tinygrad](https://github.com/geohot/tinygrad), [micrograd](https://github.com/karpathy/micrograd), and [MiniTorch](https://minitorch.github.io/) that demonstrate the power of minimal implementations.