diff --git a/book/intro.md b/book/intro.md
index f4e337eb..14ac6101 100644
--- a/book/intro.md
+++ b/book/intro.md
@@ -201,50 +201,50 @@ Want to see what TinyTorch feels like? **[Launch the Setup chapter](chapters/01-
 ## ❓ **Common Questions**
-⏰ "How much time should I plan for this course?"
+🧮 "Do I need to know advanced math to succeed?"
-**Time investment:** ~40-60 hours for complete framework
+**You need high school math + willingness to learn.** We explain the math as we go:
-**Flexible pacing options:**
-- **Quick exploration:** 1-2 modules to understand the approach
-- **Focused learning:** Core modules (01-08) for solid foundations
-- **Complete mastery:** All 15 modules for full framework expertise
+- **Linear algebra**: We explain matrix multiplication when we build Dense layers
+- **Calculus**: We cover derivatives when implementing backpropagation
+- **Statistics**: We introduce concepts like gradients in the context of optimization
-Each module is self-contained, so you can stop and start as needed.
+**Key insight:** You learn the math by implementing it, not the other way around. Most students find this more intuitive than traditional math courses.
-🤔 "I'm already experienced with ML. Will this be too basic?"
+✅ "How do I know if I'm implementing things correctly?"
-**Quick self-assessment:**
-- Can you implement Adam optimizer from the original paper?
-- Do you know why ReLU causes dying neurons and how to prevent it?
-- Could you debug a mysterious 50% accuracy drop after deployment?
+**Built-in validation at every step:**
+- **200+ automated tests** that check your implementations
+- **Progressive testing**: Start simple, then add complexity
+- **Clear error messages**: "Your ReLU should return 0 for negative inputs"
+- **Expected outputs**: Know exactly what your code should produce
-**Experienced engineers often find TinyTorch fills the "implementation gap"** that most ML education skips - the deep understanding of how frameworks actually work under the hood.
+**Example:** Your tensor multiplication either passes the test or gets specific feedback about what went wrong.
-🔄 "What if I get stuck on a module?"
+🔗 "Can I skip around, or do I need to do the modules in order?"
-**Built-in support system:**
-- **Progressive scaffolding:** Each implementation broken into guided steps
-- **Comprehensive testing:** 200+ tests with educational error messages
-- **Rich documentation:** Visual explanations and debugging tips
-- **Modular design:** Skip ahead or go back without breaking progress
+**Designed for flexibility with recommended paths:**
+- **Core foundation**: Modules 1-4 build on each other (do in order)
+- **Parallel tracks**: Modules 5-8 can be done in different sequences
+- **Advanced modules**: Pick what interests you most
-**Philosophy:** You should feel challenged but never lost.
+**Smart approach:** Do Setup → Tensor → Activations in order, then choose your adventure.
-🚀 "How does this connect to modern architectures like Transformers?"
+🚀 "Will this actually work with real data and real problems?"
-**Transformers use the same foundations you'll build:**
-- **Attention mechanism:** Matrix operations using your tensor implementations
-- **LayerNorm:** Built on your activation and layer components
-- **Training:** Powered by your Adam optimizer and autograd system
+**Yes - we use production datasets from day one:**
+- **CIFAR-10**: Train on 60,000 real images, not toy data
+- **Standard benchmarks**: Compare your results to published papers
+- **Real performance**: Your CNN will achieve 85%+ accuracy on image classification
+- **Production patterns**: CLI tools, testing, packaging like professional frameworks
-**Understanding foundations makes you the engineer who can optimize and extend modern architectures,** not just use them through APIs.
+**This isn't a toy - it's a real framework that handles real problems.**
---
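The validation style the updated FAQ describes ("Your ReLU should return 0 for negative inputs") can be sketched as a plain assertion-based test. This is a minimal illustration, not TinyTorch's actual test suite; the function and test names are hypothetical:

```python
import numpy as np

def relu(x):
    # ReLU: elementwise max(0, x) - negative inputs map to 0
    return np.maximum(0, x)

def test_relu_negative_inputs_return_zero():
    x = np.array([-3.0, -0.5, 0.0, 2.0])
    out = relu(x)
    # Negative inputs must produce exactly 0, with an educational message on failure
    assert (out[:2] == 0).all(), "Your ReLU should return 0 for negative inputs"
    # Non-negative inputs pass through unchanged
    assert out[2] == 0.0 and out[3] == 2.0

test_relu_negative_inputs_return_zero()
```

A failing implementation trips the assertion and surfaces the specific message, which is the "clear error messages" pattern the FAQ answer promises.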