Replace FAQ with real student concerns
- Address math anxiety: explain math learning approach
- Address validation fears: highlight testing and feedback
- Address flexibility concerns: explain module dependencies
- Address toy project skepticism: emphasize real data and results
- Focus on actual questions students ask vs generic course info
@@ -201,50 +201,50 @@ Want to see what TinyTorch feels like? **[Launch the Setup chapter](chapters/01-
 
 ## ❓ **Common Questions**
 
 <details>
-<summary><strong>⏰ "How much time should I plan for this course?"</strong></summary>
+<summary><strong>🧮 "Do I need to know advanced math to succeed?"</strong></summary>
 
-**Time investment:** ~40-60 hours for complete framework
+**You need high school math + willingness to learn.** We explain the math as we go:
 
-**Flexible pacing options:**
-- **Quick exploration:** 1-2 modules to understand the approach
-- **Focused learning:** Core modules (01-08) for solid foundations
-- **Complete mastery:** All 15 modules for full framework expertise
+- **Linear algebra**: We explain matrix multiplication when we build Dense layers
+- **Calculus**: We cover derivatives when implementing backpropagation
+- **Statistics**: We introduce concepts like gradients in the context of optimization
 
-Each module is self-contained, so you can stop and start as needed.
+**Key insight:** You learn the math by implementing it, not the other way around. Most students find this more intuitive than traditional math courses.
 
 </details>
 
 <details>
-<summary><strong>🤔 "I'm already experienced with ML. Will this be too basic?"</strong></summary>
+<summary><strong>✅ "How do I know if I'm implementing things correctly?"</strong></summary>
 
-**Quick self-assessment:**
-- Can you implement Adam optimizer from the original paper?
-- Do you know why ReLU causes dying neurons and how to prevent it?
-- Could you debug a mysterious 50% accuracy drop after deployment?
+**Built-in validation at every step:**
+- **200+ automated tests** that check your implementations
+- **Progressive testing**: Start simple, then add complexity
+- **Clear error messages**: "Your ReLU should return 0 for negative inputs"
+- **Expected outputs**: Know exactly what your code should produce
 
-**Experienced engineers often find TinyTorch fills the "implementation gap"** that most ML education skips - the deep understanding of how frameworks actually work under the hood.
+**Example:** Your tensor multiplication either passes the test or gets specific feedback about what went wrong.
 
 </details>
 
 <details>
-<summary><strong>🔄 "What if I get stuck on a module?"</strong></summary>
+<summary><strong>🔗 "Can I skip around or must I do modules in order?"</strong></summary>
 
-**Built-in support system:**
-- **Progressive scaffolding:** Each implementation broken into guided steps
-- **Comprehensive testing:** 200+ tests with educational error messages
-- **Rich documentation:** Visual explanations and debugging tips
-- **Modular design:** Skip ahead or go back without breaking progress
+**Designed for flexibility with recommended paths:**
+- **Core foundation**: Modules 1-4 build on each other (do in order)
+- **Parallel tracks**: Modules 5-8 can be done in different sequences
+- **Advanced modules**: Pick what interests you most
 
-**Philosophy:** You should feel challenged but never lost.
+**Smart approach:** Do Setup → Tensor → Activations in order, then choose your adventure.
 
 </details>
 
 <details>
-<summary><strong>🚀 "How does this connect to modern architectures like Transformers?"</strong></summary>
+<summary><strong>🚀 "Will this actually work with real data and real problems?"</strong></summary>
 
-**Transformers use the same foundations you'll build:**
-- **Attention mechanism:** Matrix operations using your tensor implementations
-- **LayerNorm:** Built on your activation and layer components
-- **Training:** Powered by your Adam optimizer and autograd system
+**Yes - we use production datasets from day one:**
+- **CIFAR-10**: Train on 60,000 real images, not toy data
+- **Standard benchmarks**: Compare your results to published papers
+- **Real performance**: Your CNN will achieve 85%+ accuracy on image classification
+- **Production patterns**: CLI tools, testing, packaging like professional frameworks
 
-**Understanding foundations makes you the engineer who can optimize and extend modern architectures,** not just use them through APIs.
+**This isn't a toy - it's a real framework that handles real problems.**
 
 </details>
 
 ---
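
To ground the new math FAQ: the linear algebra behind a Dense layer really is just a matrix multiply plus a bias. A minimal NumPy sketch of that idea (illustrative only; `dense_forward` and its signature are hypothetical, not TinyTorch's actual API):

```python
import numpy as np

def dense_forward(x, W, b):
    """Hypothetical Dense layer forward pass: y = x @ W + b."""
    # x: (batch, in_features), W: (in_features, out_features), b: (out_features,)
    return x @ W + b

# Map a batch of 2 examples with 3 features each to 4 outputs.
x = np.random.randn(2, 3)
W = np.random.randn(3, 4)
b = np.zeros(4)
y = dense_forward(x, W, b)
assert y.shape == (2, 4)
```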
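Similarly, the "educational error messages" the validation FAQ promises could look roughly like this hypothetical test (a sketch assuming a NumPy-based `relu`; the real TinyTorch tests may differ):

```python
import numpy as np

def relu(x):
    """ReLU activation: max(0, x), applied elementwise."""
    return np.maximum(0, x)

def test_relu_negative_inputs():
    out = relu(np.array([-2.0, -0.5, 0.0, 3.0]))
    # Educational failure message instead of a bare assert:
    assert np.all(out[:2] == 0), (
        "Your ReLU should return 0 for negative inputs, "
        f"but got {out[:2]} for inputs [-2.0, -0.5]"
    )
    assert out[3] == 3.0, "Your ReLU should pass positive inputs through unchanged"

test_relu_negative_inputs()
```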
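And the calculus bullet (derivatives for backpropagation) is usually validated the same way the FAQ describes automated checks: by comparing an analytic gradient against a finite-difference estimate. A self-contained sketch of such a gradient check (hypothetical code, not from the repository):

```python
import numpy as np

def f(x):
    return (x ** 2).sum()   # simple scalar loss

def grad_f(x):
    return 2 * x            # analytic derivative of x^2, elementwise

# Central finite difference: (f(x + h*e_i) - f(x - h*e_i)) / (2h) per coordinate
x, h = np.random.randn(3), 1e-5
numeric = np.array([
    (f(x + h * e) - f(x - h * e)) / (2 * h)
    for e in np.eye(3)
])
assert np.allclose(numeric, grad_f(x), atol=1e-4), "gradient check failed"
```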