
📚 Additional Learning Resources

Complement your TinyTorch journey with these carefully selected resources.

TinyTorch teaches you to build complete ML systems from scratch; the resources below add broader context, alternative perspectives, and production tooling.


🎓 Academic Courses

Machine Learning Systems

Deep Learning Foundations


Systems & Engineering

Implementation & Theory

  • Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville
    Mathematical foundations - the theory behind what you implement in TinyTorch

  • Hands-On Machine Learning by Aurélien Géron
    Practical implementations using established frameworks


🛠️ Alternative Implementations

Different approaches to building ML systems from scratch - see how others tackle the same challenge:

Minimal Frameworks

  • Micrograd by Andrej Karpathy
    Minimal autograd engine in roughly 100 lines. Micrograd shows you the math, TinyTorch shows you the systems (see the sketch after this list).

  • Tinygrad by George Hotz
    Performance-focused educational framework. Tinygrad optimizes for speed, TinyTorch optimizes for learning.

  • Neural Networks from Scratch by Harrison Kinsley and Daniel Kukieła
    Math-heavy implementation approach. NNFS focuses on algorithms, TinyTorch focuses on complete systems engineering.
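
If you want a concrete feel for the contrast before diving in, here is a minimal sketch of the scalar autograd idea, in the spirit of Micrograd rather than a copy of Karpathy's code: each value remembers how it was produced, so calling backward() walks the expression graph and accumulates gradients.

```python
# Minimal scalar autograd sketch (illustrative; not Micrograd's actual source).
class Value:
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None   # how this node pushes gradients to its inputs
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad        # d(a+b)/da = 1
            other.grad += out.grad       # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad   # d(a*b)/da = b
            other.grad += self.data * out.grad   # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # Topological order ensures a node's gradient is complete before it
        # pushes gradients to its inputs.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

# y = a * b + a  =>  dy/da = b + 1 = 4,  dy/db = a = 2
a, b = Value(2.0), Value(3.0)
y = a * b + a
y.backward()
print(a.grad, b.grad)  # 4.0 2.0
```

The entire engine fits on a page; the systems work TinyTorch emphasizes is everything around it - tensors, layers, optimizers, data loading, and the glue that makes them compose.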


🏭 Production Tools & Platforms

Framework Deep Dives

Development Tools

  • Papers With Code
    Research papers with implementation code - apply your skills to reproduce results

  • Weights & Biases
    Experiment tracking and model management - scale your TinyTorch training (a minimal logging sketch follows just below)
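
As a quick illustration, here is a minimal sketch of experiment tracking with the wandb Python client. The init/log/finish calls are the library's standard API; the project name and the placeholder metrics are hypothetical stand-ins for whatever your TinyTorch training loop actually computes.

```python
import random

import wandb  # pip install wandb

# Hypothetical project name and config; swap in your real hyperparameters.
run = wandb.init(project="tinytorch-experiments", config={"lr": 0.01, "epochs": 5})

for epoch in range(run.config["epochs"]):
    # Placeholder numbers standing in for real TinyTorch loss/accuracy values.
    train_loss = 1.0 / (epoch + 1) + random.random() * 0.05
    val_acc = min(0.95, 0.5 + 0.1 * epoch)
    wandb.log({"epoch": epoch, "train_loss": train_loss, "val_acc": val_acc})

run.finish()
```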


🌐 Learning Communities

Technical Discussion

  • r/MachineLearning
    Research discussions and paper releases

  • The Gradient
    Deep technical articles on ML research and systems

  • Distill.pub
    Interactive explanations of ML concepts with beautiful visualizations


🎯 Next Steps After TinyTorch

Apply Your Skills

  1. Reproduce Research: Use your TinyTorch foundation to implement papers from scratch
  2. Contribute to Open Source: PyTorch, TensorFlow, JAX - you now understand the internals
  3. Build Production Systems: Apply MLOps principles from your final modules
  4. Optimize for Edge: Use compression and kernel techniques for deployment

Advanced Specializations

  • Distributed Training: Scale your framework knowledge to multi-GPU systems
  • Compiler Design: Build domain-specific languages for ML (JAX, Triton style)
  • Hardware Acceleration: Custom kernels and specialized processors
  • Systems Research: Novel architectures and training techniques

💡 How to Use These Resources

**Parallel Learning**: Use these alongside TinyTorch modules for broader context

**Post-TinyTorch**: After completing the framework, dive into production systems

**Compare & Contrast**: Study alternative implementations to understand design trade-offs

Remember: You now have the implementation foundation that most ML engineers lack. These resources help you apply that knowledge to broader systems and production environments.


Building ML systems from scratch gives you superpowers. These resources help you use them wisely. 🚀