# 📚 Additional Learning Resources
Complement your TinyTorch journey with these carefully selected resources.
While TinyTorch teaches you to build complete ML systems from scratch, these resources provide broader context, alternative perspectives, and production tools.
## 🎓 Academic Courses

### Machine Learning Systems

- **CS 329S: Machine Learning Systems Design (Stanford)**
  Production ML systems, infrastructure, and deployment at scale
- **6.S965: TinyML and Efficient Deep Learning (MIT)**
  Edge computing, model compression, and efficient ML algorithms
- **CS 249r: Tiny Machine Learning (Harvard)**
  TinyML systems, edge AI, and resource-constrained machine learning
### Deep Learning Foundations

- **CS 231n: Convolutional Neural Networks (Stanford)**
  Computer vision and CNN architectures - complements the TinyTorch spatial modules
- **CS 224n: Natural Language Processing (Stanford)**
  NLP and transformers - the perfect follow-up to the TinyTorch attention module
## 📖 Recommended Books

### Systems & Engineering

- **Machine Learning Systems** by Prof. Vijay Janapa Reddi (Harvard)
  A comprehensive systems perspective on ML engineering and optimization - the perfect companion to TinyTorch
- **Designing Machine Learning Systems** by Chip Huyen
  Production ML engineering, data pipelines, and system design
- **Machine Learning Engineering** by Andriy Burkov
  End-to-end ML project lifecycle and best practices
### Implementation & Theory

- **Deep Learning** by Ian Goodfellow, Yoshua Bengio, and Aaron Courville
  Mathematical foundations - the theory behind what you implement in TinyTorch
- **Hands-On Machine Learning** by Aurélien Géron
  Practical implementations using established frameworks
## 🛠️ Alternative Implementations

Different approaches to building ML systems from scratch - see how others tackle the same challenge:

### Minimal Frameworks

- **Micrograd** by Andrej Karpathy
  A minimal autograd engine in roughly 100 lines. Micrograd teaches you the engine parts; TinyTorch teaches you to design the whole vehicle and drive it. (A sketch of the core idea follows this list.)
- **Tinygrad** by George Hotz
  A performance-focused educational framework. Tinygrad optimizes for speed; TinyTorch optimizes for learning systems thinking.
- **Neural Networks from Scratch (NNFS)** by Harrison Kinsley
  A math-heavy implementation approach. Like micrograd, NNFS focuses on the engine parts; TinyTorch covers designing the whole vehicle and driving it.
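To make "minimal autograd engine" concrete, here is a short sketch of the reverse-mode idea these projects share: scalar values record the operations that produced them, then the chain rule is applied in reverse topological order. The `Value` class is illustrative only - it is not taken from micrograd, tinygrad, or TinyTorch.

```python
# Illustrative scalar autograd sketch in the micrograd spirit.
class Value:
    """A scalar that records the operations producing it."""

    def __init__(self, data, _parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = _parents
        self._backward = lambda: None  # fills in the parents' grads

    def __add__(self, other):  # assumes `other` is a Value, for brevity
        out = Value(self.data + other.data, (self, other))

        def _backward():
            self.grad += out.grad   # d(a+b)/da = 1
            other.grad += out.grad  # d(a+b)/db = 1

        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))

        def _backward():
            self.grad += other.data * out.grad  # d(a*b)/da = b
            other.grad += self.data * out.grad  # d(a*b)/db = a

        out._backward = _backward
        return out

    def backward(self):
        # Topologically order the graph, then apply the chain rule in reverse.
        order, seen = [], set()

        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)

        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()


a, b = Value(2.0), Value(3.0)
c = a * b + a          # c.data == 8.0
c.backward()
print(a.grad, b.grad)  # 4.0 (= b + 1), 2.0 (= a)
```

Everything in TinyTorch's autograd module generalizes this pattern from scalars to tensors; comparing the two is a good exercise in design trade-offs.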
## 🏭 Production Tools & Platforms

### Framework Deep Dives

- **PyTorch Internals** by Edward Yang
  How PyTorch actually works under the hood - see what you built in TinyTorch at production scale
- **Extending PyTorch** (official PyTorch documentation)
  Custom operators and autograd functions - apply your TinyTorch knowledge (a short example follows this list)
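To connect that reading back to code, the snippet below uses PyTorch's documented `torch.autograd.Function` interface - the same forward/backward contract you implemented by hand in TinyTorch. The `Square` operator is a toy example of ours, not something from the linked resources.

```python
import torch

class Square(torch.autograd.Function):
    """Toy custom op: y = x^2, with a hand-written backward pass."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)  # stash inputs needed for backward
        return x * x

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return 2 * x * grad_output  # chain rule: d(x^2)/dx = 2x

x = torch.tensor([3.0], requires_grad=True)
y = Square.apply(x)
y.backward()
print(x.grad)  # tensor([6.])
```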
### MLOps & Production

- **Papers With Code**
  Research papers with implementation code - apply your skills to reproduce results
- **MLOps Community**
  Production ML engineering discussions and best practices
- **Weights & Biases**
  Experiment tracking and model management - scale your TinyTorch training (a minimal logging sketch follows this list)
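As one concrete example of experiment tracking, here is a minimal Weights & Biases logging loop using its core `wandb.init` / `wandb.log` API. The project name and the synthetic "loss" are placeholders; wire in your own TinyTorch training step.

```python
import wandb

# Minimal W&B tracking sketch; project name and metrics are placeholders.
run = wandb.init(project="tinytorch-experiments",
                 config={"lr": 0.01, "epochs": 3})

for epoch in range(run.config.epochs):
    loss = 1.0 / (epoch + 1)                   # stand-in for a real training step
    wandb.log({"epoch": epoch, "loss": loss})  # one logged step per epoch

wandb.finish()  # flush and close the run
```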
## 🌐 Learning Communities

### Technical Discussion

- **r/MachineLearning**
  Research discussions and paper releases
- **The Gradient**
  Deep technical articles on ML research and systems
- **Distill.pub**
  Interactive explanations of ML concepts with beautiful visualizations
## 🎯 Next Steps After TinyTorch

### Apply Your Skills

- **Reproduce Research**: Use your TinyTorch foundation to implement papers from scratch
- **Contribute to Open Source**: PyTorch, TensorFlow, JAX - you now understand the internals
- **Build Production Systems**: Apply MLOps principles from your final modules
- **Optimize for Edge**: Use compression and kernel techniques for deployment
### Advanced Specializations

- **Distributed Training**: Scale your framework knowledge to multi-GPU systems
- **Compiler Design**: Build domain-specific languages for ML (in the style of JAX and Triton)
- **Hardware Acceleration**: Custom kernels and specialized processors
- **Systems Research**: Novel architectures and training techniques
## 💡 How to Use These Resources

```{tip}
**Parallel Learning**: Use these alongside TinyTorch modules for broader context.

**Post-TinyTorch**: After completing the framework, dive into production systems.

**Compare & Contrast**: Study alternative implementations to understand design trade-offs.
```
Remember: You now have the implementation foundation that most ML engineers lack. These resources help you apply that knowledge to broader systems and production environments.
Building ML systems from scratch gives you superpowers. These resources help you use them wisely. 🚀