Mirror of https://github.com/MLSysBook/TinyTorch.git (synced 2026-05-06 13:52:32 -05:00)
📚 Complete resources page restructure for maintainability and focus
🔥 Major Improvements:
- Removed research papers section (belongs in specific labs as context)
- Added clear differentiation for alternative implementations with vehicle analogy
- Moved ML Systems book to books section with prominent positioning
- Added actual book links (O'Reilly, deeplearningbook.org) where available
- Focused on maintainable, stable resources

🎯 Key Differentiations Added:
- 'Micrograd teaches engine parts, TinyTorch teaches you to design the whole vehicle'
- 'NNFS teaches engine parts, TinyTorch teaches the whole vehicle and drive it'
- 'Tinygrad optimizes for speed, TinyTorch optimizes for learning systems thinking'

🏭 Production Focus:
- Added industrial tools: W&B, MLOps Community, Papers with Code
- Reorganized into: Courses, Books, Alternative Implementations, Production Tools
- Removed quickly-outdated content, kept stable educational resources

📖 ML Systems Book Positioning:
- Moved Vijay's book from courses to books section
- Positioned as 'the perfect companion to TinyTorch'
- Added proper book links for maintainability

Result: Much more focused, maintainable resource page that complements TinyTorch without duplicating content that belongs in specific labs.
# 📚 Additional Learning Resources

**Complement your TinyTorch journey with these carefully selected resources.**

While TinyTorch teaches you to build complete ML systems from scratch, these resources provide broader context, alternative perspectives, and production tools.

---

- **[CS 329S: Machine Learning Systems Design](https://stanford-cs329s.github.io/)** (Stanford)
  *Production ML systems, infrastructure, and deployment at scale*
- **[CS 6.S965: TinyML and Efficient Deep Learning](https://hanlab.mit.edu/courses/2024-fall-65940)** (MIT)
  *Edge computing, model compression, and efficient ML algorithms*

## 📖 **Recommended Books**

### **Systems & Engineering**
- **[Machine Learning Systems](https://mlsysbook.ai)** by Prof. Vijay Janapa Reddi (Harvard)
  *Comprehensive systems perspective on ML engineering and optimization - the perfect companion to TinyTorch*

- **[Designing Machine Learning Systems](https://www.oreilly.com/library/view/designing-machine-learning/9781098107956/)** by Chip Huyen
  *Production ML engineering, data pipelines, and system design*

- **[Machine Learning Engineering](https://www.mlebook.com/wiki/doku.php)** by Andriy Burkov
  *End-to-end ML project lifecycle and best practices*

- **"Reliable Machine Learning"** by Cathy Chen, Niall Richard Murphy
  *SRE principles applied to ML systems and production reliability*

### **Implementation & Theory**
- **[Deep Learning](https://www.deeplearningbook.org/)** by Ian Goodfellow, Yoshua Bengio, Aaron Courville
  *Mathematical foundations - the theory behind what you implement in TinyTorch*

- **[Hands-On Machine Learning](https://www.oreilly.com/library/view/hands-on-machine-learning/9781098125967/)** by Aurélien Géron
  *Practical implementations using established frameworks*

---

## 🛠️ **Alternative Implementations**

**Different approaches to building ML systems from scratch - see how others tackle the same challenge:**

### **Minimal Frameworks**
- **[Micrograd](https://github.com/karpathy/micrograd)** by Andrej Karpathy
  *Minimal autograd engine in 100 lines. **Micrograd teaches engine parts, TinyTorch teaches you to design the whole vehicle and drive it.***

- **[Tinygrad](https://github.com/geohot/tinygrad)** by George Hotz
  *Performance-focused educational framework. **Tinygrad optimizes for speed, TinyTorch optimizes for learning systems thinking.***

- **[Neural Networks from Scratch](https://nnfs.io/)** by Harrison Kinsley
  *Math-heavy implementation approach. **NNFS teaches you the engine parts, TinyTorch teaches you to design the whole vehicle and drive it.***
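To make the comparison concrete, the "engine parts" these minimal frameworks build fit in a few lines. Here is a rough, illustrative sketch (not any project's actual code) of a scalar value with reverse-mode autograd:

```python
# Minimal scalar reverse-mode autograd - an illustrative sketch in the
# spirit of the minimal frameworks above, not any project's real code.
class Value:
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward = lambda: None

    def __add__(self, other):
        out = Value(self.data + other.data, parents=(self, other))
        def backward_fn():
            # d(a+b)/da = d(a+b)/db = 1; chain rule applies out.grad
            self.grad += out.grad
            other.grad += out.grad
        out._backward = backward_fn
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, parents=(self, other))
        def backward_fn():
            # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = backward_fn
        return out

    def backward(self):
        # Topologically order the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

x = Value(3.0)
y = Value(4.0)
z = x * y + x      # z = xy + x
z.backward()
print(x.grad)      # dz/dx = y + 1 = 5.0
print(y.grad)      # dz/dy = x = 3.0
```

Everything else in a full framework - tensors, layers, optimizers, data loading - is built around this core loop, which is exactly the "whole vehicle" TinyTorch has you design.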

---

## 🏭 **Production Tools & Platforms**

### **Framework Deep Dives**
- **[PyTorch Internals](http://blog.ezyang.com/2019/05/pytorch-internals/)** by Edward Yang
  *How PyTorch actually works under the hood - see what you built in TinyTorch at production scale*

- **[PyTorch Documentation: Extending PyTorch](https://pytorch.org/docs/stable/notes/extending.html)**
  *Custom operators and autograd functions - apply your TinyTorch knowledge*
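The extension mechanism looks much like what you built in TinyTorch. A small illustrative example using PyTorch's public `torch.autograd.Function` API (assumes PyTorch is installed; the `Square` op is our own toy example):

```python
import torch

# A custom operator with a hand-written backward pass - the same
# pattern you implement inside TinyTorch's autograd module.
class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)        # stash inputs needed by backward
        return x * x

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return 2.0 * x * grad_output    # d(x^2)/dx = 2x, times upstream grad

x = torch.tensor([3.0], requires_grad=True)
y = Square.apply(x)
y.backward()
print(x.grad)   # tensor([6.])
```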

### **MLOps & Production**
- **[Papers With Code](https://paperswithcode.com/)**
  *Research papers with implementation code - apply your skills to reproduce results*

- **[MLOps Community](https://mlops.community/)**
  *Production ML engineering discussions and best practices*

- **[Weights & Biases](https://wandb.ai/)**
  *Experiment tracking and model management - scale your TinyTorch training*

---

## 🌐 **Learning Communities**

### **Technical Discussion**
- **[r/MachineLearning](https://www.reddit.com/r/MachineLearning/)**
  *Research discussions and paper releases*

---

## 🎯 **Next Steps After TinyTorch**

### **Apply Your Skills**
1. **Reproduce Research**: Use your TinyTorch foundation to implement papers from scratch
2. **Contribute to Open Source**: PyTorch, TensorFlow, JAX - you now understand the internals
3. **Build Production Systems**: Apply MLOps principles from your final modules
4. **Optimize for Edge**: Use compression and kernel techniques for deployment
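One concrete starting point for step 4 is affine 8-bit quantization. A framework-free sketch (helper names are our own illustration, not TinyTorch API):

```python
# Affine (asymmetric) 8-bit quantization of a weight list - a minimal
# sketch of one compression technique used for edge deployment.
def quantize_int8(weights):
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0           # map [lo, hi] onto [0, 255]
    zero_point = round(-lo / scale)            # integer that represents 0.0
    q = [max(0, min(255, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

w = [-1.0, -0.5, 0.0, 0.75, 1.0]
q, s, zp = quantize_int8(w)
w_hat = dequantize_int8(q, s, zp)
# Reconstruction error is bounded by roughly half a quantization step.
assert all(abs(a - b) <= s / 2 + 1e-9 for a, b in zip(w, w_hat))
```

The payoff: each weight shrinks from a 32-bit float to one byte, at the cost of a small, bounded rounding error - the core trade-off behind the compression module.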

### **Advanced Specializations**
- **Distributed Training**: Scale your framework knowledge to multi-GPU systems
- **Compiler Design**: Build domain-specific languages for ML (JAX, Triton style)
- **Hardware Acceleration**: Custom kernels and specialized processors
- **Systems Research**: Novel architectures and training techniques

---

```{admonition} 🎯 Strategic Learning Path
:class: tip

**Parallel Learning**: Use these alongside TinyTorch modules for broader context

**Post-TinyTorch**: After completing the framework, dive into production systems

**Compare & Contrast**: Study alternative implementations to understand design trade-offs
```

**Remember**: You now have the implementation foundation that most ML engineers lack. These resources help you apply that knowledge to broader systems and production environments.
---