mirror of
https://github.com/harvard-edge/cs249r_book.git
synced 2026-03-11 17:49:25 -05:00
docs(tinytorch): update ABOUT.md files for module renumbering
- 05_dataloader: now Foundation tier, prereqs 01-04
- 06_autograd: prereqs updated to 01-05 (includes DataLoader)
- 07_optimizers: prereqs updated to 01-06
- 08_training: prereqs updated to 01-07
- Updated all Binder links, GitHub links, and audio references
- Updated What's Next sections with correct module numbers
@@ -1,11 +1,11 @@
-# Module 08: DataLoader
+# Module 05: DataLoader
 
 :::{admonition} Module Info
 :class: note
 
-**ARCHITECTURE TIER** | Difficulty: ●●○○ | Time: 3-5 hours | Prerequisites: 01-07
+**FOUNDATION TIER** | Difficulty: ●●○○ | Time: 3-5 hours | Prerequisites: 01-04
 
-**Prerequisites:** You should be comfortable with tensors, activations, layers, losses, autograd, optimizers, and training loops from Modules 01-07. This module assumes you understand the training loop pattern and why batching matters for efficient gradient descent.
+**Prerequisites:** You should be comfortable with tensors, activations, layers, and losses from Modules 01-04. This module introduces data loading infrastructure that will be used by autograd, optimizers, and training loops in the following modules.
 :::
 
 `````{only} html
@@ -16,14 +16,14 @@
 
 Run interactively in your browser.
 
-<a href="https://mybinder.org/v2/gh/harvard-edge/cs249r_book/main?labpath=tinytorch%2Fmodules%2F08_dataloader%2F08_dataloader.ipynb" target="_blank" style="display: flex; align-items: center; justify-content: center; width: 100%; height: 54px; margin-top: auto; background: #f97316; color: white; text-align: center; text-decoration: none; border-radius: 27px; font-size: 14px; box-sizing: border-box;">Open in Binder →</a>
+<a href="https://mybinder.org/v2/gh/harvard-edge/cs249r_book/main?labpath=tinytorch%2Fmodules%2F05_dataloader%2F05_dataloader.ipynb" target="_blank" style="display: flex; align-items: center; justify-content: center; width: 100%; height: 54px; margin-top: auto; background: #f97316; color: white; text-align: center; text-decoration: none; border-radius: 27px; font-size: 14px; box-sizing: border-box;">Open in Binder →</a>
 ```
 
 ```{grid-item-card} 📄 View Source
 
 Browse the source code on GitHub.
 
-<a href="https://github.com/harvard-edge/cs249r_book/blob/main/tinytorch/src/08_dataloader/08_dataloader.py" target="_blank" style="display: flex; align-items: center; justify-content: center; width: 100%; height: 54px; margin-top: auto; background: #6b7280; color: white; text-align: center; text-decoration: none; border-radius: 27px; font-size: 14px; box-sizing: border-box;">View on GitHub →</a>
+<a href="https://github.com/harvard-edge/cs249r_book/blob/main/tinytorch/src/05_dataloader/05_dataloader.py" target="_blank" style="display: flex; align-items: center; justify-content: center; width: 100%; height: 54px; margin-top: auto; background: #6b7280; color: white; text-align: center; text-decoration: none; border-radius: 27px; font-size: 14px; box-sizing: border-box;">View on GitHub →</a>
 ```
 
 ```{grid-item-card} 🎧 Audio Overview
@@ -31,7 +31,7 @@ Browse the source code on GitHub.
 Listen to an AI-generated overview.
 
 <audio controls style="width: 100%; height: 54px; margin-top: auto;">
-<source src="https://github.com/harvard-edge/cs249r_book/releases/download/tinytorch-audio-v0.1.1/08_dataloader.mp3" type="audio/mpeg">
+<source src="https://github.com/harvard-edge/cs249r_book/releases/download/tinytorch-audio-v0.1.1/05_dataloader.mp3" type="audio/mpeg">
 </audio>
 ```
@@ -630,25 +630,25 @@ For students who want to understand the academic foundations and engineering dec
 
 ## What's Next
 
-```{seealso} Coming Up: Module 09 - Convolutions
+```{seealso} Coming Up: Module 06 - Autograd
 
-Implement Conv2d, MaxPool2d, and Flatten layers to build convolutional neural networks. You'll apply your DataLoader to image datasets and discover why CNNs revolutionized computer vision.
+Implement automatic differentiation that computes gradients through computation graphs. Your DataLoader will feed batches to models, and autograd will enable learning from those batches.
 ```
 
 **Preview - How Your DataLoader Gets Used in Future Modules:**
 
 | Module | What It Does | Your DataLoader In Action |
 |--------|--------------|--------------------------|
 | **06: Autograd** | Automatic differentiation | Tensors from DataLoader flow through computation graphs |
 | **08: Training** | Complete training loops | `for batch in loader:` orchestrates the full training process |
 | **09: Convolutions** | Convolutional layers for images | `for images, labels in loader:` feed batches to CNNs |
 | **10: Tokenization** | Text processing | `DataLoader(text_dataset)` batch sentences |
 | **13: Transformers** | Attention mechanisms | Large batch sizes enabled by efficient data loading |
 
 ## Get Started
 
 ```{tip} Interactive Options
 
-- **[Launch Binder](https://mybinder.org/v2/gh/harvard-edge/cs249r_book/main?urlpath=lab/tree/tinytorch/modules/08_dataloader/08_dataloader.ipynb)** - Run interactively in browser, no setup required
-- **[View Source](https://github.com/harvard-edge/cs249r_book/blob/main/tinytorch/src/08_dataloader/08_dataloader.py)** - Browse the implementation code
+- **[Launch Binder](https://mybinder.org/v2/gh/harvard-edge/cs249r_book/main?urlpath=lab/tree/tinytorch/modules/05_dataloader/05_dataloader.ipynb)** - Run interactively in browser, no setup required
+- **[View Source](https://github.com/harvard-edge/cs249r_book/blob/main/tinytorch/src/05_dataloader/05_dataloader.py)** - Browse the implementation code
 ```
 
 ```{warning} Save Your Progress
@@ -1,15 +1,16 @@
-# Module 05: Autograd
+# Module 06: Autograd
 
 :::{admonition} Module Info
 :class: note
 
-**FOUNDATION TIER** | Difficulty: ●●●○ | Time: 6-8 hours | Prerequisites: 01-04
+**FOUNDATION TIER** | Difficulty: ●●●○ | Time: 6-8 hours | Prerequisites: 01-05
 
-**Prerequisites: Modules 01-04** means you need:
+**Prerequisites: Modules 01-05** means you need:
 - Tensor operations (matmul, broadcasting, reductions)
 - Activation functions (understanding non-linearity)
 - Neural network layers (what gradients flow through)
 - Loss functions (the "why" behind gradients)
+- DataLoader for efficient batch processing
 
 If you can compute a forward pass through a neural network manually and understand why we need to minimize loss, you're ready.
 :::
@@ -22,14 +23,14 @@ If you can compute a forward pass through a neural network manually and understa
 
 Run interactively in your browser.
 
-<a href="https://mybinder.org/v2/gh/harvard-edge/cs249r_book/main?labpath=tinytorch%2Fmodules%2F05_autograd%2F05_autograd.ipynb" target="_blank" style="display: flex; align-items: center; justify-content: center; width: 100%; height: 54px; margin-top: auto; background: #f97316; color: white; text-align: center; text-decoration: none; border-radius: 27px; font-size: 14px; box-sizing: border-box;">Open in Binder →</a>
+<a href="https://mybinder.org/v2/gh/harvard-edge/cs249r_book/main?labpath=tinytorch%2Fmodules%2F06_autograd%2F06_autograd.ipynb" target="_blank" style="display: flex; align-items: center; justify-content: center; width: 100%; height: 54px; margin-top: auto; background: #f97316; color: white; text-align: center; text-decoration: none; border-radius: 27px; font-size: 14px; box-sizing: border-box;">Open in Binder →</a>
 ```
 
 ```{grid-item-card} 📄 View Source
 
 Browse the source code on GitHub.
 
-<a href="https://github.com/harvard-edge/cs249r_book/blob/main/tinytorch/src/05_autograd/05_autograd.py" target="_blank" style="display: flex; align-items: center; justify-content: center; width: 100%; height: 54px; margin-top: auto; background: #6b7280; color: white; text-align: center; text-decoration: none; border-radius: 27px; font-size: 14px; box-sizing: border-box;">View on GitHub →</a>
+<a href="https://github.com/harvard-edge/cs249r_book/blob/main/tinytorch/src/06_autograd/06_autograd.py" target="_blank" style="display: flex; align-items: center; justify-content: center; width: 100%; height: 54px; margin-top: auto; background: #6b7280; color: white; text-align: center; text-decoration: none; border-radius: 27px; font-size: 14px; box-sizing: border-box;">View on GitHub →</a>
 ```
 
 ```{grid-item-card} 🎧 Audio Overview
@@ -37,7 +38,7 @@ Browse the source code on GitHub.
 Listen to an AI-generated overview.
 
 <audio controls style="width: 100%; height: 54px; margin-top: auto;">
-<source src="https://github.com/harvard-edge/cs249r_book/releases/download/tinytorch-audio-v0.1.1/05_autograd.mp3" type="audio/mpeg">
+<source src="https://github.com/harvard-edge/cs249r_book/releases/download/tinytorch-audio-v0.1.1/06_autograd.mp3" type="audio/mpeg">
 </audio>
 ```
@@ -596,7 +597,7 @@ For students who want to understand the academic foundations and mathematical un
 
 ## What's Next
 
-```{seealso} Coming Up: Module 06 - Optimizers
+```{seealso} Coming Up: Module 07 - Optimizers
 
 Implement SGD, Adam, and other optimization algorithms that use your autograd gradients to update parameters and train neural networks. You'll complete the training loop and make your networks learn from data.
 ```
@@ -605,16 +606,16 @@ Implement SGD, Adam, and other optimization algorithms that use your autograd gr
 
 | Module | What It Does | Your Autograd In Action |
 |--------|--------------|------------------------|
-| **06: Optimizers** | Update parameters using gradients | `optimizer.step()` uses `param.grad` computed by backward() |
-| **07: Training** | Complete training loops | `loss.backward()` → `optimizer.step()` → repeat |
+| **07: Optimizers** | Update parameters using gradients | `optimizer.step()` uses `param.grad` computed by backward() |
+| **08: Training** | Complete training loops | `loss.backward()` → `optimizer.step()` → repeat |
 | **12: Attention** | Multi-head self-attention | Gradients flow through Q, K, V projections automatically |
 
 ## Get Started
 
 ```{tip} Interactive Options
 
-- **[Launch Binder](https://mybinder.org/v2/gh/harvard-edge/cs249r_book/main?urlpath=lab/tree/tinytorch/modules/05_autograd/05_autograd.ipynb)** - Run interactively in browser, no setup required
-- **[View Source](https://github.com/harvard-edge/cs249r_book/blob/main/tinytorch/src/05_autograd/05_autograd.py)** - Browse the implementation code
+- **[Launch Binder](https://mybinder.org/v2/gh/harvard-edge/cs249r_book/main?urlpath=lab/tree/tinytorch/modules/06_autograd/06_autograd.ipynb)** - Run interactively in browser, no setup required
+- **[View Source](https://github.com/harvard-edge/cs249r_book/blob/main/tinytorch/src/06_autograd/06_autograd.py)** - Browse the implementation code
 ```
 
 ```{warning} Save Your Progress
@@ -1,12 +1,13 @@
-# Module 06: Optimizers
+# Module 07: Optimizers
 
 :::{admonition} Module Info
 :class: note
 
-**FOUNDATION TIER** | Difficulty: ●●○○ | Time: 3-5 hours | Prerequisites: 01-05
+**FOUNDATION TIER** | Difficulty: ●●○○ | Time: 3-5 hours | Prerequisites: 01-06
 
-**Prerequisites: Modules 01-05** means you need:
+**Prerequisites: Modules 01-06** means you need:
 - Tensor operations and parameter storage
+- DataLoader for efficient batch processing
 - Understanding of forward/backward passes (autograd)
 - Why gradients point toward higher loss
 
@@ -21,14 +22,14 @@ If you understand how `loss.backward()` computes gradients and why we need to up
 
 Run interactively in your browser.
 
-<a href="https://mybinder.org/v2/gh/harvard-edge/cs249r_book/main?labpath=tinytorch%2Fmodules%2F06_optimizers%2F06_optimizers.ipynb" target="_blank" style="display: flex; align-items: center; justify-content: center; width: 100%; height: 54px; margin-top: auto; background: #f97316; color: white; text-align: center; text-decoration: none; border-radius: 27px; font-size: 14px; box-sizing: border-box;">Open in Binder →</a>
+<a href="https://mybinder.org/v2/gh/harvard-edge/cs249r_book/main?labpath=tinytorch%2Fmodules%2F07_optimizers%2F07_optimizers.ipynb" target="_blank" style="display: flex; align-items: center; justify-content: center; width: 100%; height: 54px; margin-top: auto; background: #f97316; color: white; text-align: center; text-decoration: none; border-radius: 27px; font-size: 14px; box-sizing: border-box;">Open in Binder →</a>
 ```
 
 ```{grid-item-card} 📄 View Source
 
 Browse the source code on GitHub.
 
-<a href="https://github.com/harvard-edge/cs249r_book/blob/main/tinytorch/src/06_optimizers/06_optimizers.py" target="_blank" style="display: flex; align-items: center; justify-content: center; width: 100%; height: 54px; margin-top: auto; background: #6b7280; color: white; text-align: center; text-decoration: none; border-radius: 27px; font-size: 14px; box-sizing: border-box;">View on GitHub →</a>
+<a href="https://github.com/harvard-edge/cs249r_book/blob/main/tinytorch/src/07_optimizers/07_optimizers.py" target="_blank" style="display: flex; align-items: center; justify-content: center; width: 100%; height: 54px; margin-top: auto; background: #6b7280; color: white; text-align: center; text-decoration: none; border-radius: 27px; font-size: 14px; box-sizing: border-box;">View on GitHub →</a>
 ```
 
 ```{grid-item-card} 🎧 Audio Overview
@@ -36,7 +37,7 @@ Browse the source code on GitHub.
 Listen to an AI-generated overview.
 
 <audio controls style="width: 100%; height: 54px; margin-top: auto;">
-<source src="https://github.com/harvard-edge/cs249r_book/releases/download/tinytorch-audio-v0.1.1/06_optimizers.mp3" type="audio/mpeg">
+<source src="https://github.com/harvard-edge/cs249r_book/releases/download/tinytorch-audio-v0.1.1/07_optimizers.mp3" type="audio/mpeg">
 </audio>
 ```
@@ -104,12 +105,12 @@ optimizer.zero_grad() # Clear gradients for next iteration
 
 To keep this module focused, you will **not** implement:
 
-- Learning rate schedules (that's Module 07: Training)
+- Learning rate schedules (that's Module 08: Training)
 - Gradient clipping (PyTorch provides this via `torch.nn.utils.clip_grad_norm_`)
 - Second-order optimizers like L-BFGS (rarely used in deep learning due to memory cost)
 - Distributed optimizer sharding (production frameworks use techniques like ZeRO)
 
-**You are building the core optimization algorithms.** Advanced training techniques come in Module 07.
+**You are building the core optimization algorithms.** Advanced training techniques come in Module 08.
 
 ## API Reference
@@ -532,7 +533,7 @@ For students who want to understand the academic foundations and mathematical un
 
 ## What's Next
 
-```{seealso} Coming Up: Module 07 - Training
+```{seealso} Coming Up: Module 08 - Training
 
 Combine optimizers with training loops to actually train neural networks. You'll implement learning rate scheduling, checkpointing, and the complete training/validation workflow that makes everything work together.
 ```
@@ -541,16 +542,16 @@ Combine optimizers with training loops to actually train neural networks. You'll
 
 | Module | What It Does | Your Optimizers In Action |
 |--------|--------------|---------------------------|
-| **07: Training** | Complete training loops | `for epoch in range(10): loss.backward(); optimizer.step()` |
-| **08: DataLoader** | Batch data processing | `optimizer.step()` updates after each batch of data |
+| **08: Training** | Complete training loops | `for epoch in range(10): loss.backward(); optimizer.step()` |
+| **09: Convolutions** | Convolutional networks | `AdamW` optimizes millions of CNN parameters efficiently |
 | **13: Transformers** | Attention mechanisms | Large models require careful optimizer selection |
 
 ## Get Started
 
 ```{tip} Interactive Options
 
-- **[Launch Binder](https://mybinder.org/v2/gh/harvard-edge/cs249r_book/main?urlpath=lab/tree/tinytorch/modules/06_optimizers/06_optimizers.ipynb)** - Run interactively in browser, no setup required
-- **[View Source](https://github.com/harvard-edge/cs249r_book/blob/main/tinytorch/src/06_optimizers/06_optimizers.py)** - Browse the implementation code
+- **[Launch Binder](https://mybinder.org/v2/gh/harvard-edge/cs249r_book/main?urlpath=lab/tree/tinytorch/modules/07_optimizers/07_optimizers.ipynb)** - Run interactively in browser, no setup required
+- **[View Source](https://github.com/harvard-edge/cs249r_book/blob/main/tinytorch/src/07_optimizers/07_optimizers.py)** - Browse the implementation code
 ```
 
 ```{warning} Save Your Progress
@@ -1,11 +1,11 @@
-# Module 07: Training
+# Module 08: Training
 
 :::{admonition} Module Info
 :class: note
 
-**FOUNDATION TIER** | Difficulty: ●●○○ | Time: 5-7 hours | Prerequisites: 01-06
+**FOUNDATION TIER** | Difficulty: ●●○○ | Time: 5-7 hours | Prerequisites: 01-07
 
-By completing Modules 01-06, you've built all the fundamental components: tensors, activations, layers, losses, autograd, and optimizers. Each piece works perfectly in isolation, but real machine learning requires orchestrating these components into a cohesive training process. This module provides that orchestration.
+By completing Modules 01-07, you've built all the fundamental components: tensors, activations, layers, losses, dataloader, autograd, and optimizers. Each piece works perfectly in isolation, but real machine learning requires orchestrating these components into a cohesive training process. This module provides that orchestration.
 :::
 
 `````{only} html
@@ -16,14 +16,14 @@ By completing Modules 01-06, you've built all the fundamental components: tensor
 
 Run interactively in your browser.
 
-<a href="https://mybinder.org/v2/gh/harvard-edge/cs249r_book/main?labpath=tinytorch%2Fmodules%2F07_training%2F07_training.ipynb" target="_blank" style="display: flex; align-items: center; justify-content: center; width: 100%; height: 54px; margin-top: auto; background: #f97316; color: white; text-align: center; text-decoration: none; border-radius: 27px; font-size: 14px; box-sizing: border-box;">Open in Binder →</a>
+<a href="https://mybinder.org/v2/gh/harvard-edge/cs249r_book/main?labpath=tinytorch%2Fmodules%2F08_training%2F08_training.ipynb" target="_blank" style="display: flex; align-items: center; justify-content: center; width: 100%; height: 54px; margin-top: auto; background: #f97316; color: white; text-align: center; text-decoration: none; border-radius: 27px; font-size: 14px; box-sizing: border-box;">Open in Binder →</a>
 ```
 
 ```{grid-item-card} 📄 View Source
 
 Browse the source code on GitHub.
 
-<a href="https://github.com/harvard-edge/cs249r_book/blob/main/tinytorch/src/07_training/07_training.py" target="_blank" style="display: flex; align-items: center; justify-content: center; width: 100%; height: 54px; margin-top: auto; background: #6b7280; color: white; text-align: center; text-decoration: none; border-radius: 27px; font-size: 14px; box-sizing: border-box;">View on GitHub →</a>
+<a href="https://github.com/harvard-edge/cs249r_book/blob/main/tinytorch/src/08_training/08_training.py" target="_blank" style="display: flex; align-items: center; justify-content: center; width: 100%; height: 54px; margin-top: auto; background: #6b7280; color: white; text-align: center; text-decoration: none; border-radius: 27px; font-size: 14px; box-sizing: border-box;">View on GitHub →</a>
 ```
 
 ```{grid-item-card} 🎧 Audio Overview
@@ -31,7 +31,7 @@ Browse the source code on GitHub.
 Listen to an AI-generated overview.
 
 <audio controls style="width: 100%; height: 54px; margin-top: auto;">
-<source src="https://github.com/harvard-edge/cs249r_book/releases/download/tinytorch-audio-v0.1.1/07_training.mp3" type="audio/mpeg">
+<source src="https://github.com/harvard-edge/cs249r_book/releases/download/tinytorch-audio-v0.1.1/08_training.mp3" type="audio/mpeg">
 </audio>
 ```
@@ -114,12 +114,11 @@ for epoch in range(100):
 
 To keep this module focused, you will **not** implement:
 
-- DataLoader for efficient batching (that's Module 08: DataLoader)
 - Distributed training across multiple GPUs (PyTorch uses `DistributedDataParallel`)
 - Mixed precision training (PyTorch Automatic Mixed Precision requires specialized tensor types)
 - Advanced schedulers like warmup or cyclical learning rates (production frameworks offer dozens of variants)
 
-**You are building the core training orchestration.** Efficient data loading comes next.
+**You are building the core training orchestration.** Spatial operations for computer vision come next.
 
 ## API Reference
@@ -679,25 +678,25 @@ For students who want to understand the academic foundations and advanced traini
 
 ## What's Next
 
-```{seealso} Coming Up: Module 08 - DataLoader
+```{seealso} Coming Up: Module 09 - Convolutions
 
-Implement efficient data loading with batching, shuffling, and iteration. Your Trainer currently requires pre-batched data. Module 08 adds automatic batching from raw datasets, completing the training infrastructure needed for the MLP milestone.
+Implement Conv2d, MaxPool2d, and Flatten layers to build convolutional neural networks. Your Trainer will orchestrate training CNNs on image datasets, enabling the CNN milestone.
 ```
 
 **Preview - How Your Training Infrastructure Gets Used:**
 
 | Module | What It Does | Your Trainer In Action |
 |--------|--------------|------------------------|
-| **08: DataLoader** | Efficient batching and shuffling | `trainer.train_epoch(dataloader)` with automatic batching |
 | **09: Convolutions** | Convolutional layers for images | Train CNNs with same `trainer.train_epoch()` loop |
 | **Milestone: MLP** | Complete MNIST digit recognition | `trainer` orchestrates full training pipeline |
 | **Milestone: CNN** | Complete CIFAR-10 classification | Train vision models with your training infrastructure |
 
 ## Get Started
 
 ```{tip} Interactive Options
 
-- **[Launch Binder](https://mybinder.org/v2/gh/harvard-edge/cs249r_book/main?urlpath=lab/tree/tinytorch/modules/07_training/07_training.ipynb)** - Run interactively in browser, no setup required
-- **[View Source](https://github.com/harvard-edge/cs249r_book/blob/main/tinytorch/src/07_training/07_training.py)** - Browse the implementation code
+- **[Launch Binder](https://mybinder.org/v2/gh/harvard-edge/cs249r_book/main?urlpath=lab/tree/tinytorch/modules/08_training/08_training.ipynb)** - Run interactively in browser, no setup required
+- **[View Source](https://github.com/harvard-edge/cs249r_book/blob/main/tinytorch/src/08_training/08_training.py)** - Browse the implementation code
 ```
 
 ```{warning} Save Your Progress