Mirror of https://github.com/MLSysBook/TinyTorch.git (synced 2026-05-06 09:32:39 -05:00)
✅ Renamed modules for clearer pedagogical flow:
- 05_networks → 05_dense (multi-layer dense/fully connected networks)
- 06_cnn → 06_spatial (convolutional networks for spatial patterns)
- 06_attention → 07_attention (attention mechanisms for sequences)

✅ Shifted remaining modules down by 1:
- 07_dataloader → 08_dataloader
- 08_autograd → 09_autograd
- 09_optimizers → 10_optimizers
- 10_training → 11_training
- 11_compression → 12_compression
- 12_kernels → 13_kernels
- 13_benchmarking → 14_benchmarking
- 14_mlops → 15_mlops
- 15_capstone → 16_capstone

✅ Updated module metadata (module.yaml files):
- Updated names, descriptions, dependencies
- Fixed prerequisite chains and enables relationships
- Updated export paths to match new names

New learner progression: Foundation → Individual Layers → Dense Networks → Spatial Networks → Attention Networks → Training Pipeline

Perfect pedagogical flow: Build one layer → Stack dense layers → Add spatial patterns → Add attention mechanisms → Learn to train them all.
31 lines · 806 B · YAML
# TinyTorch Module Metadata
# Essential system information for CLI tools and build systems

name: "optimizers"
title: "Optimizers"
description: "Gradient-based parameter optimization algorithms"

# Dependencies - Used by CLI for module ordering and prerequisites
dependencies:
  prerequisites: ["setup", "tensor", "autograd"]
  enables: ["training", "compression", "mlops"]

# Package Export - What gets built into tinytorch package
exports_to: "tinytorch.core.optimizers"

# File Structure - What files exist in this module
files:
  dev_file: "optimizers_dev.py"
  readme: "README.md"
  tests: "inline"

# Educational Metadata
difficulty: "⭐⭐⭐⭐"
time_estimate: "6-8 hours"

# Components - What's implemented in this module
components:
  - "SGD"
  - "Adam"
  - "StepLR"
  - "gradient_descent_step"