mirror of
https://github.com/MLSysBook/TinyTorch.git
synced 2026-05-06 17:57:31 -05:00
- Removed 01_setup module (archived to archive/setup_module)
- Renumbered all modules: tensor is now 01, activations is 02, etc.
- Added tito setup command for environment setup and package installation
- Added numeric shortcuts: tito 01, tito 02, etc. for quick module access
- Fixed view command to find dev files correctly
- Updated module dependencies and references
- Improved user experience: immediate ML learning instead of boring setup
24 lines
435 B
YAML
components:
- SGD
- Adam
- StepLR
- gradient_descent_step
dependencies:
  enables:
  - training
  - compression
  - mlops
  prerequisites:
  - tensor
  - autograd
description: Gradient-based parameter optimization algorithms
difficulty: "⭐⭐⭐⭐"
exports_to: tinytorch.core.optimizers
files:
  dev_file: optimizers_dev.py
  readme: README.md
  tests: inline
name: optimizers
time_estimate: 6-8 hours
title: Optimizers
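The components listed above (SGD, Adam, StepLR, gradient_descent_step) are standard gradient-based update rules. As a rough illustration of what the SGD component implements, here is a minimal sketch of a gradient-descent step, assuming parameters expose `.data` and `.grad` attributes as the `autograd` prerequisite suggests; the `Param` class and exact method names are hypothetical, not TinyTorch's actual API:

```python
class SGD:
    """Vanilla stochastic gradient descent: p.data -= lr * p.grad.

    Illustrative sketch only; the real module's interface may differ.
    """

    def __init__(self, params, lr=0.01):
        self.params = list(params)
        self.lr = lr

    def step(self):
        # Apply one gradient-descent update to every parameter.
        for p in self.params:
            if p.grad is not None:
                p.data = p.data - self.lr * p.grad

    def zero_grad(self):
        # Reset gradients before the next backward pass.
        for p in self.params:
            p.grad = None


class Param:
    """Hypothetical stand-in holding a value and its gradient."""

    def __init__(self, data):
        self.data = data
        self.grad = None


p = Param(1.0)
opt = SGD([p], lr=0.1)
p.grad = 2.0   # pretend a backward pass produced this gradient
opt.step()     # p.data becomes 1.0 - 0.1 * 2.0 = 0.8
print(p.data)  # → 0.8
```

Adam and StepLR follow the same `step()` pattern, with Adam additionally tracking per-parameter moment estimates and StepLR decaying `lr` on a schedule.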