File: TinyTorch/modules/source/07_attention/module.yaml
Vijay Janapa Reddi 59d58718f9 refactor: Implement learner-focused module progression with better naming
 Renamed modules for clearer pedagogical flow:
- 05_networks → 05_dense (multi-layer dense/fully connected networks)
- 06_cnn → 06_spatial (convolutional networks for spatial patterns)
- 06_attention → 07_attention (attention mechanisms for sequences)

Shifted the remaining modules down the sequence by one:
- 07_dataloader → 08_dataloader
- 08_autograd → 09_autograd
- 09_optimizers → 10_optimizers
- 10_training → 11_training
- 11_compression → 12_compression
- 12_kernels → 13_kernels
- 13_benchmarking → 14_benchmarking
- 14_mlops → 15_mlops
- 15_capstone → 16_capstone

 Updated module metadata (module.yaml files):
- Updated names, descriptions, dependencies
- Fixed prerequisite chains and enables relationships
- Updated export paths to match new names

New learner progression:
Foundation → Individual Layers → Dense Networks → Spatial Networks → Attention Networks → Training Pipeline

Perfect pedagogical flow: Build one layer → Stack dense layers → Add spatial patterns → Add attention mechanisms → Learn to train them all.
2025-07-18 00:12:50 -04:00


# TinyTorch Module Metadata
# Essential system information for CLI tools and build systems
name: "attention"
title: "Attention"
description: "Core attention mechanism and masking utilities"
# Dependencies - Used by CLI for module ordering and prerequisites
dependencies:
  prerequisites: ["setup", "tensor", "activations", "layers", "dense", "spatial"]
  enables: ["training", "transformers", "nlp", "multimodal"]
# Package Export - What gets built into tinytorch package
exports_to: "tinytorch.core.attention"
# File Structure - What files exist in this module
files:
  dev_file: "attention_dev.py"
  readme: "README.md"
  tests: "inline"
# Educational Metadata
difficulty: "⭐⭐⭐"
time_estimate: "4-5 hours"
# Components - What's implemented in this module
components:
- "scaled_dot_product_attention"
- "SelfAttention"
- "create_causal_mask"
- "create_padding_mask"
- "create_bidirectional_mask"