Mirror of https://github.com/harvard-edge/cs249r_book.git, synced 2026-04-29 17:20:21 -05:00
# Changelog

All notable changes to TinyTorch will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/), and this project adheres to [Semantic Versioning](https://semver.org/).
## [Unreleased]

### Added
- Dynamic `tito --version` command showing the current TinyTorch version
- CHANGELOG.md for tracking releases
- Updated publish workflow with `release_type` input (patch/minor/major)
### Changed

- Version now managed in `tinytorch/__init__.py` and `pyproject.toml`
## [0.1.1] - 2025-01-13

### Fixed
- Module 03 (Layers): Removed premature `requires_grad` from `Linear` layer initialization
  - Aligns with the progressive disclosure model, where `requires_grad` is introduced in Module 06
  - Fixes an issue where students running modules in sequence encountered undefined parameters
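The fix above means the Module 03 layer holds plain arrays with no autograd attributes; gradient tracking only appears once Module 06 introduces it. A minimal sketch of such a pre-autograd `Linear` layer, with illustrative names that are not necessarily TinyTorch's actual API:

```python
import numpy as np

# Hypothetical sketch of a Module 03-style Linear layer: parameters are
# plain NumPy arrays at init, with no requires_grad attribute attached.
class Linear:
    def __init__(self, in_features: int, out_features: int):
        # He-style scaling for the weight; no autograd bookkeeping yet
        scale = np.sqrt(2.0 / in_features)
        self.weight = np.random.randn(in_features, out_features) * scale
        self.bias = np.zeros(out_features)

    def forward(self, x: np.ndarray) -> np.ndarray:
        # Affine map: x @ W + b
        return x @ self.weight + self.bias

layer = Linear(4, 3)
out = layer.forward(np.ones((2, 4)))  # shape (2, 3)
```

Students who run modules in order never touch `requires_grad` until the autograd module defines it.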
### Added

- `tinydigits` dataset: 8x8 handwritten digits for educational CNN training
- `tinytalks` dataset: Q&A pairs for transformer training examples
## [0.1.0] - 2024-12-12

### Added
- Initial public release of TinyTorch
- 20 progressive modules covering ML fundamentals to advanced topics
- `tito` CLI for a guided learning experience
- Milestone projects demonstrating historical ML breakthroughs
- Comprehensive test suite
- Jupyter Book documentation site
### Modules
- 01: Tensor (NumPy wrapper with ML semantics)
- 02: Activations (Sigmoid, ReLU, Tanh, GELU, Softmax)
- 03: Layers (Linear, Dropout)
- 04: Losses (MSE, CrossEntropy)
- 05: DataLoader (batching, shuffling)
- 06: Autograd (automatic differentiation)
- 07: Optimizers (SGD, Adam)
- 08: Training (training loops)
- 09: Convolutions (Conv2D, pooling)
- 10: Tokenization (BPE, character-level)
- 11: Embeddings (word embeddings)
- 12: Attention (self-attention, multi-head)
- 13: Transformers (encoder, decoder)
- 14: Profiling (timing, memory)
- 15: Quantization (INT8, dynamic)
- 16: Compression (pruning, distillation)
- 17: Acceleration (SIMD, parallelism)
- 18: Memoization (caching, checkpointing)
- 19: Benchmarking (MLPerf style)
- 20: Capstone (integration project)