# Changelog

All notable changes to TinyTorch will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

## [Unreleased]
## [0.1.5] - 2026-01-27

### Added

- Windows/Git Bash support for installer script (thanks @rnjema, @joeswagson, @Kobra299!)
- Platform detection (`get_platform()`) for OS-specific guidance
- Cross-platform line endings via `.gitattributes`
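The platform detection above can be sketched roughly as follows. Only the name `get_platform()` comes from the changelog; the body is an illustrative assumption based on the usual `uname -s` approach, not the actual installer code.

```shell
# Hypothetical sketch of OS detection for an installer script;
# TinyTorch's real get_platform() may differ.
get_platform() {
  case "$(uname -s)" in
    Linux*)               echo "linux" ;;
    Darwin*)              echo "macos" ;;
    MINGW*|MSYS*|CYGWIN*) echo "windows-gitbash" ;;  # Git Bash environments
    *)                    echo "unknown" ;;
  esac
}

get_platform
```

Branching on `MINGW*`/`MSYS*`/`CYGWIN*` is what lets a POSIX-style script recognize that it is running under Git Bash on Windows.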
### Changed

- Installer uses `$PYTHON_CMD -m pip` for more reliable pip invocation
- Dynamic `tito --version` command showing current TinyTorch version
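The `$PYTHON_CMD -m pip` change follows the standard recommendation to invoke pip through the interpreter itself, which guarantees packages are installed into that interpreter's environment rather than whichever `pip` happens to be first on `PATH`. A minimal sketch (the variable name is from the changelog; the fallback logic around it is assumed):

```shell
# Assumed interpreter-selection logic, not the installer's exact code.
PYTHON_CMD="$(command -v python3 || command -v python)"

# Running pip as a module ties the invocation to $PYTHON_CMD's environment,
# avoiding a stale or mismatched pip executable elsewhere on PATH.
"$PYTHON_CMD" -m pip --version
```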
### Fixed

- Virtual environment activation now works correctly on Windows Git Bash
## [0.1.1] - 2025-01-13

### Fixed

- Module 03 (Layers): Removed premature `requires_grad` from `Linear` layer initialization
  - Aligns with progressive disclosure model where `requires_grad` is introduced in Module 06
  - Fixes issue where students running modules in sequence encountered undefined parameters
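The progressive-disclosure fix can be illustrated with a hypothetical sketch. The class and attribute names mirror the changelog, but this is not TinyTorch's actual implementation: the idea is that the Module 03 `Linear` layer stores plain NumPy arrays, and gradient tracking is only layered on once autograd arrives in Module 06.

```python
import numpy as np

class Linear:
    """Hypothetical Module-03-style layer: plain NumPy parameters,
    with no requires_grad flag until autograd is introduced later."""

    def __init__(self, in_features, out_features):
        # Small random weights and a zero bias; no gradient bookkeeping here.
        self.weight = np.random.randn(in_features, out_features) * 0.01
        self.bias = np.zeros(out_features)

    def forward(self, x):
        # Affine transform: x @ W + b
        return x @ self.weight + self.bias

layer = Linear(4, 2)
out = layer.forward(np.ones((3, 4)))
print(out.shape)  # (3, 2)
```

Because the parameters are ordinary arrays, students working through the modules in order never touch an attribute that has not been defined yet.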
### Added

- `tinydigits` dataset: 8x8 handwritten digits for educational CNN training
- `tinytalks` dataset: Q&A pairs for transformer training examples
## [0.1.0] - 2024-12-12

### Added

- Initial public release of TinyTorch
- 20 progressive modules covering ML fundamentals to advanced topics
- `tito` CLI for guided learning experience
- Milestone projects demonstrating historical ML breakthroughs
- Comprehensive test suite
- Jupyter Book documentation site
### Modules
- 01: Tensor (NumPy wrapper with ML semantics)
- 02: Activations (Sigmoid, ReLU, Tanh, GELU, Softmax)
- 03: Layers (Linear, Dropout)
- 04: Losses (MSE, CrossEntropy)
- 05: DataLoader (batching, shuffling)
- 06: Autograd (automatic differentiation)
- 07: Optimizers (SGD, Adam)
- 08: Training (training loops)
- 09: Convolutions (Conv2D, pooling)
- 10: Tokenization (BPE, character level)
- 11: Embeddings (word embeddings)
- 12: Attention (self attention, multi head)
- 13: Transformers (encoder, decoder)
- 14: Profiling (timing, memory)
- 15: Quantization (INT8, dynamic)
- 16: Compression (pruning, distillation)
- 17: Acceleration (SIMD, parallelism)
- 18: Memoization (caching, checkpointing)
- 19: Benchmarking (MLPerf style)
- 20: Capstone (integration project)