
# Changelog

All notable changes to TinyTorch will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [Unreleased]

## [0.1.5] - 2026-01-27

### Added

- Windows/Git Bash support for the installer script (thanks @rnjema, @joeswagson, @Kobra299!)
- Platform detection (`get_platform()`) for OS-specific guidance
- Cross-platform line endings via `.gitattributes`
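
The idea behind the platform detection can be sketched in Python (for illustration only; the installer itself is a shell script, and its actual `get_platform()` logic may differ):

```python
import os
import platform

def get_platform() -> str:
    """Return a coarse platform tag for OS-specific guidance.

    Illustrative sketch; the installer's real get_platform() may
    distinguish more cases.
    """
    # Git Bash / MSYS2 shells export MSYSTEM (e.g. "MINGW64"),
    # and CPython on Windows reports platform.system() == "Windows".
    if platform.system() == "Windows" or os.environ.get("MSYSTEM"):
        return "windows"
    if platform.system() == "Darwin":
        return "macos"
    return "linux"
```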

### Changed

- Installer invokes pip as `$PYTHON_CMD -m pip`, which reliably targets the selected interpreter
- Dynamic `tito --version` command that reports the current TinyTorch version

### Fixed

- Virtual environment activation now works correctly on Windows Git Bash
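
The underlying difference is that Windows virtual environments place their scripts under `Scripts/` rather than `bin/`, so Git Bash on Windows must source the `Scripts/` copy of `activate`. A minimal Python sketch of the idea (the installer handles this in shell; the helper name here is hypothetical):

```python
import sys
from pathlib import Path

def venv_activate_script(venv_dir: str) -> Path:
    """Locate a venv's activation script (hypothetical helper).

    Windows venvs use Scripts/, POSIX venvs use bin/.
    """
    subdir = "Scripts" if sys.platform == "win32" else "bin"
    return Path(venv_dir) / subdir / "activate"
```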

## [0.1.1] - 2025-01-13

### Fixed

- Module 03 (Layers): removed premature `requires_grad` from `Linear` layer initialization
  - Aligns with the progressive disclosure model, in which `requires_grad` is introduced in Module 06
  - Fixes an issue where students running the modules in sequence encountered undefined parameters
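
In code terms, the fixed layer keeps its parameters as plain NumPy arrays, with no gradient bookkeeping until autograd arrives in Module 06. A hypothetical sketch (not TinyTorch's actual implementation):

```python
import numpy as np

class Linear:
    """Minimal linear layer sketch: parameters are plain NumPy arrays.

    No requires_grad is set at init; gradient tracking is only
    introduced later, in Module 06 (Autograd).
    """
    def __init__(self, in_features: int, out_features: int):
        self.weight = np.random.randn(out_features, in_features) * 0.01
        self.bias = np.zeros(out_features)

    def forward(self, x: np.ndarray) -> np.ndarray:
        # y = x W^T + b
        return x @ self.weight.T + self.bias
```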

### Added

- `tinydigits` dataset: 8x8 handwritten digits for educational CNN training
- `tinytalks` dataset: Q&A pairs for transformer training examples

## [0.1.0] - 2024-12-12

### Added

- Initial public release of TinyTorch
- 20 progressive modules covering ML fundamentals through advanced topics
- `tito` CLI for a guided learning experience
- Milestone projects demonstrating historical ML breakthroughs
- Comprehensive test suite
- Jupyter Book documentation site

### Modules

- 01: Tensor (NumPy wrapper with ML semantics)
- 02: Activations (Sigmoid, ReLU, Tanh, GELU, Softmax)
- 03: Layers (Linear, Dropout)
- 04: Losses (MSE, CrossEntropy)
- 05: DataLoader (batching, shuffling)
- 06: Autograd (automatic differentiation)
- 07: Optimizers (SGD, Adam)
- 08: Training (training loops)
- 09: Convolutions (Conv2D, pooling)
- 10: Tokenization (BPE, character-level)
- 11: Embeddings (word embeddings)
- 12: Attention (self-attention, multi-head)
- 13: Transformers (encoder, decoder)
- 14: Profiling (timing, memory)
- 15: Quantization (INT8, dynamic)
- 16: Compression (pruning, distillation)
- 17: Acceleration (SIMD, parallelism)
- 18: Memoization (caching, checkpointing)
- 19: Benchmarking (MLPerf-style)
- 20: Capstone (integration project)