Commit Graph

9511 Commits

Author | SHA1 | Message | Date
Vijay Janapa Reddi
261578bfa2 Merge feature/tinytorch-core: fix TITO reference docs 2026-02-16 14:30:54 -05:00
Vijay Janapa Reddi
9f2cce16d2 fix(docs): update TITO reference docs to match actual CLI commands
- Fix setup flow across all docs (activate.sh → source .venv/bin/activate,
  setup-environment.sh → tito setup)
- Add missing commands to overview (tito setup, system update/reset,
  module list/view/test, milestone test/demo, nbgrader commands)
- Add module list, view, and test sections to modules.md
- Fix phantom AlexNet milestone → XOR Crisis in data.md
- Fix Module 10 name (Normalization → Tokenization) in data.md
- Fix milestones.md prereq check output (missing Module 08)
- Fix troubleshooting.md paths, permission fix, and quick-reference table
2026-02-16 14:30:44 -05:00
Vijay Janapa Reddi
c2ed4a92a9 Merge feature/tinytorch-core: add navbar version badge with CI auto-update 2026-02-16 12:43:28 -05:00
Vijay Janapa Reddi
53066bf5af feat(site): add version badge to navbar with auto-update on release
Display version number and release date (e.g., "v0.1.8 · Feb 7, 2026")
next to the "Under Construction" badge in the top nav bar. Version and
date are declared as top-of-file constants in wip-banner.js for easy
CI sed updates. Publish workflow now bumps 6 files instead of 5.
2026-02-16 12:43:09 -05:00
Vijay Janapa Reddi
09de699545 fix(exports): add missing #| export directives across 10 modules
Systematic audit of all 20 modules against module-developer agent rules
found 9 standalone helper functions missing #| export — these are called
by exported code at runtime but were excluded from the generated package,
causing NameError/AttributeError in CI.

Modules fixed:
- 05_dataloader: _pad_image, _random_crop_region (used by RandomCrop)
- 06_autograd: _stable_softmax, _one_hot_encode (prior session)
- 07_optimizers: 5 mixin classes + monkey-patches (prior session)
- 08_training: 7 monkey-patched Trainer methods (prior session)
- 10_tokenization: _count_byte_pairs, _merge_pair (used by BPETokenizer)
- 11_embeddings: _compute_sinusoidal_table (prior session)
- 12_attention: _compute_attention_scores, _scale_scores, _apply_mask (prior)
- 15_quantization: _collect_layer_inputs, _quantize_single_layer (used by quantize_model)
- 18_memoization: _cached_generation_step, _create_cache_storage, _cached_attention_forward (used by enable_kv_cache)
- 19_benchmarking: rename TinyMLPerf→MLPerf, fix monkey-patch naming (prior)

Also includes: vscode-ext icon refactor (ThemeIcon migration).

All 789 tests pass (unit, integration, e2e, CLI).
2026-02-15 17:38:03 -05:00
Vijay Janapa Reddi
a6bd9496e3 style(vscode-ext): increase indentation for action tree rows
Offset action labels in the sidebar tree so child commands are visually nested more clearly under each category across all views.
2026-02-15 14:12:03 -05:00
Vijay Janapa Reddi
58a1391431 Merge remote-tracking branch 'origin/dev' into feature/tinytorch-core 2026-02-15 14:09:07 -05:00
Vijay Janapa Reddi
d33e61b8cc chore(vscode-ext): ignore generated extension artifacts
Ignore extension build outputs and packaged artifacts so the branch stays clean after local development and commit hooks only evaluate source files.
2026-02-15 14:02:41 -05:00
Vijay Janapa Reddi
910240a3f6 chore(tinytorch): add VS Code extension and sync module updates
Add the TinyTorch VS Code extension source package and align module code/docs references so APIs, milestones, and progression notes remain consistent across the curriculum.
2026-02-15 14:02:09 -05:00
Vijay Janapa Reddi
fd7cab5c75 refactor(modules): decompose oversized solution blocks for pedagogical consistency
Break down large BEGIN/END SOLUTION blocks (45+ lines) into smaller,
unit-testable pieces following the one-concept-per-block principle:

- Module 08: train_epoch (58 lines) -> _process_batch + _optimizer_update + train_epoch
- Module 09: BatchNorm2d.forward (70 lines) -> _validate_input + _get_stats + forward
- Module 11: Embedding (62 lines) -> __init__ + forward; PositionalEncoding (82 lines) -> __init__ + forward
- Module 19: generate_report (108 lines), generate_compliance_report (86 lines),
  _run_accuracy_test (52 lines) each decomposed into helper + composition

Each new helper has its own unit test, all test_module() functions updated,
naming conventions enforced (_prefix for private monkey-patch targets).
2026-02-15 10:37:39 -05:00
Vijay Janapa Reddi
9c1ca3e441 style(tinytorch): standardize formatting and conventions across all 20 modules
Audit and fix consistency issues across all module source files:

- Standardize ML Systems header to "ML Systems Reflection Questions" (01, 13)
- Fix section ordering: test_module before ML Systems in modules 16, 17
- Rename demo_spatial() to demo_convolutions() to match module name (09)
- Rename demo_*_with_profiler() to explore_*_with_profiler() (15, 16, 17)
- Fix test naming to use test_unit_* prefix consistently (03, 05, 11, 12)
- Add missing emojis in test_module/demo patterns (02, 15)
- Standardize tito command format to number-only (01, 03, 06, 07, 18)
- Fix implementation headers: hyphen to colon separator (09, 12)
- Add missing "Where This Code Lives" package section (13)
- Fix export command in module summary (05, 06)
2026-02-15 09:37:25 -05:00
Vijay Janapa Reddi
81cdbba67b refactor(tinytorch): function decomposition, naming conventions, and progressive disclosure
Three categories of changes across 17 modules:

1. Function decomposition (Modules 01,03,05-15,18-19): Break large
   monolithic functions into focused _helper + orchestrator pattern.
   Each helper teaches one concept with its own unit test.

2. Naming convention fixes (Modules 08,09,11,18,19): Ensure underscore
   convention is consistent — standalone _func in export cells renamed
   to func (public API), monkey-patched method names match target
   visibility, removed unnecessary #| export from internal helpers.

3. Progressive disclosure (Modules 02-05,08,11-15): Remove forward
   references to future modules. Replace "you'll learn in Module N"
   with concrete descriptions. Trim connection maps to only show
   current and prior modules. Keep end-of-module "Next" teasers
   as motivational breadcrumbs.

All 17 modified modules pass their test suites.
2026-02-14 16:52:15 -05:00
Vijay Janapa Reddi
b03c32b67a docs(paper): add intra-module scaffolding subsection to progressive disclosure
Add new subsection describing function decomposition pattern used within
modules. Documents how complex operations (attention, convolution, training)
are split into focused helper functions with individual unit tests before
composition into exported functions. Updates pedagogical justification to
cover both inter-module and intra-module progressive disclosure.
2026-02-14 16:51:57 -05:00
Vijay Janapa Reddi
d74a485cfc Merge pull request #1172 from harvard-edge/fix-google-login-iframe
Fix Google auth to allow iframe embedding; address slow index.html
2026-02-14 15:39:52 -05:00
github-actions[bot]
170dcfb3db docs: add @harishb00a as tinytorch contributor for doc 2026-02-14 14:59:20 +00:00
Vijay Janapa Reddi
f652692f58 fix(tinytorch): fix broken paths and references in CONTRIBUTING.md and INSTRUCTOR.md
- Fix clone path: cd TinyTorch → cd cs249r_book/tinytorch
- Remove all CLAUDE.md references (internal AI config, not contributor-facing)
- Fix docs/INSTRUCTOR_GUIDE.md → INSTRUCTOR.md (actual file location)
- Remove phantom docs/development/ references (directory doesn't exist)
- Fix *_dev.py → src/NN_name/NN_name.py (actual source file convention)
- Fix *_dev.ipynb → modules/NN_name/name.ipynb (actual notebook convention)
- Fix test commands to use pytest and actual test directory structure
- Replace nonexistent examples/ references with milestones/ scripts

Closes #1173
2026-02-14 09:54:11 -05:00
Vijay Janapa Reddi
99b0eb1387 fix(tinytorch): correct INT8 zero-point values in Module 15 quantization docs
Documentation examples were computed using UINT8 (0-255) zero-point
formula but the code implements signed INT8 (-128 to 127). Fixed all
hardcoded diagram values and docstring examples to match the actual
code output. The code logic was always correct; only the documentation
numbers were wrong.

Fixes: zero-point 88 -> -39, 64 -> -64, 42 -> -43
Fixes: quantized result [-128, 12, 127] -> [-128, -27, 127]
Fixes: dequantize docstring example with correct parameters
Ref: https://github.com/harvard-edge/cs249r_book/issues/1150
2026-02-13 17:06:29 -05:00
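A minimal sketch of the signed-INT8 affine quantization formulas behind the corrected numbers (standard textbook formulation; the module's exact code may differ):

```python
def zero_point(min_val, max_val, qmin=-128, qmax=127):
    """Affine-quantization zero point for a signed INT8 range.
    Passing qmin=0, qmax=255 instead yields the UINT8 value the old
    docs were incorrectly showing."""
    scale = (max_val - min_val) / (qmax - qmin)
    zp = round(qmin - min_val / scale)
    return max(qmin, min(qmax, zp))          # clamp into the int8 range

def quantize(x, scale, zp, qmin=-128, qmax=127):
    """Map a float to its signed INT8 code: q = round(x/scale) + zp."""
    q = round(x / scale) + zp
    return max(qmin, min(qmax, q))
```

For example, a tensor spanning [-1.0, 3.0] gets scale 4/255 and zero point -64, and its endpoints quantize to exactly -128 and 127.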
kai
ed39f89d5a fix Google auth to allow iframe embedding; address slow index.html 2026-02-13 16:13:53 -05:00
Vijay Janapa Reddi
a9c2ba0180 fix(tinytorch): enforce progressive disclosure and move EmbeddingBackward to Module 11
Audit all 20 modules for progressive disclosure violations and fix ~50 issues:

- Module 01: Replace "neural network" framing with "linear transformation" in
  ASCII tables, docstrings, test names, and reflection questions
- Modules 02-04: Remove gradient/neural-network terminology before M06 teaches it
- Module 06: Remove EmbeddingBackward (moved to Module 11 where embeddings are taught)
- Module 11: Add EmbeddingBackward with pedagogical gather/scatter ASCII diagram,
  remove runtime import from autograd, fix "Attention-Ready" forward references
- Modules 12-13: Replace FlashAttention references with generic efficiency language
- Module 17: Fix profiler module number (15 → 14)
- Module 19: Remove Module 20 forward dependency from OlympicEvent
- Module 20: Fix pipeline diagram module numbering and demo ordering

Zero changes to executable logic — all edits target docstrings, comments,
ASCII art, class placement, and test descriptions.
2026-02-13 13:15:24 -05:00
Vijay Janapa Reddi
af2214eede Merge pull request #1171 from harvard-edge/feature/tinytorch-core
TinyTorch: progressive disclosure + Windows install cleanup
2026-02-13 12:54:24 -05:00
Vijay Janapa Reddi
173f28f88d fix(tinytorch): clean up Windows install fix comments from PR #1169
Polish the contributor's Windows fix with proper comments explaining
the Microsoft Store alias issue and WinError 32 file lock. Move
is_windows check closer to usage site for clarity.
2026-02-13 11:39:28 -05:00
Vijay Janapa Reddi
a26ee9fff1 Merge dev into feature/tinytorch-core (includes PR #1169 Windows fix) 2026-02-13 11:38:52 -05:00
Vijay Janapa Reddi
8d8ff38399 Merge pull request #1169 from adil-mubashir-ch/fix/windows-install-issues
Merging Windows install fixes from first-time contributor @adil-mubashir-ch. Follow-up cleanup patch incoming.
2026-02-13 11:38:25 -05:00
github-actions[bot]
8dd73eebd6 Update contributors list [skip ci] 2026-02-13 16:28:11 +00:00
Kristian Radoš
09f4d0a71e Fix typo in SocratiQ introduction (#1170) 2026-02-13 11:22:39 -05:00
Vijay Janapa Reddi
3947b2defa fix(tinytorch): enforce progressive disclosure across 9 modules
Audit found docstrings/comments revealing concepts from later modules.
All edits are docstring/comment-only — no code, imports, or tests changed.

Module 01: Replace neural network terminology with generic math examples
Module 02: Remove gradient flow references, reframe layer terminology
Module 05: Remove optimizer/backward from pipeline diagrams
Module 06: Replace transformer/embedding references with general patterns
Module 07: Replace embedding/transformer terminology with generic terms
Module 10: Replace detailed embedding analysis with brief Module 11 teaser
Module 13: Fix swapped dependency numbers, trim KV-cache explanation
Module 14: Remove quantization/compression references from docstrings
Module 17: Fix factually wrong Module 18 teaser description

231 tests pass across all modified modules.
2026-02-13 10:00:50 -05:00
Adil Mubashir Chaudhry
f2975daa67 Fix Windows install issues
- Prefer python over python3 in Git Bash to avoid Microsoft Store alias and incorrect venv paths
- Skip TinyTorch self-reinstall on Windows if already installed (prevents WinError 32 file lock)
2026-02-12 14:47:35 +05:00
Vijay Janapa Reddi
0630674a71 fix(module16): correct sparsity percentage bugs in compression module
- Fix incorrect percentile claim in pruning ASCII diagram (rewrote with
  20 values and correct 50th percentile threshold)
- Fix 7000% sparsity display in demo_compression_with_profiler where
  measure_sparsity() returns percentage (0-100) but code treated it as
  fraction (0-1), causing double multiplication

Closes harvard-edge/cs249r_book#1168
2026-02-11 18:55:12 -05:00
Vijay Janapa Reddi
7b43dc5ff5 Merge dev into feature/tinytorch-core 2026-02-10 13:10:48 -05:00
kai
f05bb12cb2 fix iframe login issues 2026-02-10 10:25:11 -05:00
kai
1178d21600 add particle-flow animation as the new timeline opening 2026-02-09 17:46:03 -05:00
kai
e95c9d96c7 fix antialiasing "ant crawl" artifact on the arXiv page 2026-02-09 08:15:18 -05:00
github-actions[bot]
2ac790601e chore(tinytorch): bump version to tinytorch-v0.1.8 2026-02-07 20:55:09 +00:00
github-actions[bot]
b01b83506b docs: add @Takosaga as tinytorch contributor for doc, bug 2026-02-06 12:10:07 +00:00
Dang Truong
af23c13999 fix small typo (#1163) 2026-02-06 02:09:30 -05:00
Vijay Janapa Reddi
956e7277c8 feat(site): auto-generate team page from .all-contributorsrc
Add generate_team.py script that reads .all-contributorsrc and generates
the team.md page automatically. This keeps the website team page in sync
with the README contributors.

- Add tinytorch/site/scripts/generate_team.py
- Update both deploy workflows to run the script before building
- Contributors added via @all-contributors now appear on the website

The script runs during site build, so team.md stays fresh with each deploy.
2026-02-05 20:25:02 -05:00
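A rough sketch of what such a generator does, assuming the standard `.all-contributorsrc` JSON shape (`login`, `name`, `contributions` per entry); the real script's output markup differs:

```python
import json

def render_team_page(contributorsrc_text):
    """Illustrative stand-in for generate_team.py: read the
    .all-contributorsrc JSON and emit a markdown team list."""
    data = json.loads(contributorsrc_text)
    lines = ["# Team", ""]
    for c in data.get("contributors", []):
        kinds = ", ".join(c.get("contributions", []))
        lines.append(f"- [{c['name']}](https://github.com/{c['login']}) ({kinds})")
    return "\n".join(lines)
```

Because the page is regenerated at build time from the same file the bot updates, the two lists cannot drift apart.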
github-actions[bot]
6c33a3e3ab docs: add @oscarf189 as tinytorch contributor for doc 2026-02-06 01:20:45 +00:00
Vijay Janapa Reddi
c1c8c11eec fix(layers): correct Xavier/Glorot initialization terminology
The formula sqrt(1/fan_in) is actually LeCun initialization (1998),
not Xavier/Glorot. True Xavier uses sqrt(2/(fan_in+fan_out)).

- Rename XAVIER_SCALE_FACTOR → INIT_SCALE_FACTOR
- Update all comments to say "LeCun-style initialization"
- Add note explaining difference between LeCun, Xavier, and He init
- Keep the simpler formula for pedagogical clarity

Fixes #1161
2026-02-05 20:11:50 -05:00
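For reference, the three initialization scales mentioned (the LeCun and Xavier formulas are from the commit; the He formula is the standard sqrt(2/fan_in)):

```python
import math

def lecun_scale(fan_in):
    """LeCun (1998): sqrt(1/fan_in). The formula the module uses."""
    return math.sqrt(1.0 / fan_in)

def xavier_scale(fan_in, fan_out):
    """Xavier/Glorot (2010): sqrt(2/(fan_in + fan_out))."""
    return math.sqrt(2.0 / (fan_in + fan_out))

def he_scale(fan_in):
    """He (2015), suited to ReLU networks: sqrt(2/fan_in)."""
    return math.sqrt(2.0 / fan_in)
```

When fan_in equals fan_out, the Xavier formula collapses to the LeCun one, which is exactly why the two are easy to conflate.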
Vijay Janapa Reddi
852bc5c2fc fix(ci): download slide decks from release during deployment
Slides were not loading on the live site because the PDFs exist in a
GitHub Release (tinytorch-slides-v0.1.0) but were never downloaded
during the build process. The .gitignore has *.pdf which prevents
slides from being committed to git.

Add a step to both deployment workflows to download all slide PDFs
from the release and inject them into _static/slides/ before deploy.

Fixes harvard-edge/cs249r_book#1162
2026-02-05 15:43:55 -05:00
kai
0cfebb6f42 update comm site index.html with arXiv 2026-02-04 22:23:11 -05:00
Vijay Janapa Reddi
118df45d41 ci: trigger validation workflow after bibtex tidy fix 2026-02-04 11:59:29 -05:00
github-actions[bot]
1c0fac8aae Update contributors list [skip ci] 2026-02-04 16:42:46 +00:00
Vijay Janapa Reddi
ddca8652ce fix(contributors): merge duplicate Andrea entries in TinyTorch
AndreaMattiaGaravagn (truncated) and AndreaMattiaGaravagno were listed
as separate contributors. Merged into a single entry with the correct
username, avatar, and combined contributions (code + doc).
2026-02-04 11:29:34 -05:00
Vijay Janapa Reddi
299fcc14e1 style(paper): tidy references.bib for pre-commit compliance
Fix line wrapping and add trailing newline to pass bibtex-tidy
pre-commit hook.
2026-02-04 10:44:02 -05:00
github-actions[bot]
ee0c1a75f3 Update contributors list [skip ci] 2026-02-04 15:35:29 +00:00
Vijay Janapa Reddi
d01b1acd4b Merge feature/tinytorch-core: fix Jupyter kernel mismatch (#1147) 2026-02-04 10:28:52 -05:00
Vijay Janapa Reddi
25fc9e4848 fix(tito): resolve Jupyter kernel mismatch causing ModuleNotFoundError (#1147)
Students hit "No module named 'tinytorch.core.tensor'" in notebooks because
the Jupyter kernel used a different Python than where tinytorch was installed.

- setup: install ipykernel + nbdev, register named kernel during tito setup
- health: add Notebook Readiness checks (import, kernel, Python match)
- export: verify exported file exists and has content (fail loudly)
- Windows: add get_venv_bin_dir() helper for cross-platform venv paths
2026-02-04 10:24:37 -05:00
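The "Python match" idea behind the readiness checks can be sketched as follows (hypothetical helper, not tito's actual implementation):

```python
import sys
import importlib.util

def notebook_readiness(package="tinytorch"):
    """Sketch of the health checks described above: can the package be
    imported, and is it installed under the same Python environment the
    current interpreter (i.e. the notebook kernel) is running?"""
    spec = importlib.util.find_spec(package)
    importable = spec is not None and spec.origin is not None
    # If the package lives outside sys.prefix, the kernel is likely
    # pointed at a different Python than the one pip installed into.
    same_python = importable and spec.origin.startswith(sys.prefix)
    return {"importable": importable, "python_match": same_python}
```

A mismatch here is precisely what produces "No module named 'tinytorch.core.tensor'" inside an otherwise healthy checkout.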
Vijay Janapa Reddi
3a2b1bf482 fix(ci): use head_ref for PR branch in fresh install test
On PR events, github.ref_name resolves to the merge ref (e.g.
"1159/merge") which doesn't exist on raw.githubusercontent.com,
causing a 404. Use github.head_ref (the actual source branch)
for PRs, falling back to ref_name for push events.

Also adds -f flag to curl so HTTP errors fail immediately with
a clear message instead of silently saving the 404 HTML page.
2026-02-04 10:06:25 -05:00
Vijay Janapa Reddi
24ab7599c6 fix(paper): escape special LaTeX characters breaking PDF build
Escape unescaped & characters in references.bib (Taylor & Francis,
AI & Machine-Learning) and replace Unicode em-dashes (U+2014) with
LaTeX --- ligatures in paper.tex for T1 font compatibility.
2026-02-04 10:05:32 -05:00
Vijay Janapa Reddi
0be9325fbe fix(workflow): deterministic project detection in all-contributors
LLM now only parses username + contribution types (strict 2-field JSON).
Project detection is fully deterministic from file paths:
  tinytorch/ → tinytorch, book/ → book, kits/ → kits, labs/ → labs

If project cannot be determined, the bot asks the user instead of
silently defaulting to book. Also removes @AndreaMattiaGaravagno
from book/.all-contributorsrc (was incorrectly added there by the
old workflow — their PR only touched tinytorch/ files).
2026-02-04 10:03:20 -05:00
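The deterministic mapping is simple enough to sketch (the prefix table is from the commit; the function name and return contract are illustrative):

```python
# Path-prefix project detection: no LLM involved, fully deterministic.
PROJECT_PREFIXES = {
    "tinytorch/": "tinytorch",
    "book/": "book",
    "kits/": "kits",
    "labs/": "labs",
}

def detect_project(changed_files):
    """Return the single project a PR touches, or None when the answer
    is ambiguous, in which case the bot asks the user instead of
    silently defaulting to one project."""
    projects = set()
    for path in changed_files:
        for prefix, project in PROJECT_PREFIXES.items():
            if path.startswith(prefix):
                projects.add(project)
    return projects.pop() if len(projects) == 1 else None
```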