[PR #1387] [MERGED] fix(labs): lab_02 Part A scenario alignment + extract tabs-cell widgets (#1332) #5165

opened 2026-04-19 12:51:04 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/harvard-edge/cs249r_book/pull/1387
Author: @profvjreddi
Created: 4/17/2026
Status: Merged
Merged: 4/17/2026
Merged by: @profvjreddi

Base: `dev` ← Head: `fix/labs-1332-polish`


📝 Commits (1)

  • 3690a6e fix(labs): lab_02 part A option alignment + extract tabs-cell widgets (#1332)

📊 Changes

17 files changed (+205 additions, -91 deletions)

View changed files

📝 labs/vol1/lab_02_ml_systems.py (+25 -19)
📝 labs/vol1/lab_04_data_engr.py (+12 -5)
📝 labs/vol1/lab_05_nn_compute.py (+13 -6)
📝 labs/vol1/lab_06_nn_arch.py (+10 -4)
📝 labs/vol1/lab_07_ml_frameworks.py (+12 -5)
📝 labs/vol1/lab_08_model_train.py (+12 -6)
📝 labs/vol1/lab_09_data_selection.py (+11 -5)
📝 labs/vol1/lab_10_model_compress.py (+8 -1)
📝 labs/vol1/lab_11_hw_accel.py (+10 -3)
📝 labs/vol1/lab_12_perf_bench.py (+9 -2)
📝 labs/vol1/lab_13_model_serving.py (+13 -6)
📝 labs/vol1/lab_14_ml_ops.py (+12 -5)
📝 labs/vol1/lab_15_responsible_engr.py (+12 -5)
📝 labs/vol1/lab_16_ml_conclusion.py (+14 -7)
📝 labs/vol2/lab_02_compute_infra.py (+10 -4)
📝 labs/vol2/lab_03_communication.py (+11 -4)
📝 labs/vol2/lab_04_data_storage.py (+11 -4)

📄 Description

what

two follow-ups to the #1332 sweep, now closing out everything concrete in Peter's report.

1. lab_02 Part A options now match the hardware registry

the scenario claimed "6x compute increase" and marked option D (<1.1x speedup) as correct. the actual `mlsysim.Hardware` numbers:

| | A100 | H100 | ratio |
|---|---|---|---|
| compute (TFLOPS) | 312 | 989 | **3.17x** |
| memory BW (GB/s) | 2039 | 3350 | **1.64x** |

at AI=5 both GPUs are deeply memory-bound, so the speedup collapses to the BW ratio (1.64x, not <1.1x). peter observed exactly this in #1332: "Latency improvement value / speedup calculations are a bit mismatched. Selecting 1.1x above leads to a correct 1.64 result below."
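the collapse to the BW ratio falls straight out of the roofline model (attainable throughput = min(compute roof, AI × BW)). a quick sketch with the registry numbers above, assuming the plain roofline formula rather than the actual `mlsysim` internals:

```python
# Roofline sketch using the A100/H100 numbers from the table above.
# Assumes the textbook roofline formula, not the real mlsysim.Hardware API.

def attainable_tflops(peak_tflops: float, bw_tbps: float, ai: float) -> float:
    """Attainable throughput = min(compute roof, memory roof = AI * BW)."""
    return min(peak_tflops, ai * bw_tbps)

A100 = dict(peak_tflops=312.0, bw_tbps=2.039)  # 2039 GB/s
H100 = dict(peak_tflops=989.0, bw_tbps=3.350)  # 3350 GB/s

ai = 5.0  # FLOP/byte: well inside the memory-bound region for both GPUs
speedup = attainable_tflops(**H100, ai=ai) / attainable_tflops(**A100, ai=ai)
print(f"{speedup:.2f}x")  # ~1.64x: the BW ratio, despite the ~3.17x compute gap
```

so option C (~1.6x) is the only answer consistent with the registry, and <1.1x would require a near-zero BW gap.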

aligned to reality:

  • scenario: "6x compute increase" -> "3x compute increase"
  • option A: "~6x" -> "~3x (proportional to compute increase)"
  • option C: "~1.5x (modest improvement)" -> "~1.6x (approximately the BW ratio)" [CORRECT]
  • option D: kept as "<1.1x" for the "compute is totally wasted" distractor
  • feedback callout, ledger `memory_wall_correct` check, key-takeaway text, roofline "ideal Nx" annotation all updated
  • pedagogy preserved: "compute upgrade is wasted for memory-bound workloads", now with numbers that actually compute

2. extracted 42 widgets from tabs cells into their own widget cells

every `mo.ui.*` widget still defined inside a tabs cell body got moved into a new `@app.cell` immediately before it, with the widget name added to the tabs cell signature. 16 labs, 42 widgets. this is what @asgalon asked for in lab_02 ("move partD_data_size and partD_wireless one cell up"), extended to every lab that had the same shape.

done via a mechanical codemod (lived at `/tmp/extract_tabs_widgets.py`, not committed - single-shot utility).
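the codemod itself wasn't committed, but the detection half is straightforward with the stdlib `ast` module. a minimal sketch of the kind of matching involved (names and heuristics are illustrative assumptions, not the actual /tmp script):

```python
# Sketch of the detection half of such a codemod: find names bound to
# mo.ui.* calls (other than the tabs container itself) in a lab's source.
# The real /tmp/extract_tabs_widgets.py was not committed; this is illustrative.
import ast

def find_unextracted_widgets(source: str) -> list[str]:
    """Return names assigned from mo.ui.<widget>(...) calls, skipping mo.ui.tabs."""
    names = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign) and isinstance(node.value, ast.Call):
            func = node.value.func
            # match the mo.ui.<widget> attribute chain
            if (isinstance(func, ast.Attribute)
                    and func.attr != "tabs"              # the container stays put
                    and isinstance(func.value, ast.Attribute)
                    and func.value.attr == "ui"
                    and isinstance(func.value.value, ast.Name)
                    and func.value.value.id == "mo"):
                names.extend(t.id for t in node.targets if isinstance(t, ast.Name))
    return names

cell = '''
def tabs_cell(mo):
    partD_data_size = mo.ui.slider(1, 100)
    tabs = mo.ui.tabs({"Part D": partD_data_size})
    return (tabs,)
'''
print(find_unextracted_widgets(cell))  # ['partD_data_size']
```

the rewrite half then lifts each flagged assignment into a fresh `@app.cell` above the tabs cell and threads the name through the tabs cell's parameters.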

audit results

| stage | offending labs | unreturned widgets |
|---|---|---|
| before #1386 | 17 | 175 |
| after #1386 | 0 real (16 "tabs-cell internal") | 42 |
| this PR | 0 | 0 |

test plan

  • labs/tests/test_static.py + test_engine.py: 825 passed 4 skipped 1 xfailed
  • marimo check on lab_05, lab_10, lab_16, vol2/lab_03: exit 0
  • audit script: 0 offending labs
  • CI green on this PR

addresses remaining #1332 items. the browser smoke from #1374 will continue to guard against regressions.


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-19 12:51:04 -05:00

Reference: github-starred/cs249r_book#5165