[PR #1551] [MERGED] fix(book): widen quantization-impact prose range to cover ResNet's 4.3× #8259

Closed
opened 2026-04-27 17:37:06 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/harvard-edge/cs249r_book/pull/1551
Author: @profvjreddi
Created: 4/26/2026
Status: Merged
Merged: 4/26/2026
Merged by: @profvjreddi

Base: dev ← Head: fix/quantization-prose-range


📝 Commits (1)

  • 01b37dc fix(book): widen quantization-impact prose range to cover ResNet's 4.3×

📊 Changes

1 file changed (+1 addition, -1 deletion)


📝 book/quarto/contents/vol1/optimizations/model_compression.qmd (+1 -1)

📄 Description

Follow-up to #1318. The figure data on dev shows ResNet_v2 inference reduction at 4.3×, which exceeds the body-prose claim of "1.5--4×". The other ratios (Inception 1.6×, model-size pair 1.85--1.9×) are comfortably inside the original range.

Widening the upper bound from 4× → 5× keeps the same narrative (Inception/ResNet see typical few-fold gains, MobileNet sees order-of-magnitude) while making the numeric claim honest against the figure.

Test plan

  • Recompute ratios from the TikZ data; ResNet inference is the only number that requires the widened range
  • Caption text already uses qualitative "modest gains" so does not require a parallel edit
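The range check in the test plan can be sketched as a small script. The ratio values below come from the PR description; the helper name `out_of_range` and the dictionary labels are illustrative, not from the book source.

```python
# Speedup ratios reported in the PR description (from the figure's TikZ data).
ratios = {
    "Inception inference": 1.6,
    "ResNet_v2 inference": 4.3,
    "model size (low)": 1.85,
    "model size (high)": 1.9,
}

def out_of_range(values, lo, hi):
    """Return the entries that fall outside the claimed [lo, hi] range."""
    return {name: r for name, r in values.items() if not (lo <= r <= hi)}

# Original prose claim "1.5--4x": ResNet's 4.3x is the lone violation.
print(out_of_range(ratios, 1.5, 4.0))  # {'ResNet_v2 inference': 4.3}

# Widened claim "1.5--5x": every ratio fits.
print(out_of_range(ratios, 1.5, 5.0))  # {}
```

This confirms the description's claim that ResNet inference is the only number requiring the widened upper bound.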

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.


Reference: github-starred/cs249r_book#8259