Clarifies notation for units and precision

Refines the notation section to explicitly state the use
of decimal SI prefixes for data, memory, and compute.

Updates wording for clarity and consistency, specifically
addressing units, storage, and compute contexts.

Ensures that the book uses only decimal SI prefixes and
specifies the formatting of numbers and units.
This commit is contained in:
Vijay Janapa Reddi
2026-02-16 10:39:14 -05:00
parent dcc8afb4cc
commit 67b4d48470


@@ -86,9 +86,9 @@ We follow standard deep learning conventions (@goodfellow2016deep) with explicit
## Units and Precision {#sec-notation-conventions-units-precision-fdaf}
-* **Physical Units**: This book uses SI (metric) units throughout---meters, kilograms, seconds, watts, °C---consistent with standard engineering and scientific practice. Where source data was originally reported in imperial units, we convert to SI and note the original values parenthetically.
-* **Storage**: We use decimal prefixes (1 KB = 1000 bytes, 1 GB = $10^9$ bytes) throughout for consistency with ML literature. In memory contexts, industry convention uses these same symbols for binary values ($2^{10}$, $2^{30}$); the difference is negligible for our purposes.
-* **Compute**: We use decimal prefixes for operations.
+* **Physical Units**: This book uses SI (metric) units throughout---meters, kilograms, seconds, watts, °C---consistent with standard engineering and scientific practice. Where source data was originally reported in imperial units, we convert to SI and note the original values parenthetically. A space always separates the number from the unit (e.g., 100 ms, 2 TB/s).
+* **Data and memory**: We use **decimal SI prefixes only**: KB = $10^3$ bytes, MB = $10^6$, GB = $10^9$, TB = $10^{12}$. We do not use binary units (KiB, MiB, GiB) in prose; all capacities, throughputs, and model sizes are reported in decimal units (e.g., 80 GB, 2 TB/s, 102 MB).
+* **Compute**: We use decimal prefixes for operations (e.g., GFLOPs, TFLOPs).
+  * 1 TFLOP = $10^{12}$ FLOPs
* **Precision**:
* **FP32**: Single precision (4 bytes)
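The conventions in the revised section can be illustrated with a short sketch. The helper name `format_bytes_decimal` is hypothetical (not part of the book); it simply formats a byte count using the decimal SI prefixes the section prescribes, with a space between number and unit, and contrasts a decimal GB with a binary GiB to show the gap the convention avoids:

```python
import struct

def format_bytes_decimal(n):
    """Format a byte count using decimal SI prefixes (1 GB = 10**9 bytes)."""
    for prefix, power in (("TB", 12), ("GB", 9), ("MB", 6), ("KB", 3)):
        if n >= 10**power:
            # A space always separates the number from the unit.
            return f"{n / 10**power:g} {prefix}"
    return f"{n} bytes"

capacity = 80 * 10**9  # e.g., an accelerator with 80 GB of memory
print(format_bytes_decimal(capacity))        # decimal convention: 80 GB
print(f"{capacity / 2**30:.1f} GiB")         # same bytes in binary units: ~74.5 GiB

# Precision sizes: FP32 (single precision) occupies 4 bytes.
print(struct.calcsize("=f"))                 # 4
```

The roughly 7% difference between 80 GB and ~74.5 GiB is why the book fixes one convention rather than mixing the two.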