mirror of
https://github.com/harvard-edge/cs249r_book.git
synced 2026-03-11 17:49:25 -05:00
Clarifies notation for units and precision
Refines the notation section to explicitly state the use of decimal SI prefixes for data, memory, and compute. Updates wording for clarity and consistency, specifically addressing units, storage, and compute contexts. Ensures that the book uses only decimal SI prefixes and specifies the formatting of numbers and units.
@@ -86,9 +86,9 @@ We follow standard deep learning conventions (@goodfellow2016deep) with explicit
 ## Units and Precision {#sec-notation-conventions-units-precision-fdaf}
-* **Physical Units**: This book uses SI (metric) units throughout---meters, kilograms, seconds, watts, °C---consistent with standard engineering and scientific practice. Where source data was originally reported in imperial units, we convert to SI and note the original values parenthetically.
-* **Storage**: We use decimal prefixes (1 KB = 1000 bytes, 1 GB = $10^9$ bytes) throughout for consistency with ML literature. In memory contexts, industry convention uses these same symbols for binary values ($2^{10}$, $2^{30}$); the difference is negligible for our purposes.
-* **Compute**: We use decimal prefixes for operations.
+* **Physical Units**: This book uses SI (metric) units throughout---meters, kilograms, seconds, watts, °C---consistent with standard engineering and scientific practice. Where source data was originally reported in imperial units, we convert to SI and note the original values parenthetically. A space always separates the number from the unit (e.g., 100 ms, 2 TB/s).
+* **Data and memory**: We use **decimal SI prefixes only**: KB = $10^3$ bytes, MB = $10^6$, GB = $10^9$, TB = $10^{12}$. We do not use binary units (KiB, MiB, GiB) in prose; all capacities, throughputs, and model sizes are reported in decimal units (e.g., 80 GB, 2 TB/s, 102 MB).
+* **Compute**: We use decimal prefixes for operations (e.g., GFLOPs, TFLOPs).
+  * 1 TFLOP = $10^{12}$ FLOPs
+* **Precision**:
+  * **FP32**: Single precision (4 bytes)
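As an illustrative aside (not part of the commit itself), the sketch below shows why the convention matters: the gap between a decimal prefix and its binary (IEC) counterpart grows with scale, from about 2.4% at the kilobyte level to roughly 10% at the terabyte level. The helper name `to_bytes` and the lookup tables are hypothetical, chosen only for this example.

```python
# Decimal SI prefixes, which the section standardizes on for data,
# memory, and compute, versus binary (IEC) prefixes sometimes used
# in memory contexts. These dicts are illustrative helpers.
DECIMAL = {"KB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}
BINARY = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}

def to_bytes(value: float, unit: str) -> float:
    """Convert a quantity like (80, "GB") to a byte count."""
    return value * {**DECIMAL, **BINARY}[unit]

# The relative gap between binary and decimal grows with scale.
for dec, binu in [("KB", "KiB"), ("GB", "GiB"), ("TB", "TiB")]:
    gap = BINARY[binu] / DECIMAL[dec] - 1
    print(f"{binu} exceeds {dec} by {gap:.1%}")
```

Under the book's convention, an "80 GB" accelerator holds exactly 80 × 10⁹ bytes; a vendor quoting GiB would mean about 7.4% more.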