mirror of
https://github.com/harvard-edge/cs249r_book.git
synced 2026-03-11 17:49:25 -05:00
footnotes: trim fn-mlperf (benchmarking) — remove three-suite recap that duplicated body prose
@@ -342,7 +342,7 @@ DAWNBench [@coleman2017dawnbench] emerged as an early ML benchmark that pioneere
 \index{Patterson, David!MLPerf leadership}
 \index{MLCommons!benchmark organization}
 
-[^fn-mlperf]: **MLPerf**: Founded in 2018 by researchers from Google, NVIDIA, Intel, Harvard, Stanford, and UC Berkeley, the name combines "ML" with "Perf" (performance), echoing SPEC's benchmarking tradition. MLPerf's design principles -- representative workloads, full-system measurement, and open submission -- directly address the gaming that plagued Whetstone and LINPACK. Its three suites (Training, Inference, Power) force vendors to report end-to-end numbers rather than cherry-picked kernel throughput. \index{MLPerf!founding}
+[^fn-mlperf]: **MLPerf**: Founded in 2018 by researchers from Google, NVIDIA, Intel, Harvard, Stanford, and UC Berkeley, the name combines "ML" with "Perf" (performance), echoing SPEC's benchmarking tradition. MLPerf's design principles -- representative workloads, full-system measurement, and open submission -- directly address the gaming that plagued Whetstone and LINPACK: vendors who could previously report peak kernel throughput on cherry-picked problem sizes must now report end-to-end system performance on standardized tasks. \index{MLPerf!founding}
 
 ### Energy Benchmarks {#sec-benchmarking-energy-benchmarks-709a}
 