footnotes: trim fn-mlperf (benchmarking) — remove three-suite recap that duplicated body prose

This commit is contained in:
Vijay Janapa Reddi
2026-02-24 08:52:03 -05:00
parent a4243a6c9a
commit 24cd07b347


@@ -342,7 +342,7 @@ DAWNBench [@coleman2017dawnbench] emerged as an early ML benchmark that pioneere
\index{Patterson, David!MLPerf leadership}
\index{MLCommons!benchmark organization}
-[^fn-mlperf]: **MLPerf**: Founded in 2018 by researchers from Google, NVIDIA, Intel, Harvard, Stanford, and UC Berkeley, the name combines "ML" with "Perf" (performance), echoing SPEC's benchmarking tradition. MLPerf's design principles -- representative workloads, full-system measurement, and open submission -- directly address the gaming that plagued Whetstone and LINPACK. Its three suites (Training, Inference, Power) force vendors to report end-to-end numbers rather than cherry-picked kernel throughput. \index{MLPerf!founding}
+[^fn-mlperf]: **MLPerf**: Founded in 2018 by researchers from Google, NVIDIA, Intel, Harvard, Stanford, and UC Berkeley, the name combines "ML" with "Perf" (performance), echoing SPEC's benchmarking tradition. MLPerf's design principles -- representative workloads, full-system measurement, and open submission -- directly address the gaming that plagued Whetstone and LINPACK: vendors who could previously report peak kernel throughput on cherry-picked problem sizes must now report end-to-end system performance on standardized tasks. \index{MLPerf!founding}
### Energy Benchmarks {#sec-benchmarking-energy-benchmarks-709a}