mirror of
https://github.com/harvard-edge/cs249r_book.git
synced 2026-05-06 01:28:35 -05:00
89 lines
2.9 KiB
Plaintext
# Benchmarking AI

::: {.callout-note collapse="true"}

## Learning Objectives

* Coming soon.

:::

## Introduction

Explanation: Introducing the concept and importance of benchmarking sets the stage for the reader to understand why it is crucial in the evaluation and optimization of AI systems, particularly in resource-constrained embedded environments.

- Importance of benchmarking in AI
- Objectives of benchmarking

## Types of Benchmarks

Explanation: Understanding the different types of benchmarks helps readers tailor their performance evaluation to specific needs, whether they are evaluating low-level operations or entire application performance.

- System benchmarks
  + Micro-benchmarks
  + Macro-benchmarks
  + Application-specific benchmarks
- Data benchmarks

## Benchmarking Metrics

Explanation: Metrics are the yardsticks used to measure performance. This section is vital for understanding what aspects of an AI system's performance are being evaluated, such as accuracy, speed, or resource utilization.

- Accuracy
- Latency
- Throughput
- Power Consumption
- Memory Footprint
- End-to-End Metrics (User vs. System)

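The core latency, throughput, and memory metrics above can be measured with the standard library alone. The sketch below is illustrative: `infer` is a hypothetical stand-in for a real model's inference call, not an API from any particular framework.

```python
import time
import tracemalloc

def infer(batch):
    # Hypothetical stand-in for a real model's inference call.
    return [x * 2 for x in batch]

def measure(batch, runs=100):
    """Measure average latency, throughput, and peak memory of infer()."""
    tracemalloc.start()
    start = time.perf_counter()
    for _ in range(runs):
        infer(batch)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    latency_ms = elapsed / runs * 1000        # average time per batch
    throughput = runs * len(batch) / elapsed  # samples per second
    return latency_ms, throughput, peak

latency_ms, throughput, peak_bytes = measure(list(range(64)))
print(f"latency: {latency_ms:.3f} ms/batch, "
      f"throughput: {throughput:.0f} samples/s, "
      f"peak memory: {peak_bytes} bytes")
```

On real hardware, power consumption would come from an external meter or on-board sensors rather than from software timers like these.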
## Benchmarking Tools

Explanation: Tools are the practical means to carry out benchmarking. Discussing available software and hardware tools equips readers with the resources they need to perform effective benchmarking.

- Software tools
- Hardware tools

## Benchmarking Process

Explanation: Outlining the step-by-step process of benchmarking provides a structured approach for readers, ensuring that they can conduct benchmarks in a systematic and repeatable manner.

- Dataset Limitation/Sources
- Model Selection
- Test Environment Setup
- Running the Benchmarks
- Run Rules

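The running and run-rules steps above can be sketched as a minimal harness: discarded warm-up iterations, a fixed number of measured runs, and a reported median. This is one common convention, not the rule set of any specific benchmark suite, and the workload here is a toy placeholder.

```python
import statistics
import time

def run_benchmark(workload, warmup=5, runs=30):
    """Time `workload` under simple run rules: warm up, then measure each run."""
    for _ in range(warmup):      # warm-up runs are discarded
        workload()
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        timings.append(time.perf_counter() - start)
    return {
        "runs": runs,
        "median_s": statistics.median(timings),
        "stdev_s": statistics.stdev(timings),
    }

result = run_benchmark(lambda: sum(i * i for i in range(10_000)))
print(result)
```

Reporting the median alongside the standard deviation makes run-to-run variability visible instead of hiding it behind a single average.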
## Interpreting Results

Explanation: Benchmarking is only as valuable as the insights gained from it. This section teaches readers how to analyze the collected data, identify bottlenecks, and make meaningful comparisons.

- Analyzing the Data
- Identifying Bottlenecks
- Making Comparisons

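One simple way to identify a bottleneck is to compare per-stage timings and flag the stage that dominates end-to-end time. The stage names and numbers below are made up purely for illustration:

```python
# Hypothetical per-stage timings (in seconds) from a profiled pipeline.
stage_times = {
    "preprocess": 0.004,
    "inference": 0.021,
    "postprocess": 0.002,
}

total = sum(stage_times.values())
bottleneck = max(stage_times, key=stage_times.get)  # slowest stage
share = stage_times[bottleneck] / total             # its share of total time

print(f"bottleneck: {bottleneck} ({share:.0%} of {total * 1000:.1f} ms)")
```

Here inference accounts for most of the end-to-end time, so optimizing the other stages would yield little overall speedup.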
## Optimizing Based on Benchmarks

Explanation: The ultimate goal of benchmarking is to improve system performance. This section guides readers on how to use benchmark data for optimization, making it a critical part of the benchmarking lifecycle.

- Tweaking Parameters
- Hardware Acceleration
- Software Optimization

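Parameter tweaking can be as simple as sweeping one knob and keeping the best-performing setting. The sketch below sweeps batch size for a toy workload; both the batch sizes and the `infer` function are illustrative assumptions, not a real tuning recipe.

```python
import time

def infer(batch):
    # Toy stand-in for a real inference call.
    return [x * 2 for x in batch]

def throughput(batch_size, runs=200):
    """Measure samples/second for a given batch size."""
    batch = list(range(batch_size))
    start = time.perf_counter()
    for _ in range(runs):
        infer(batch)
    elapsed = time.perf_counter() - start
    return runs * batch_size / elapsed

results = {bs: throughput(bs) for bs in (1, 8, 32, 128)}
best = max(results, key=results.get)
print(f"best batch size: {best}")
```

Larger batches typically amortize per-call overhead, but the winner depends on the hardware, which is exactly why it should be measured rather than assumed.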
## Challenges and Limitations

Explanation: Every methodology has its limitations, and benchmarking is no exception. Discussing these challenges helps readers set realistic expectations and interpret results with a critical mindset.

- Variability in Results
- Benchmarking Ethics

## Emerging Trends in Benchmarking

- Data-centric AI
- DataPerf
- DataComp

## Conclusion

Explanation: Summarizing the key takeaways and looking at future trends provides closure to the chapter and gives readers a sense of the evolving landscape of AI benchmarking.

- Summary
- Future Trends in AI Benchmarking