mirror of
https://github.com/harvard-edge/cs249r_book.git
synced 2026-04-29 09:08:54 -05:00
Added a benchmarking chapter
@@ -38,6 +38,7 @@ book:
     - optimizations.qmd
     - frameworks.qmd
     - hw_acceleration.qmd
+    - benchmarking.qmd
     - ondevice_learning.qmd
     - mlops.qmd
     - privacy_security.qmd
73 benchmarking.qmd Normal file
@@ -0,0 +1,73 @@
# Benchmarking AI

## Introduction

Explanation: Introducing the concept and importance of benchmarking sets the stage for the reader to understand why it is crucial in the evaluation and optimization of AI systems, especially in resource-constrained embedded environments.

- Importance of benchmarking in AI
- Objectives of benchmarking
## Types of Benchmarks

Explanation: Understanding the different types of benchmarks helps readers tailor their benchmarking activities to specific needs, whether they are evaluating low-level operations or entire application performance.

- Micro-benchmarks
- Macro-benchmarks
- Application-specific benchmarks
## Benchmarking Metrics

Explanation: Metrics are the yardsticks used to measure performance. This section is vital for understanding what aspects of an AI system's performance are being evaluated, such as accuracy, speed, or resource utilization.

- Accuracy
- Latency
- Throughput
- Power Consumption
- Memory Footprint
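Latency and throughput in particular lend themselves to a small illustrative sketch. The snippet below is a minimal, hypothetical example (the `fake_model` function is a stand-in for a real inference call, not anything prescribed by the chapter) showing how per-inference latency can be timed and converted into throughput:

```python
import time
import statistics

def fake_model(x):
    # Stand-in for a real inference call; the sleep simulates compute time.
    time.sleep(0.001)
    return x

def measure_latency(infer, inputs, warmup=3):
    """Return per-inference latencies in milliseconds."""
    for x in inputs[:warmup]:      # warm-up runs are discarded (cold-start effects)
        infer(x)
    latencies = []
    for x in inputs:
        start = time.perf_counter()
        infer(x)
        latencies.append((time.perf_counter() - start) * 1000.0)
    return latencies

lat = measure_latency(fake_model, list(range(20)))
throughput = 1000.0 / statistics.mean(lat)   # inferences per second
```

Discarding a few warm-up iterations before timing is a common convention, since caches, JIT compilation, and lazy initialization can inflate the first measurements.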
## Benchmarking Tools

Explanation: Tools are the practical means to carry out benchmarking. Discussing available software and hardware tools equips readers with the resources they need to perform effective benchmarking.

- Software tools
- Hardware tools
## Benchmarking Process

Explanation: Outlining the step-by-step process of benchmarking provides a structured approach for readers, ensuring that they can conduct benchmarks in a systematic and repeatable manner.

- Data Preparation
- Model Selection
- Test Environment Setup
- Running the Benchmarks
- Data Collection
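The five steps above can be sketched as a tiny harness. Everything here (the placeholder model, the particular environment fields recorded) is hypothetical scaffolding to show the shape of the process, not a prescribed implementation:

```python
import platform
import time

def prepare_data(n=50):
    # 1. Data Preparation: a toy input set.
    return list(range(n))

def select_model():
    # 2. Model Selection: a trivial function standing in for real inference.
    return lambda x: x * x

def run_benchmark():
    data = prepare_data()
    model = select_model()
    # 3. Test Environment Setup: record the environment so runs are comparable.
    env = {"python": platform.python_version(),
           "machine": platform.machine()}
    # 4. Running the Benchmarks.
    start = time.perf_counter()
    results = [model(x) for x in data]
    elapsed = time.perf_counter() - start
    # 5. Data Collection: bundle everything into one structured record.
    return {"env": env, "n": len(results), "seconds": elapsed}

report = run_benchmark()
```

Recording the environment alongside the timing makes the run reproducible and comparable later, which is the point of the "Test Environment Setup" step.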
## Interpreting Results

Explanation: Benchmarking is only as valuable as the insights gained from it. This section teaches readers how to analyze the collected data, identify bottlenecks, and make meaningful comparisons.

- Analyzing the Data
- Identifying Bottlenecks
- Making Comparisons
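As a rough illustration of "Analyzing the Data" and "Making Comparisons", the sketch below summarizes two hypothetical latency samples with percentiles and computes a median speedup; the numbers are invented for demonstration:

```python
import statistics

def summarize(latencies_ms):
    # Percentiles tell a fuller story than the mean, which outliers can skew.
    s = sorted(latencies_ms)
    return {
        "mean": statistics.mean(s),
        "p50": s[len(s) // 2],
        "p95": s[min(len(s) - 1, int(len(s) * 0.95))],
    }

baseline = [10.0, 11.0, 10.5, 12.0, 30.0]   # hypothetical runs (note the outlier)
optimized = [8.0, 8.5, 9.0, 8.2, 9.5]       # hypothetical runs after a change

speedup = summarize(baseline)["p50"] / summarize(optimized)["p50"]
```

Comparing medians rather than means keeps the single 30 ms outlier in the baseline from exaggerating (or hiding) the real difference between the two configurations.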
## Optimizing Based on Benchmarks

Explanation: The ultimate goal of benchmarking is to improve system performance. This section guides readers on how to use benchmark data for optimization, making it a critical part of the benchmarking lifecycle.

- Tweaking Parameters
- Hardware Acceleration
- Software Optimization
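"Tweaking Parameters" can be as simple as sweeping one knob and keeping the setting the benchmark favors. The sketch below uses a made-up analytical cost model (a fixed per-call overhead plus a per-item cost, both invented numbers) to show the shape of such a sweep over batch size:

```python
def cost_seconds(batch_size, n_items=64,
                 per_call_overhead=0.002, per_item_cost=0.0001):
    # Modeled cost: every call pays a fixed overhead, so larger batches
    # amortize it across more items (all constants are illustrative).
    n_calls = -(-n_items // batch_size)   # ceiling division
    return n_calls * per_call_overhead + n_items * per_item_cost

# Sweep the parameter and keep the candidate with the lowest modeled cost.
best = min([1, 4, 16, 64], key=cost_seconds)
```

In a real setting `cost_seconds` would be replaced by an actual measured run per candidate value, but the selection logic stays the same.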
## Challenges and Limitations

Explanation: Every methodology has its limitations, and benchmarking is no exception. Discussing these challenges helps readers set realistic expectations and interpret results with a critical mindset.

- Variability in Results
- Benchmarking Ethics
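Variability in results is easy to demonstrate: repeated runs of the same benchmark rarely agree exactly, so reporting spread alongside the mean matters. The sketch below simulates noisy measurements with a seeded random model (the 10 ms "true cost" and the noise level are invented for illustration):

```python
import random
import statistics

random.seed(0)   # seeded so the simulation itself is reproducible

def noisy_run():
    # Simulated measurement: a true cost of 10 ms plus Gaussian system noise.
    return 10.0 + random.gauss(0, 0.5)

runs = [noisy_run() for _ in range(30)]
mean = statistics.mean(runs)
stdev = statistics.stdev(runs)
cv = stdev / mean   # coefficient of variation: relative run-to-run spread
```

A small coefficient of variation suggests the measurement is stable enough to compare against other configurations; a large one says more runs, or a quieter test environment, are needed first.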
## Conclusion

Explanation: Summarizing the key takeaways and looking at future trends provides closure to the chapter and gives readers a sense of the evolving landscape of AI benchmarking.

- Summary
- Future Trends in AI Benchmarking