mirror of https://github.com/harvard-edge/cs249r_book.git
synced 2026-03-11 17:49:25 -05:00

Simplified the training section and added transfer learning

This commit is contained in:
commit 5ab11a756d
parent eeefade391
committed by Ikechukwu Uchendu

training.qmd (133 lines changed)

@@ -1,85 +1,88 @@
# AI Training

<!--

## Model Selection and Development

- Overview of ML Models
- Criteria for Model Selection
- Model Development Considerations in Embedded Systems
- Scalability and Resource Optimization

## Hyperparameter Tuning

- Understanding Hyperparameters
- Techniques for Hyperparameter Tuning
- Tuning for Embedded Systems
- Grid Search and Randomized Search Methods

## Limited Training Data - Transfer Learning

## Federated Learning

-->
## Introduction

Explanation: An introductory section sets the stage for the reader, explaining what AI training is and why it's crucial, especially in the context of embedded systems. It helps to align the reader's expectations and prepares them for the upcoming content.

- Brief overview of what AI training entails
- Importance of training in the context of embedded AI
## Types of Training

Explanation: Understanding the different types of training methods is foundational. It allows the reader to appreciate the diversity of approaches and to select the most appropriate one for their specific embedded AI application.

- Supervised Learning
- Unsupervised Learning
- Reinforcement Learning
- Semi-supervised Learning
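The contrast between the first two types above can be sketched in a few lines of Python. The toy data, the threshold rule, and the clustering loop below are illustrative choices, not anything from the chapter: the supervised case learns from labeled pairs, while the unsupervised case recovers similar structure from the features alone.

```python
# Illustrative sketch (hypothetical toy data): supervised learning uses
# labeled pairs, unsupervised learning finds structure in unlabeled data.

# Supervised: (feature, label) pairs -> learn a decision threshold as the
# midpoint between the two class means.
labeled = [(1.0, 0), (1.2, 0), (0.8, 0), (5.0, 1), (5.3, 1), (4.7, 1)]
class0 = [x for x, y in labeled if y == 0]
class1 = [x for x, y in labeled if y == 1]
threshold = (sum(class0) / len(class0) + sum(class1) / len(class1)) / 2

def predict(x):
    """Classify a new sample against the learned threshold."""
    return 0 if x < threshold else 1

# Unsupervised: the same features without labels -- 1-D 2-means clustering
# recovers a similar grouping purely from the data's shape.
points = [x for x, _ in labeled]
c0, c1 = min(points), max(points)            # crude centroid initialization
for _ in range(10):                          # Lloyd's algorithm iterations
    g0 = [p for p in points if abs(p - c0) <= abs(p - c1)]
    g1 = [p for p in points if abs(p - c0) > abs(p - c1)]
    c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
```

Note that the unsupervised clusters match the supervised classes here only because the toy data is cleanly separated; with labels, the supervised learner would still work even when the class structure is not obvious from the features alone.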
## Data Preparation

Explanation: Data is the fuel for AI. This section is essential because it guides the reader through the initial steps of gathering and preparing data, which is a prerequisite for effective training.

- Data Collection
- Data Cleaning
- Data Annotation
- Data Augmentation
- Feature Engineering
- Splitting the Data (Training, Validation, and Test Sets)
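The last bullet above, splitting the data, can be sketched as a small helper. The function name, the 80/10/10 fractions, and the fixed seed are illustrative assumptions; the point is that the split is shuffled, disjoint, and reproducible.

```python
import random

def split_data(samples, train_frac=0.8, val_frac=0.1, seed=0):
    """Shuffle and split samples into train/validation/test sets (sketch)."""
    shuffled = samples[:]                     # copy: leave the input untouched
    random.Random(seed).shuffle(shuffled)     # fixed seed -> reproducible split
    n = len(shuffled)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]         # remainder becomes the test set
    return train, val, test
```

In practice one would also stratify by label so each split preserves the class balance, but the plain shuffled split shows the essential idea.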
## Model Selection

- Overview of Model Types
- Criteria for Model Selection
- Model Complexity and Capacity
- Data Preprocessing
## Training Algorithms

Explanation: This section delves into the algorithms that power the training process. It's crucial for understanding how models learn from data and how to implement these algorithms efficiently in embedded systems.

- Gradient Descent
  - Batch Gradient Descent
  - Stochastic Gradient Descent
  - Mini-Batch Gradient Descent
- Backpropagation
- Optimizers (SGD, Adam, RMSprop, Momentum, etc.)
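Mini-batch gradient descent, the third variant above, can be sketched for the simplest possible model. The linear model, learning rate, and epoch count below are illustrative assumptions; the structure (shuffle, slice into batches, average the gradient over the batch, step) is the general pattern.

```python
import random

def sgd_linear(xs, ys, lr=0.05, epochs=500, batch_size=2, seed=0):
    """Mini-batch SGD for a linear model y = w*x + b under squared error."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    idx = list(range(len(xs)))
    for _ in range(epochs):
        rng.shuffle(idx)                     # "stochastic": random batch order
        for start in range(0, len(idx), batch_size):
            batch = idx[start:start + batch_size]
            # Gradient of the mean squared error over this mini-batch only.
            gw = sum((w * xs[i] + b - ys[i]) * xs[i] for i in batch) / len(batch)
            gb = sum((w * xs[i] + b - ys[i]) for i in batch) / len(batch)
            w -= lr * gw                     # step against the gradient
            b -= lr * gb
    return w, b
```

Batch gradient descent is the special case `batch_size = len(xs)`, and plain stochastic gradient descent is `batch_size = 1`; optimizers such as Adam or Momentum replace the raw `lr * gradient` step with a smarter update rule.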
## Loss Functions

- Mean Squared Error (MSE)
- Cross-Entropy Loss
- Huber Loss
- Custom Loss Functions
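The first two losses in the list can be written out directly; these are standard textbook definitions, with an `eps` clamp added as a common safeguard (an implementation choice, not something from the chapter).

```python
import math

def mse(y_true, y_pred):
    """Mean squared error: average of the squared residuals."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def binary_cross_entropy(y_true, y_prob, eps=1e-12):
    """Binary cross-entropy; eps clamps probabilities away from log(0)."""
    total = 0.0
    for t, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)        # guard against log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)
```

MSE suits regression targets; cross-entropy suits probabilistic classifiers, penalizing confident wrong predictions much more heavily than MSE would.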
## Training Environments

Explanation: Different training environments have their own pros and cons. This section helps the reader make informed decisions about where to train their models, considering factors like computational resources and latency.

- Local vs. Cloud
- Specialized Hardware (GPUs, TPUs, etc.)

## Regularization Techniques

- L1 and L2 Regularization
- Dropout
- Batch Normalization
- Early Stopping

## Model Evaluation

Explanation: Knowing how to evaluate a model's performance is crucial. This section introduces metrics that help in assessing how well the model will perform in real-world embedded applications.

- Evaluation Metrics
  - Accuracy
  - Precision and Recall
  - F1-Score
  - Confusion Matrix
  - ROC and AUC
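The evaluation metrics listed under Model Evaluation all derive from the four cells of the binary confusion matrix, which a short sketch makes concrete (the function name and dict layout are illustrative choices):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from binary predictions (sketch)."""
    # The four confusion-matrix cells.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0   # of predicted positives, how many real
    recall = tp / (tp + fn) if tp + fn else 0.0      # of real positives, how many found
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

On imbalanced data, typical of many embedded sensing tasks, precision, recall, and F1 are far more informative than raw accuracy.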
## Hyperparameter Tuning

Explanation: Hyperparameters can significantly impact the performance of a trained model. This section educates the reader on how to fine-tune these settings for optimal results, which is especially important for resource-constrained embedded systems.

- Common Hyperparameters
  - Learning Rate
  - Batch Size
  - Number of Epochs
  - Regularization Techniques
- Tuning Methods
  - Grid Search
  - Random Search
  - Bayesian Optimization

## Scaling Up Training

- Parallel Training
- Distributed Training
- Training with GPUs

## Model Cards
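Grid search, the simplest of the tuning methods above, is just an exhaustive loop over every combination of candidate values. The helper below is a generic sketch: `train_eval` stands in for whatever trains a model and returns a validation score, and the example grid values are hypothetical.

```python
from itertools import product

def grid_search(train_eval, grid):
    """Exhaustive grid search: try every combination, keep the best score."""
    best_score, best_params = float("-inf"), None
    keys = sorted(grid)                       # fixed key order for product()
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = train_eval(params)            # e.g. validation accuracy
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# Hypothetical usage: a toy scoring function peaking at lr=0.1, batch_size=32.
grid = {"lr": [0.01, 0.1, 1.0], "batch_size": [16, 32, 64]}
def fake_eval(p):
    return -abs(p["lr"] - 0.1) - abs(p["batch_size"] - 32) / 100
best, score = grid_search(fake_eval, grid)
```

The cost grows multiplicatively with each hyperparameter, which is why random search and Bayesian optimization are preferred once the grid gets large.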
## Overfitting and Underfitting

Explanation: Overfitting and underfitting are common pitfalls in AI training. This section is vital for teaching strategies to avoid these issues, ensuring that the model generalizes well to new, unseen data.

- Techniques to Avoid Overfitting (Dropout, Early Stopping, etc.)
- Understanding Underfitting and How to Address It
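Early stopping, one of the anti-overfitting techniques named above, can be sketched as a rule over the validation-loss history: stop once the loss has failed to improve for a set number of epochs. The function name and the `patience` parameter are illustrative conventions, not from the chapter.

```python
def early_stopping(val_losses, patience=3):
    """Return the epoch at which training should stop: the first epoch after
    the validation loss fails to improve for `patience` epochs in a row."""
    best, best_epoch = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch    # new best: reset the counter
        elif epoch - best_epoch >= patience:
            return epoch                      # stalled for `patience` epochs
    return len(val_losses) - 1                # never stalled: ran to the end
```

A full implementation would also restore the weights saved at `best_epoch`, since the epochs after it were, by definition, not improvements.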
## Transfer Learning

Explanation: Transfer learning can save time and computational resources, which is particularly beneficial for embedded systems. This section explains how to leverage pre-trained models for new tasks.

- Basics of Transfer Learning
- Applications in Embedded AI
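The basic mechanics can be sketched without any framework. Here `pretrained_features` is a hypothetical stand-in for a frozen pretrained backbone (in reality, a large network trained on another task); only the small new linear head on top of it is trained, which is the essence of transfer learning.

```python
def pretrained_features(x):
    """Stand-in for a frozen pretrained feature extractor (hypothetical):
    its weights are reused as-is and never updated during fine-tuning."""
    return [x, x * x]

def train_new_head(xs, ys, lr=0.01, epochs=500):
    """Transfer learning sketch: train only a small new linear head on top
    of the frozen features, instead of training everything from scratch."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            f = pretrained_features(x)       # frozen: no gradient flows here
            err = (w[0] * f[0] + w[1] * f[1] + b) - y
            w[0] -= lr * err * f[0]          # only the head's weights move
            w[1] -= lr * err * f[1]
            b -= lr * err
    return w, b
```

Because only the tiny head is optimized, far less data and compute are needed than for full training, which is exactly why the technique matters for embedded targets.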
## Challenges and Best Practices

Explanation: Every technology comes with its own set of challenges. This section prepares the reader for potential hurdles in AI training, offering best practices to navigate them effectively.

- Computational Constraints
- Data Privacy
- Ethical Considerations
## Conclusion

Explanation: A summary helps to consolidate the key points of the chapter, aiding in better retention and understanding of the material.

- Key Takeaways
- Future Trends in AI Training for Embedded Systems