Mirror of https://github.com/harvard-edge/cs249r_book.git (synced 2026-04-29 00:59:07 -05:00)
Fixing typos in labs section (#1038)
@@ -4,7 +4,7 @@
## Introduction {#sec-visionlanguage-models-vlm-introduction-4272}
-In this hands-on lab, we will continuously explore AI applications at the Edge, from the basic setup of the Florence-2, Microsoft's state-of-the-art vision foundation model, to advanced implementations on devices like the Raspberry Pi. We will learn to use Vision-Languageor Models (VLMs) for tasks such as captioning, object detection, grounding, segmentation, and OCR on a Raspberry Pi.
+In this hands-on lab, we will continuously explore AI applications at the Edge, from the basic setup of the Florence-2, Microsoft's state-of-the-art vision foundation model, to advanced implementations on devices like the Raspberry Pi. We will learn to use Vision Language Models (VLMs) for tasks such as captioning, object detection, grounding, segmentation, and OCR on a Raspberry Pi.
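To make these tasks concrete, the sketch below shows how Florence-2 can be driven through the Hugging Face `transformers` API on a Raspberry Pi. The `microsoft/Florence-2-base` checkpoint, the test image filename, and the generation settings are illustrative assumptions, not the lab's exact code; the task prompt can be swapped (for example `<OD>` for object detection or `<OCR>` for text extraction).

```python
# Minimal Florence-2 sketch (assumes transformers with trust_remote_code support).
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Florence-2-base"   # assumed checkpoint; the lab may use another variant
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float32, trust_remote_code=True
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("test_image.jpg")      # placeholder image path
task_prompt = "<CAPTION>"                 # e.g. "<OD>", "<OCR>", "<DETAILED_CAPTION>"

# Encode the prompt and image, then generate the answer tokens.
inputs = processor(text=task_prompt, images=image, return_tensors="pt")
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=256,
    num_beams=3,
)

# Decode and post-process into a task-specific structure (caption string, boxes, etc.).
raw_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
result = processor.post_process_generation(
    raw_text, task=task_prompt, image_size=(image.width, image.height)
)
print(result)   # dictionary keyed by the task prompt
```

On a CPU-only device such as the Raspberry Pi, float32 weights are the safe default, since half-precision support varies across backends.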
### Why Florence-2 at the Edge? {#sec-visionlanguage-models-vlm-florence2-edge-0534}
@@ -216,7 +216,7 @@ Also, the model can be deployed again to the device at any time. Automatically,
The primary objective of our project is to train a model and perform inference on the XIAO ESP32S3 Sense. For training, we should find some data **(in fact, tons of data!)**.
-*But as we alheady know, first of all, we need a goal! What do we want to classify?*
+*But as we already know, first of all, we need a goal! What do we want to classify?*
With TinyML, a set of techniques associated with machine learning inference on embedded devices, we should limit the classification to three or four categories due to limitations (mainly memory). We can, for example, train the images captured for the Box versus Wheel, which can be downloaded from the SenseCraft AI Studio.
@@ -683,7 +683,7 @@ And, of course, some "anomaly", for example, putting the XIAO upside-down. The a
\noindent
{width=90% fig-align="center"}
-## Post-Prossessing {#sec-motion-classification-anomaly-detection-postprossessing-ef66}
+## Post-Processing {#sec-motion-classification-anomaly-detection-postprossessing-ef66}
Now that we know the model is working, we suggest modifying the code to see the result with the Kit completely offline (disconnected from the PC and powered by a battery, a power bank, or an independent 5V power supply).