Fixing typos in labs section (#1038)

Didier Durand
2025-11-06 14:52:55 +01:00
committed by GitHub
parent 1bd42eba6b
commit 86e2f25a0b
3 changed files with 3 additions and 3 deletions


@@ -4,7 +4,7 @@
 ## Introduction {#sec-visionlanguage-models-vlm-introduction-4272}
-In this hands-on lab, we will continuously explore AI applications at the Edge, from the basic setup of the Florence-2, Microsoft's state-of-the-art vision foundation model, to advanced implementations on devices like the Raspberry Pi. We will learn to use Vision-Languageor Models (VLMs) for tasks such as captioning, object detection, grounding, segmentation, and OCR on a Raspberry Pi.
+In this hands-on lab, we will continuously explore AI applications at the Edge, from the basic setup of the Florence-2, Microsoft's state-of-the-art vision foundation model, to advanced implementations on devices like the Raspberry Pi. We will learn to use Vision Language Models (VLMs) for tasks such as captioning, object detection, grounding, segmentation, and OCR on a Raspberry Pi.
 ### Why Florence-2 at the Edge? {#sec-visionlanguage-models-vlm-florence2-edge-0534}


@@ -216,7 +216,7 @@ Also, the model can be deployed again to the device at any time. Automatically,
 The primary objective of our project is to train a model and perform inference on the XIAO ESP32S3 Sense. For training, we should find some data **(in fact, tons of data!)**.
-*But as we alheady know, first of all, we need a goal! What do we want to classify?*
+*But as we already know, first of all, we need a goal! What do we want to classify?*
 With TinyML, a set of techniques associated with machine learning inference on embedded devices, we should limit the classification to three or four categories due to limitations (mainly memory). We can, for example, train the images captured for the Box versus Wheel, which can be downloaded from the SenseCraft AI Studio.


@@ -683,7 +683,7 @@ And, of course, some "anomaly", for example, putting the XIAO upside-down. The a
 \noindent
 ![](./images/png/inf-ano.png){width=90% fig-align="center"}
-## Post-Prossessing {#sec-motion-classification-anomaly-detection-postprossessing-ef66}
+## Post-Processing {#sec-motion-classification-anomaly-detection-postprossessing-ef66}
 Now that we know the model is working, we suggest modifying the code to see the result with the Kit completely offline (disconnected from the PC and powered by a battery, a power bank, or an independent 5V power supply).