---
title: "Hardware Kits"
subtitle: "Hands-On Embedded ML Labs for Real-World Deployment"
---
::: {.content-visible when-format="html"}
```{=html}
<!-- Hardware Carousel -->
<div id="hardwareCarousel" class="carousel slide mb-4" data-bs-ride="carousel">
<div class="carousel-inner">
<div class="carousel-item active">
<img src="contents/seeed/xiao_esp32s3/images/jpeg/xiao_esp32s3_decked.jpeg" class="d-block mx-auto" alt="XIAO ESP32S3">
</div>
<div class="carousel-item">
<img src="contents/arduino/nicla_vision/images/jpg/nicla_vision.jpeg" class="d-block mx-auto" alt="Arduino Nicla Vision">
</div>
<div class="carousel-item">
<img src="contents/seeed/grove_vision_ai_v2/images/jpeg/grove_vision_ai_v2.jpeg" class="d-block mx-auto" alt="Grove Vision AI V2">
</div>
<div class="carousel-item">
<img src="contents/raspi/images/jpeg/raspis.jpg" class="d-block mx-auto" alt="Raspberry Pi">
</div>
</div>
<button class="carousel-control-prev" type="button" data-bs-target="#hardwareCarousel" data-bs-slide="prev">
<span class="carousel-control-prev-icon"></span>
</button>
<button class="carousel-control-next" type="button" data-bs-target="#hardwareCarousel" data-bs-slide="next">
<span class="carousel-control-next-icon"></span>
</button>
<div class="carousel-indicators">
<button type="button" data-bs-target="#hardwareCarousel" data-bs-slide-to="0" class="active"></button>
<button type="button" data-bs-target="#hardwareCarousel" data-bs-slide-to="1"></button>
<button type="button" data-bs-target="#hardwareCarousel" data-bs-slide-to="2"></button>
<button type="button" data-bs-target="#hardwareCarousel" data-bs-slide-to="3"></button>
</div>
</div>
<div class="carousel-caption-bar">
<p><strong>Embedded ML Hardware Platforms</strong></p>
<p>From $25 microcontrollers to powerful edge devices</p>
</div>
```
:::
::: {.content-visible when-format="pdf"}
![Hardware platforms for embedded ML labs](contents/seeed/grove_vision_ai_v2/images/jpeg/grove_vision_ai_v2.jpeg){width=80%}
:::
These hands-on laboratories accompany the [Machine Learning Systems](https://mlsysbook.ai) textbook, bringing theory to life on real hardware. Deploy machine learning on embedded devices you can hold in your hand, from image classification to voice recognition to motion detection. Professional development boards costing $25-100 provide immediate, tangible feedback: LEDs light up, motors spin, and buzzers sound when your model runs successfully.

Working within the resource constraints of embedded devices (often no more than 2MB of flash and 1MB of RAM) forces you to confront the same engineering trade-offs that define large-scale ML systems, but in a tangible environment where every optimization decision has immediate, observable consequences.
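
The arithmetic behind those constraints is worth internalizing before you pick a model. As a back-of-the-envelope sketch (the 1 MB flash budget and every layer size below are illustrative numbers, not taken from the labs), you can estimate whether a model's weights will fit before ever flashing a board:

```python
# Back-of-the-envelope model-footprint check for a microcontroller target.
# The flash budget and layer sizes are illustrative, not from the labs.

FLASH_BUDGET = 1 * 1024 * 1024  # assume weights must fit in ~1 MB of flash

# (layer name, parameter count) for a hypothetical small CNN
layers = [
    ("conv1", 3 * 3 * 3 * 16),     # 3x3 kernels, 3 -> 16 channels
    ("conv2", 3 * 3 * 16 * 32),    # 3x3 kernels, 16 -> 32 channels
    ("dense", 32 * 12 * 12 * 64),  # flattened feature map -> 64 units
    ("head",  64 * 5),             # 5 output classes
]

params = sum(n for _, n in layers)

for name, bytes_per_param in [("float32", 4), ("int8", 1)]:
    size = params * bytes_per_param
    fits = "fits" if size <= FLASH_BUDGET else "does NOT fit"
    print(f"{name}: {size / 1024:.0f} KiB -> {fits} in the flash budget")
```

For this invented network, float32 weights blow past the budget while int8 quantization brings the same model comfortably under it, which is exactly the kind of trade-off the labs make tangible.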
::: {.callout-note}
## Laboratory Development
These hands-on laboratories were co-designed by [Prof. Vijay Janapa Reddi](https://vijay.seas.harvard.edu) and [Marcelo Rovai](https://github.com/Mjrovai), with Marcelo leading their development. His decades of embedded systems expertise shaped accessible, practical learning experiences that bridge theory with real-world implementation.
:::
## Hardware Platforms
::: {.content-visible when-format="html"}
```{=html}
<div class="row g-4 mb-4">
<div class="col-md-6 col-lg-3">
<a href="contents/seeed/grove_vision_ai_v2/grove_vision_ai_v2.html" class="text-decoration-none">
<div class="platform-card">
<img src="contents/seeed/grove_vision_ai_v2/images/jpeg/grove_vision_ai_v2.jpeg" alt="Grove Vision AI V2">
<h4>Grove Vision AI V2</h4>
<p class="price">~$25</p>
<p class="text-muted small">Best for beginners. Plug & play vision AI.</p>
</div>
</a>
</div>
<div class="col-md-6 col-lg-3">
<a href="contents/seeed/xiao_esp32s3/xiao_esp32s3.html" class="text-decoration-none">
<div class="platform-card">
<img src="contents/seeed/xiao_esp32s3/images/jpeg/xiao_esp32s3_decked.jpeg" alt="XIAOML Kit">
<h4>XIAOML Kit</h4>
<p class="price">~$40</p>
<p class="text-muted small">Best value. Vision, audio, motion.</p>
</div>
</a>
</div>
<div class="col-md-6 col-lg-3">
<a href="contents/raspi/raspi.html" class="text-decoration-none">
<div class="platform-card">
<img src="contents/raspi/images/jpeg/raspis.jpg" alt="Raspberry Pi">
<h4>Raspberry Pi</h4>
<p class="price">~$60-80</p>
<p class="text-muted small">Advanced. LLMs, VLMs, edge AI.</p>
</div>
</a>
</div>
<div class="col-md-6 col-lg-3">
<a href="contents/arduino/nicla_vision/nicla_vision.html" class="text-decoration-none">
<div class="platform-card">
<img src="contents/arduino/nicla_vision/images/jpg/nicla_vision.jpeg" alt="Arduino Nicla Vision">
<h4>Nicla Vision</h4>
<p class="price">~$95</p>
<p class="text-muted small">Professional. Dual sensors, compact.</p>
</div>
</a>
</div>
</div>
```
:::
::: {.content-visible when-format="pdf"}
+----------------------+---------+--------------+-----------------------+
| Platform | Price | Best For | Capabilities |
+======================+=========+==============+=======================+
| Grove Vision AI V2 | ~$25 | Beginners | Vision, Plug & Play |
+----------------------+---------+--------------+-----------------------+
| XIAOML Kit | ~$40 | Best Value | Vision, Audio, Motion |
+----------------------+---------+--------------+-----------------------+
| Raspberry Pi | ~$60-80 | Advanced | Vision, LLM, VLM |
+----------------------+---------+--------------+-----------------------+
| Arduino Nicla Vision | ~$95 | Professional | Vision, Audio, Motion |
+----------------------+---------+--------------+-----------------------+
: Hardware platform comparison {.striped .hover}
:::
## What You Will Build
::: {.content-visible when-format="html"}
```{=html}
<div class="capability-grid">
<div class="capability-item">
<h4>👁️ Computer Vision</h4>
<p>Image classification and object detection on microcontrollers. Train models to recognize objects, detect faces, or classify scenes.</p>
</div>
<div class="capability-item">
<h4>🎤 Audio Processing</h4>
<p>Keyword spotting and voice command recognition. Build wake-word detectors and voice interfaces that run entirely on-device.</p>
</div>
<div class="capability-item">
<h4>🏃 Motion Classification</h4>
<p>Activity and gesture recognition from IMU data. Create wearable-style applications using accelerometer and gyroscope sensors.</p>
</div>
<div class="capability-item">
<h4>🤖 Large Language Models</h4>
<p>Run LLMs and VLMs on edge devices. Experience the frontier of on-device AI with models that understand and generate text.</p>
</div>
</div>
```
:::
::: {.content-visible when-format="pdf"}
**Computer Vision:** Image classification and object detection on microcontrollers. Train models to recognize objects, detect faces, or classify scenes, then deploy them to devices running on battery power.

**Audio Processing:** Keyword spotting and voice command recognition. Build wake-word detectors and simple voice interfaces that run entirely on-device without cloud connectivity.

**Motion Classification:** Activity and gesture recognition from IMU data. Create wearable-style applications that detect walking, running, or custom gestures using accelerometer and gyroscope sensors.

**Large Language Models:** Run LLMs and VLMs on edge devices using Raspberry Pi. Experience the frontier of on-device AI with models that can understand and generate text.
:::
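
To give a flavor of the motion-classification workflow, the feature-extraction step can be sketched in plain Python. The window length and mean/standard-deviation feature set here are illustrative choices, not the labs' exact pipeline:

```python
import math

def window_features(samples, window=50):
    """Split a 1-D accelerometer stream into fixed-size windows and
    compute mean and standard deviation per window -- the kind of
    compact features a tiny gesture classifier consumes."""
    features = []
    for start in range(0, len(samples) - window + 1, window):
        w = samples[start:start + window]
        mean = sum(w) / window
        var = sum((x - mean) ** 2 for x in w) / window
        features.append((mean, math.sqrt(var)))
    return features

# A still sensor reads a steady ~1 g on one axis; a shake swings around it.
still = [1.0] * 50
shake = [1.0 + (0.5 if i % 2 else -0.5) for i in range(50)]
print(window_features(still + shake))  # -> [(1.0, 0.0), (1.0, 0.5)]
```

Both windows share the same mean, but the standard deviation cleanly separates "still" from "shaking", which is why even simple statistics make good inputs for on-device classifiers.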
## Getting Started
1. **Choose Hardware:** Select a platform based on your budget and learning goals. See [Platforms](contents/platforms.qmd) for detailed comparisons.
2. **Set Up Environment:** Install Arduino IDE or platform-specific tools. Follow the [IDE Setup Guide](contents/ide-setup.qmd) for step-by-step instructions.
3. **Build & Deploy:** Work through the labs for your chosen platform. Start with [Getting Started](contents/getting-started.qmd) for an overview of available exercises.
## Part of the MLSysBook Ecosystem
::: {.content-visible when-format="html"}
```{=html}
<div class="row g-3 mt-3">
<div class="col-md-4">
<div class="p-3 border rounded h-100 ecosystem-link">
<h5><a href="https://mlsysbook.ai">Textbook</a></h5>
<p class="small text-muted mb-0">Comprehensive theory and concepts covering the full ML systems stack.</p>
</div>
</div>
<div class="col-md-4">
<div class="p-3 border rounded h-100 bg-light ecosystem-current">
<h5>Hardware Kits</h5>
<p class="small text-muted mb-0">Hands-on embedded deployment. <strong>You are here.</strong></p>
</div>
</div>
<div class="col-md-4">
<div class="p-3 border rounded h-100 ecosystem-link">
<h5><a href="https://mlsysbook.ai/tinytorch">TinyTorch</a></h5>
<p class="small text-muted mb-0">Build your own ML framework from scratch.</p>
</div>
</div>
</div>
```
:::
::: {.content-visible when-format="pdf"}
These hardware labs complement the broader ML Systems learning experience:

- **Textbook:** Comprehensive theory and concepts covering the full ML systems stack
- **Hardware Kits:** Hands-on embedded deployment (you are here)
- **TinyTorch:** Build your own ML framework from scratch
:::