docs: reframe README as AI engineering discipline flagship

- Add punch line and mission statement establishing AI engineering
- Rename "About This Project" to "About the Ecosystem"
- Update Learning Stack diagram with systems-focused descriptions
- Move "Start Here" earlier for immediate action orientation
- Add Research to Teaching Loop section under What Makes This Different
- Expand Support section with mission tracking and impact explanation
- Update book/README.md with detailed learning content
- Update kits/README.md with consistent template structure
- Fix all kits links to use mlsysbook.ai/kits
- Change voice from "I" to "we" throughout
Author: Vijay Janapa Reddi
Date: 2025-12-29 23:53:05 -05:00
Parent: e6ecf090f3
Commit: 22ae3d92a9
3 changed files with 222 additions and 115 deletions

File: `kits/README.md`

# Hardware Kits
*Hands-on Embedded ML Labs for Real Devices*
[![Build](https://img.shields.io/github/actions/workflow/status/harvard-edge/cs249r_book/kits-publish-dev.yml?branch=dev&label=Build&logo=githubactions)](https://github.com/harvard-edge/cs249r_book/actions/workflows/kits-publish-dev.yml)
[![Website](https://img.shields.io/badge/Read-mlsysbook.ai/kits-blue)](https://mlsysbook.ai/kits)
[![PDF](https://img.shields.io/badge/Download-PDF-red)](https://mlsysbook.ai/kits/assets/downloads/Hardware-Kits.pdf)
**[Read Online](https://mlsysbook.ai/kits)** | **[PDF](https://mlsysbook.ai/kits/assets/downloads/Hardware-Kits.pdf)**
---
## What This Is
The Hardware Kits teach you how to deploy ML models to real embedded devices. You will face actual hardware constraints: limited memory, power budgets, and latency requirements that do not exist in cloud environments.
This is where AI systems meet the physical world.
---
## What You Will Learn
| Concept | What You Do |
|---------|-------------|
| **Image Classification** | Deploy CNN models to classify images in real-time on microcontrollers |
| **Object Detection** | Run YOLO-style detection on camera-equipped boards |
| **Keyword Spotting** | Build always-on wake word detection with audio DSP |
| **Motion Classification** | Use IMU sensors for gesture and activity recognition |
| **Model Optimization** | Quantize and compress models to fit in KB of RAM (see the sketch below) |
| **Power Management** | Balance accuracy vs battery life for edge deployment |
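The model optimization workflow can be tried on your host machine before you ever touch a board. Below is a minimal sketch of post-training int8 quantization with TensorFlow Lite; the `saved_model` path, the 96×96 grayscale input shape, and the random calibration data are illustrative assumptions, not lab assets.

```python
# Minimal sketch: post-training int8 quantization with TensorFlow Lite.
# The "saved_model" path and input shape are illustrative, not lab files.
import numpy as np
import tensorflow as tf

def rep_data():
    # Representative samples calibrate the int8 ranges; use real sensor data.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = rep_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Quantized model: {len(tflite_model) / 1024:.1f} KB")
```

A fully int8 model like this is the kind of artifact that fits within the few hundred KB of flash available on the boards below.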
### Hardware Platforms
| Platform | Description | Best For |
|----------|-------------|----------|
| **Arduino Nicla Vision** | Compact AI camera board with STM32H7 | Vision projects, ultra-low power |
| **Seeed XIAO ESP32S3** | Tiny ESP32-S3 with camera support | WiFi-connected vision |
| **Grove Vision AI V2** | No-code AI vision module | Rapid prototyping |
| **Raspberry Pi** | Full Linux SBC for edge AI | Complex pipelines, prototyping |
---
## Quick Start
### For Learners
1. Pick a platform from the [labs](https://mlsysbook.ai/kits)
2. Follow the setup guide for your hardware
3. Complete the labs in order: Setup → Image Classification → Object Detection → Keyword Spotting
### For Contributors
```bash
cd kits

# Build HTML site
ln -sf config/_quarto-html.yml _quarto.yml
quarto render

# Preview with live reload while editing
quarto preview
```
---
## Labs Overview
Each platform includes progressive labs:
| Lab | What You Build | Skills |
|-----|----------------|--------|
| **Setup** | Hardware setup and environment configuration | Toolchain, flashing, debugging |
| **Image Classification** | CNN-based image recognition | Model deployment, inference (see sketch below) |
| **Object Detection** | Real-time object detection | YOLO, bounding boxes |
| **Keyword Spotting** | Audio wake word detection | DSP, MFCC features |
| **Motion Classification** | IMU-based gesture recognition | Sensor fusion, time series |
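Before flashing, it helps to run a converted model on the host with the TensorFlow Lite interpreter. This is a minimal sketch assuming the `model_int8.tflite` file from the quantization example above; on-device, the same model runs under TFLite Micro in C++.

```python
# Minimal sketch: host-side sanity check of a quantized .tflite model.
# "model_int8.tflite" is the illustrative file from the sketch above.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model_int8.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed one dummy int8 frame shaped like the camera input.
frame = np.random.randint(-128, 128, size=inp["shape"], dtype=np.int8)
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()

scores = interpreter.get_tensor(out["index"])
print("Predicted class:", int(np.argmax(scores)))
```

Catching shape or quantization mismatches here is far faster than debugging them over a serial console.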
---
## Directory Structure
```
kits/
...
```
---
## Documentation
| Audience | Resources |
|----------|-----------|
| **Learners** | [Online Labs](https://mlsysbook.ai/kits) ・ [PDF](https://mlsysbook.ai/kits/assets/downloads/Hardware-Kits.pdf) |
| **Contributors** | See build instructions above |
---
## Contributing
We welcome contributions to the hardware labs! To contribute:
1. Fork and clone the repository
2. Add or improve lab content in `contents/`
3. Test your changes with `quarto preview`
4. Submit a PR with a clear description
---
## Related
| Component | Description |
|-----------|-------------|
| **[Main README](../README.md)** | Project overview and ecosystem |
| **[Textbook](../book/)** | ML Systems concepts and theory |
| **[TinyTorch](../tinytorch/)** | Build ML frameworks from scratch |
| **[Website](https://mlsysbook.ai/kits)** | Read labs online |
---