feat(video): enhance video formatting with margin-video extension

- Convert all 16 videos from verbose callout blocks to clean shortcode syntax
- Create margin-video Quarto extension with comprehensive features
- Remove text dependencies on videos for self-contained content
- Implement auto-numbering for HTML and QR codes for PDF

Changes:
• New margin-video extension with robust YouTube support
• Clean {{< margin-video "URL" "Title" "Author" >}} syntax
• Enhanced error handling and URL validation
• Support for video options (aspect-ratio, start, autoplay)
• Complete documentation with README and CHANGELOG
• Convert videos in: ai_for_good, ops, dl_primer, privacy_security, introduction
• Remove @vid-* text dependencies to improve content accessibility
• Professional extension structure for maintainability

Technical improvements:
• YouTube URL validation with clear error messages
• Multiple URL format support (youtube.com, youtu.be, embed)
• Format-specific rendering (HTML iframe vs PDF QR codes)
• CSS auto-numbering integration
• Configurable video parameters via kwargs
Author: Vijay Janapa Reddi
Date: 2025-08-04 17:07:29 -04:00
Parent: 82d3131801
Commit: 823613d900
12 changed files with 259 additions and 467 deletions


@@ -0,0 +1,35 @@
# Changelog
All notable changes to the Margin Video extension will be documented in this file.
## [1.0.0] - 2024-12-08
### Added
- Initial release of margin-video extension
- YouTube video embedding as margin notes
- Automatic video numbering in HTML output
- QR code generation for PDF output
- Format-specific rendering (HTML vs PDF)
- YouTube URL validation with clear error messages
- Support for multiple YouTube URL formats:
- `youtube.com/watch?v=ID`
- `youtu.be/ID`
- `youtube.com/embed/ID`
- Configuration options via kwargs:
- `aspect-ratio`: Custom video aspect ratio (default: "16/9")
- `start`: Start time in seconds
- `autoplay`: Enable autoplay (default: false)
- Comprehensive documentation and examples
- Error handling for missing or invalid arguments
### Features
- **HTML Output**: Responsive iframe with configurable aspect ratio
- **PDF Output**: QR code with margin note and clickable link
- **Validation**: YouTube-only support with helpful error messages
- **Flexibility**: Configurable video parameters and styling
- **Documentation**: Complete README with usage examples
### Technical Details
- Quarto >= 1.2.0 required
- MIT licensed
- Self-contained extension with no external dependencies


@@ -0,0 +1,73 @@
# Margin Video Extension
A Quarto extension for embedding YouTube videos as margin notes with automatic numbering and format-specific rendering.
## Usage
```markdown
{{< margin-video "YOUTUBE_URL" "VIDEO_TITLE" "AUTHOR" >}}
```
### Basic Examples
```markdown
{{< margin-video "https://youtu.be/aircAruvnKk" "Neural Networks" "3Blue1Brown" >}}
{{< margin-video "https://www.youtube.com/watch?v=FwFduRA_L6Q" "CNN Demo" "Yann LeCun" >}}
```
### Advanced Usage with Options
```markdown
<!-- Custom aspect ratio -->
{{< margin-video "https://youtu.be/aircAruvnKk" "Neural Networks" "3Blue1Brown" aspect-ratio="4/3" >}}
<!-- Start at specific time (in seconds) -->
{{< margin-video "https://youtu.be/aircAruvnKk" "Neural Networks" "3Blue1Brown" start="120" >}}
<!-- Enable autoplay (use sparingly) -->
{{< margin-video "https://youtu.be/aircAruvnKk" "Neural Networks" "3Blue1Brown" autoplay="true" >}}
```
## Supported Options
| Option | Description | Default | Example |
|--------|-------------|---------|---------|
| `aspect-ratio` | Video aspect ratio | `"16/9"` | `aspect-ratio="4/3"` |
| `start` | Start time in seconds | none | `start="120"` |
| `autoplay` | Enable autoplay | `false` | `autoplay="true"` |
## Features
- **HTML**: Renders as margin video with iframe embed and auto-numbering
- **PDF**: Generates QR code with margin note for mobile scanning
- **YouTube validation**: Only accepts YouTube URLs with clear error messages
- **Responsive design**: Videos adapt to available space
## Requirements
- Quarto >= 1.2.0
- YouTube URLs only (youtu.be or youtube.com)
## Output
### HTML
- Places video in `.column-margin` with auto-numbered caption
- Uses CSS counters for automatic "Video 1:", "Video 2:" numbering (see the sketch below)
- Responsive iframe with 16:9 aspect ratio
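The numbering itself comes from CSS rather than the Lua shortcode. A minimal sketch of how the counters could be wired up is shown below; the selectors are inferred from the markup the shortcode emits, and the project's actual stylesheet (not part of this extension) may differ.

```css
/* Sketch only: auto-number margin-video captions with CSS counters.
   Assumes the <div class="margin-video"> + <p><em>caption</em></p> markup
   produced by the shortcode; the real MLSysBook stylesheet may differ. */
body {
  counter-reset: margin-video;                   /* start counting once per page */
}

.column-margin .margin-video {
  counter-increment: margin-video;               /* each embedded video bumps the counter */
}

.column-margin .margin-video + p > em::before {
  content: "Video " counter(margin-video) ": ";  /* prepend "Video N: " to the caption */
  font-weight: bold;
}
```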
### PDF
- QR code linking to the video (see the LaTeX sketch below)
- Formatted margin note with title and author
- FontAwesome TV icon with clickable link
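For reference, each video renders to LaTeX roughly as sketched below (placeholder title, author, and video URL; spacing values mirror the shortcode source in this commit). It assumes the book's PDF preamble already loads the marginnote, qrcode, and hyperref packages plus a FontAwesome font package providing `\faTv`.

```latex
% Sketch of the per-video PDF output (placeholder title/author/URL).
\marginnote{\centering\\\vspace*{5mm}%
  \parbox{30mm}{\centering\footnotesize%
    \textbf{Watch on YouTube}\\
    Video Title\\
    Author Name\\[1mm]
  }
  \begingroup
  \hypersetup{urlcolor=black}
  \qrcode[height=15mm]{https://youtu.be/VIDEO_ID}
  \endgroup
  \\[1mm]
  \parbox{25mm}{\centering\footnotesize%
    Scan with your phone\\
    to watch the video
  }
}
\faTv{} \href{https://youtu.be/VIDEO_ID}{Watch on YouTube}
```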
## Installation
This extension is bundled with the MLSysBook project. For standalone use:
```bash
quarto add path/to/margin-video
```
## License
Part of the MLSysBook project.


@@ -0,0 +1,15 @@
title: Margin Video
author: MLSysBook Team
version: 1.0.0
description: "Embed YouTube videos as responsive margin notes with auto-numbering"
quarto-required: ">=1.2.0"
license: MIT
keywords:
- video
- youtube
- margin
- education
url: https://github.com/YourRepo/MLSysBook
contributes:
shortcodes:
- margin-video.lua


@@ -0,0 +1,112 @@
-- Video insertion shortcode for MLSysBook
-- Usage: {{< margin-video "URL" "Title" "Author" >}}
return {
['margin-video'] = function(args, kwargs, meta)
-- Validate arguments
if not args[1] then
error("ERROR: margin-video requires at least a URL argument.\nUsage: {{< margin-video \"URL\" \"Title\" \"Author\" >}}")
end
local url = pandoc.utils.stringify(args[1])
local title = args[2] and pandoc.utils.stringify(args[2]) or "Video"
local author = args[3] and pandoc.utils.stringify(args[3]) or ""
-- Optional configuration via kwargs; stringify yields "" (never nil) for missing
-- values, so defaults must be applied explicitly rather than with `or`
local aspect_ratio = pandoc.utils.stringify(kwargs["aspect-ratio"] or "")
if aspect_ratio == "" then aspect_ratio = "16/9" end
local autoplay = pandoc.utils.stringify(kwargs["autoplay"] or "") == "true"
local start_time = pandoc.utils.stringify(kwargs["start"] or "")
if start_time == "" then start_time = nil end
-- Validate URL is not empty
if url == "" then
error("ERROR: margin-video URL cannot be empty.\nUsage: {{< margin-video \"URL\" \"Title\" \"Author\" >}}")
end
-- Check if it's a YouTube URL with better validation
if not (string.match(url, "youtube%.com") or string.match(url, "youtu%.be")) then
error("ERROR: margin-video currently only supports YouTube URLs.\nGot: " .. url .. "\nSupported formats:\n - https://www.youtube.com/watch?v=VIDEO_ID\n - https://youtu.be/VIDEO_ID")
end
-- Extract YouTube video ID (handles various URL formats and parameters)
local video_id = nil
-- Handle youtube.com/watch?v=ID format (with optional additional parameters)
video_id = string.match(url, "youtube%.com/watch%?.*v=([%w_-]+)")
-- Handle youtu.be/ID format (with optional parameters)
if not video_id then
video_id = string.match(url, "youtu%.be/([%w_-]+)")
end
-- Handle youtube.com/embed/ID format
if not video_id then
video_id = string.match(url, "youtube%.com/embed/([%w_-]+)")
end
if not video_id then
error("ERROR: Could not extract YouTube video ID from URL: " .. url .. "\nPlease check the URL format is correct.")
end
if FORMAT:match("html") then
-- HTML: Margin video with auto-numbering
local caption = title
if author ~= "" then
caption = caption .. " - " .. author
end
-- Build iframe URL with optional parameters
local iframe_url = "https://www.youtube.com/embed/" .. video_id
local url_params = {}
if autoplay then
table.insert(url_params, "autoplay=1")
end
if start_time then
table.insert(url_params, "start=" .. start_time)
end
if #url_params > 0 then
iframe_url = iframe_url .. "?" .. table.concat(url_params, "&")
end
local html_output = [[
<div class="column-margin">
<div class="margin-video">
<iframe src="]] .. iframe_url .. [["
style="width:100%; aspect-ratio: ]] .. aspect_ratio .. [[; border:0;"
allowfullscreen>
</iframe>
</div>
<p><em>]] .. caption .. [[</em></p>
</div>
]]
return pandoc.RawBlock("html", html_output)
elseif FORMAT:match("pdf") then
-- PDF: QR code and margin note
local pdf_output = [[
\marginnote{\centering\\\vspace*{5mm}%
\parbox{30mm}{\centering\footnotesize%
\textbf{Watch on YouTube}\\
]] .. title .. [[\\
]] .. (author ~= "" and author .. "\\\\[1mm]" or "") .. [[
}
\begingroup
\hypersetup{urlcolor=black}
\qrcode[height=15mm]{]] .. url .. [[}
\endgroup
\\[1mm]
\parbox{25mm}{\centering\footnotesize%
Scan with your phone\\
to watch the video
}
}
\faTv{} \href{]] .. url .. [[}{Watch on YouTube}
]]
return pandoc.RawBlock("latex", pdf_output)
else
-- Fallback for other formats (e.g., just a link)
return pandoc.Link(pandoc.Str(title), url)
end
end
}


@@ -379,8 +379,7 @@ filters:
#- ../config/lua/inject_crossrefs.lua # ⚠️ WARNING: This must come before custom-numbered-blocks (relies on \ref{...})
- custom-numbered-blocks
- ../config/lua/margin-connections.lua # ⚠️ WARNING: This filter must come after custom-numbered-blocks
- ../config/lua/insert_video.lua
- ../config/lua/insert_video.lua
# Filter configurations and metadata
filter-metadata:


@@ -246,7 +246,7 @@ filters:
- ../config/lua/inject_crossrefs.lua # This must come before custom-numbered-blocks (relies on \ref{...})
- custom-numbered-blocks
- ../config/lua/margin-connections.lua # This filter must come after custom-numbered-blocks
- ../config/lua/insert_video.lua
# Filter configurations and metadata
filter-metadata:


@@ -73,31 +73,7 @@ AI technologies, such as Cloud ML, Mobile ML, Edge ML, and Tiny ML, are unlockin
### Agriculture {#sec-ai-good-agriculture-7412}
:::{#vid-plantvillage .callout-important title="Plant Village Nuru"}
::: {.content-visible when-format="html"}
{{< video https://youtu.be/MD61bddZtbg?si=Ake2uP8vC_lsvYhd >}}
:::
::: {.content-visible when-format="pdf"}
\marginnote{\centering\\\vspace*{-15mm}%
\parbox{30mm}{\centering\footnotesize%
\textbf{Watch on YouTube}\\
Plant Village Nuru\\[1mm]
}
\begingroup
\hypersetup{urlcolor=black}
\qrcode{https://youtu.be/MD61bddZtbg?si=Ake2uP8vC_lsvYhd}
\endgroup
\\[1mm]
\parbox{25mm}{\centering\footnotesize%
Scan with your phone\\
to watch the video
}
}
\faTv{} [Watch on YouTube](https://youtu.be/MD61bddZtbg?si=Ake2uP8vC_lsvYhd)
:::
:::
{{< margin-video "https://youtu.be/MD61bddZtbg?si=Ake2uP8vC_lsvYhd" "Plant Village Nuru" "PlantVillage" >}}
![**Mobile Disease Detection**: Example of edge machine learning, where a smartphone app uses a trained model to classify plant diseases directly on the device, enabling real-time feedback in resource-constrained environments. This deployment reduces reliance on network connectivity and allows for localized, accessible agricultural support.](images/png/plantvillage.png){#fig-plantvillage}
@@ -119,54 +95,11 @@ In parallel, Cloud ML is advancing healthcare research and diagnostics on a broa
### Disaster Response {#sec-ai-good-disaster-response-4034}
In disaster zones, where every second counts, AI technologies are providing tools to accelerate response efforts and enhance safety. Tiny, autonomous drones equipped with Tiny ML algorithms are making their way into collapsed buildings, navigating obstacles to detect signs of life. By analyzing thermal imaging and acoustic signals locally, these drones can identify survivors and hazards without relying on cloud connectivity [@duisterhof2021sniffy]. @vid-l2seek and @vid-sniffybug show how Tiny ML algorithms can be used to enable drones to autonomously seek light and gas sources.
In disaster zones, where every second counts, AI technologies are providing tools to accelerate response efforts and enhance safety. Tiny, autonomous drones equipped with Tiny ML algorithms are making their way into collapsed buildings, navigating obstacles to detect signs of life. By analyzing thermal imaging and acoustic signals locally, these drones can identify survivors and hazards without relying on cloud connectivity [@duisterhof2021sniffy]. These drones can autonomously seek light sources (which often indicate survivors) and detect dangerous gas leaks, making search and rescue operations both faster and safer for human responders.
:::{#vid-l2seek .callout-important title="Light Seeking"}
::: {.content-visible when-format="html"}
{{< video https://www.youtube.com/watch?v=wmVKbX7MOnU >}}
:::
{{< margin-video "https://www.youtube.com/watch?v=wmVKbX7MOnU" "Light Seeking" "TU Delft" >}}
::: {.content-visible when-format="pdf"}
\marginnote{\centering\\\vspace*{-20mm}%
\parbox{30mm}{\centering\footnotesize%
\textbf{Watch on YouTube}\\
Light Seeking\\[1mm]
}
\begingroup
\hypersetup{urlcolor=black}
\qrcode{https://www.youtube.com/watch?v=wmVKbX7MOnU}
\endgroup
}
\faTv{} [Watch on YouTube](https://www.youtube.com/watch?v=wmVKbX7MOnU)
:::
:::
:::{#vid-sniffybug .callout-important title="Gas Seeking"}
::: {.content-visible when-format="html"}
{{< video https://www.youtube.com/watch?v=hj_SBSpK5qg >}}
:::
::: {.content-visible when-format="pdf"}
\marginnote{\centering\\\vspace*{-12mm}%
\parbox{30mm}{\centering\footnotesize%
%\textbf{Watch on YouTube}\\
Gas Seeking\\[1mm]
}
\begingroup
\hypersetup{urlcolor=black}
\qrcode{https://www.youtube.com/watch?v=hj_SBSpK5qg}
\endgroup
\\[1mm]
\parbox{25mm}{\centering\footnotesize%
Scan with your phone\\
to watch the video
}
}
\faTv{} [Watch on YouTube](https://www.youtube.com/watch?v=hj_SBSpK5qg)
:::
:::
{{< margin-video "https://www.youtube.com/watch?v=hj_SBSpK5qg" "Gas Seeking" "TU Delft" >}}
At a broader level, platforms like Google's [AI for Disaster Response](https://crisisresponse.google/) are leveraging Cloud ML to process satellite imagery and predict flood zones. These systems provide real-time insights to help governments allocate resources more effectively and save lives during emergencies.
@@ -176,33 +109,9 @@ Mobile ML applications are also playing a critical role by delivering real-time
Conservationists face immense challenges in monitoring and protecting biodiversity across vast and often remote landscapes. AI technologies are offering scalable solutions to these problems, combining local autonomy with global coordination.
:::{#vid-ee .callout-important title="Elephant Edge"}
::: {.content-visible when-format="html"}
{{< video https://youtu.be/ci95eyvTyXo?si=iD8TZiVAfuci4QeN >}}
:::
{{< margin-video "https://youtu.be/ci95eyvTyXo?si=iD8TZiVAfuci4QeN" "Elephant Edge" "ElephantEdge" >}}
::: {.content-visible when-format="pdf"}
\marginnote{\centering\\\vspace*{-15mm}%
\parbox{30mm}{\centering\footnotesize%
\textbf{Watch on YouTube}\\
Elephant Edge\\[1mm]
}
\begingroup
\hypersetup{urlcolor=black}
\qrcode{https://youtu.be/ci95eyvTyXo?si=iD8TZiVAfuci4QeN}
\endgroup
\\[1mm]
\parbox{25mm}{\centering\footnotesize%
Scan with your phone\\
to watch the video
}
}
\faTv{} [Watch on YouTube](https://youtu.be/ci95eyvTyXo?si=iD8TZiVAfuci4QeN)
:::
:::
EdgeML-powered collars are being used to unobtrusively track animal behavior, such as elephant movements and vocalizations (@vid-ee). By processing data on the collar itself, these devices minimize power consumption and reduce the need for frequent battery changes [@verma2022elephant]. Meanwhile, Tiny ML systems are enabling anti-poaching efforts by detecting threats like gunshots or human activity and relaying alerts to rangers in real time [@bamoumen2022tinyml].
EdgeML-powered collars are being used to unobtrusively track animal behavior, such as elephant movements and vocalizations, helping researchers understand migration patterns and social behaviors. By processing data on the collar itself, these devices minimize power consumption and reduce the need for frequent battery changes [@verma2022elephant]. Meanwhile, Tiny ML systems are enabling anti-poaching efforts by detecting threats like gunshots or human activity and relaying alerts to rangers in real time [@bamoumen2022tinyml].
At a global scale, Cloud ML is being used to monitor illegal fishing activities. Platforms like [Global Fishing Watch](https://globalfishingwatch.org/) analyze satellite data to detect anomalies, helping governments enforce regulations and protect marine ecosystems. These examples highlight how AI technologies are enabling real-time monitoring and decision-making, advancing conservation efforts in profound ways.
@@ -412,31 +321,7 @@ In machine learning applications, this pattern requires careful consideration of
#### Google's Flood Forecasting {#sec-ai-good-googles-flood-forecasting-7eca}
:::{#vid-g_forecasting .callout-important title="AI for Flood Forecasting"}
::: {.content-visible when-format="html"}
{{< video https://youtu.be/ET04pDj-RvM?si=l7P0nBv1h2rXOzIE >}}
:::
::: {.content-visible when-format="pdf"}
\marginnote{\centering\\\vspace*{-20mm}%
\parbox{30mm}{\centering\footnotesize%
\textbf{Watch on YouTube}\\
AI for Flood Forecasting\\[1mm]
}
\begingroup
\hypersetup{urlcolor=black}
\qrcode{https://youtu.be/ET04pDj-RvM?si=l7P0nBv1h2rXOzIE}
\endgroup
\\[1mm]
\parbox{25mm}{\centering\footnotesize%
Scan with your phone\\
to watch the video
}
}
\faTv{} [Watch on YouTube](https://youtu.be/ET04pDj-RvM?si=l7P0nBv1h2rXOzIE)
:::
:::
{{< margin-video "https://youtu.be/ET04pDj-RvM?si=l7P0nBv1h2rXOzIE" "AI for Flood Forecasting" "Google" >}}
<!-- VJ: For PDF, maybe we could do something like this... or improve on it.-->


@@ -762,33 +762,9 @@ Where:
[^fn-pre-activation-output]: **Pre-activation output**: The output produced by a neuron in a neural network before the activation function is applied.
Now that we have covered the basics, @vid-nn provides a great overview of how neural networks work using handwritten digit recognition. It introduces some new concepts that we will explore in more depth soon, but it serves as an excellent introduction.
Now that we have covered the basics, let's look at how these concepts come together in practice. Neural networks excel at tasks like handwritten digit recognition, where they learn to identify patterns in pixel data and classify images into different categories. This practical example introduces some new concepts that we will explore in more depth soon.
:::{#vid-nn .callout-important title="Neural Network"}
::: {.content-visible when-format="html"}
{{< video https://youtu.be/aircAruvnKk?si=P7aT71L_uGT4xUz6 >}}
:::
::: {.content-visible when-format="pdf"}
\marginnote{\centering\\\vspace*{0mm}%
\parbox{30mm}{\centering\footnotesize%
\textbf{Watch on YouTube}\\
Neural Network\\[1mm]
}
\begingroup
\hypersetup{urlcolor=black}
\qrcode{https://www.youtube.com/watch?v=FwFduRA_L6Q}
\endgroup
\\[1mm]
\parbox{25mm}{\centering\footnotesize%
Scan with your phone\\
to watch the video
}
}
\faTv{} [Watch on YouTube](https://www.youtube.com/watch?v=FwFduRA_L6Q)
:::
:::
{{< margin-video "https://youtu.be/aircAruvnKk?si=P7aT71L_uGT4xUz6" "Neural Network" "3Blue1Brown" >}}
### Weights and Biases {#sec-dl-primer-weights-biases-1bee}
@@ -1565,54 +1541,11 @@ The process begins at the network's output, where we compare the predicted digit
[^fn-chain-rule]: **Chain rule of calculus**: A basic theorem in calculus stating that the derivative of a composite function is the product of the derivative of the outer function and the derivative of the inner function.
@vid-gd1 and @vid-gd2 give a good high-level overview of how cost functions help neural networks learn.
Cost functions play a crucial role in helping neural networks learn by providing a measurable way to evaluate how well the network is performing and guide the optimization process.
:::{#vid-gd1 .callout-important title="Gradient descent Part 1"}
::: {.content-visible when-format="html"}
{{< video https://youtu.be/IHZwWFHWa-w?si=_MpUFVskdVHYztkz >}}
:::
{{< margin-video "https://youtu.be/IHZwWFHWa-w?si=_MpUFVskdVHYztkz" "Gradient descent Part 1" "3Blue1Brown" >}}
::: {.content-visible when-format="pdf"}
\marginnote{\centering\\\vspace*{2mm}%
\parbox{30mm}{\centering\footnotesize%
\textbf{Watch on YouTube}\\
Gradient descent Part 1\\[1mm]
}
\begingroup
\hypersetup{urlcolor=black}
\qrcode{https://youtu.be/IHZwWFHWa-w?si=_MpUFVskdVHYztkz}
\endgroup
}
\faTv{} [Watch on YouTube](https://youtu.be/IHZwWFHWa-w?si=_MpUFVskdVHYztkz)
:::
:::
:::{#vid-gd2 .callout-important title="Gradient descent Part 2"}
::: {.content-visible when-format="html"}
{{< video https://youtu.be/Ilg3gGewQ5U?si=YXVP3tm_ZBY9R-Hg >}}
:::
::: {.content-visible when-format="pdf"}
\marginnote{\centering\\\vspace*{9mm}%
\parbox{30mm}{\centering\footnotesize%
%\textbf{Watch on YouTube}\\
Gradient descent Part 2\\[1mm]
}
\begingroup
\hypersetup{urlcolor=black}
\qrcode{https://youtu.be/Ilg3gGewQ5U?si=YXVP3tm_ZBY9R-Hg}
\endgroup
\\[1mm]
\parbox{25mm}{\centering\footnotesize%
Scan with your phone\\
to watch the video
}
}
\faTv{} [Watch on YouTube](https://youtu.be/Ilg3gGewQ5U?si=YXVP3tm_ZBY9R-Hg)
:::
:::
{{< margin-video "https://youtu.be/Ilg3gGewQ5U?si=YXVP3tm_ZBY9R-Hg" "Gradient descent Part 2" "3Blue1Brown" >}}
#### Gradient Flow {#sec-dl-primer-gradient-flow-25ed}
@@ -1696,33 +1629,9 @@ where $\theta$ represents any network parameter (weights or biases), $\alpha$ is
For our MNIST example, this means adjusting weights to improve digit classification accuracy. If the network frequently confuses "7"s with "1"s, gradient descent will modify the weights to better distinguish between these digits. The learning rate $\alpha$ controls how large these adjustments are—too large, and the network might overshoot optimal values; too small, and training will progress very slowly.
@vid-bp demonstrates how the backpropagation math works in neural networks for those inclined towards a more theoretical foundation.
The mathematical foundation of backpropagation involves computing partial derivatives through the chain rule, allowing each weight to understand its specific contribution to the overall error.
:::{#vid-bp .callout-important title="Backpropagation"}
::: {.content-visible when-format="html"}
{{< video https://youtu.be/tIeHLnjs5U8?si=Uckr8YPwwAZ_UI6t >}}
:::
::: {.content-visible when-format="pdf"}
\marginnote{\centering\\\vspace*{-15mm}%
\parbox{30mm}{\centering\footnotesize%
\textbf{Watch on YouTube}\\
Backpropagation\\[1mm]
}
\begingroup
\hypersetup{urlcolor=black}
\qrcode{https://www.youtube.com/watch?v=FwFduRA_L6Q}
\endgroup
\\[1mm]
\parbox{25mm}{\centering\footnotesize%
Scan with your phone\\
to watch the video
}
}
\faTv{} [Watch on YouTube](https://www.youtube.com/watch?v=FwFduRA_L6Q)
:::
:::
{{< margin-video "https://youtu.be/tIeHLnjs5U8?si=Uckr8YPwwAZ_UI6t" "Backpropagation" "3Blue1Brown" >}}
#### Batch Processing {#sec-dl-primer-batch-processing-7227}


@@ -841,35 +841,7 @@ The deep learning revolution of 2012 didn't emerge from nowhere, as it was found
Yet these networks largely languished through the 1990s and 2000s, not because the ideas were wrong, but because they were ahead of their time. The field lacked three important ingredients: sufficient data to train complex networks, enough computational power to process this data, and the technical innovations needed to train very deep networks effectively.
::: {.content-visible when-format="html"}
::: {.column-margin}
::: {.margin-video}
{{< video https://www.youtube.com/watch?v=FwFduRA_L6Q&ab_channel=YannLeCun >}}
:::
*Convolutional Network Demo from 1989 - Yann LeCun*
:::
:::
::: {.content-visible when-format="pdf"}
\marginnote{\centering\\\vspace*{5mm}%
\parbox{30mm}{\centering\footnotesize%
\textbf{Watch on YouTube}\\
Convolutional Net Demo\\[1mm]
}
\begingroup
\hypersetup{urlcolor=black}
\qrcode[height=15mm]{https://www.youtube.com/watch?v=FwFduRA_L6Q}
\endgroup
\\[1mm]
\parbox{25mm}{\centering\footnotesize%
Scan with your phone\\
to watch the video
}
}
\faTv{} [Watch on YouTube](https://www.youtube.com/watch?v=FwFduRA_L6Q&ab_channel=YannLeCun)
:::
{{< margin-video "https://www.youtube.com/watch?v=FwFduRA_L6Q&ab_channel=YannLeCun" "Convolutional Network Demo from 1989" "Yann LeCun" >}}
::: {.callout-note title="Alternative Video Formatting Options"}
Here are several better ways to format videos in your textbook:


@@ -238,31 +238,7 @@ Consider a predictive maintenance application in an industrial setting. A contin
Effective data management in MLOps is not limited to ensuring data quality. It also establishes the operational backbone that enables model reproducibility, auditability, and sustained deployment at scale. Without robust data management, the integrity of downstream training, evaluation, and serving processes cannot be maintained.
:::{#vid-datapipe .callout-important title="Data Pipelines"}
::: {.content-visible when-format="html"}
{{< video https://www.youtube.com/watch?v=gz-44N3MMOA&list=PLkDaE6sCZn6GMoA0wbpJLi3t34Gd8l0aK&index=33 >}}
:::
::: {.content-visible when-format="pdf"}
\marginnote{\centering\\\vspace*{-35mm}%
\parbox{30mm}{\centering\footnotesize%
\textbf{Watch on YouTube}\\
Data Pipelines\\[1mm]
}
\begingroup
\hypersetup{urlcolor=black}
\qrcode[height=15mm]{https://www.youtube.com/watch?v=gz-44N3MMOA&list=PLkDaE6sCZn6GMoA0wbpJLi3t34Gd8l0aK&index=33}
\endgroup
\\[1mm]
\parbox{25mm}{\centering\footnotesize%
Scan with your phone\\
to watch the video
}
}
\faTv{} [Watch on YouTube](https://www.youtube.com/watch?v=gz-44N3MMOA&list=PLkDaE6sCZn6GMoA0wbpJLi3t34Gd8l0aK&index=33)
:::
:::
{{< margin-video "https://www.youtube.com/watch?v=gz-44N3MMOA&list=PLkDaE6sCZn6GMoA0wbpJLi3t34Gd8l0aK&index=33" "Data Pipelines" "MIT 6.S191" >}}
#### Feature Stores {#sec-ml-operations-feature-stores-75c2}
@@ -712,31 +688,7 @@ Proactive alerting mechanisms are configured to notify teams when anomalies or t
Ultimately, robust monitoring enables teams to detect problems before they escalate, maintain high service availability, and preserve the reliability and trustworthiness of machine learning systems. In the absence of such practices, models may silently degrade or systems may fail under load, undermining the effectiveness of the ML pipeline as a whole.
:::{#vid-monitoring .callout-important title="Model Monitoring"}
::: {.content-visible when-format="html"}
{{< video https://www.youtube.com/watch?v=hq_XyP9y0xg&list=PLkDaE6sCZn6GMoA0wbpJLi3t34Gd8l0aK&index=7 >}}
:::
::: {.content-visible when-format="pdf"}
\marginnote{\centering\\\vspace*{-25mm}%
\parbox{30mm}{\centering\footnotesize%
\textbf{Watch on YouTube}\\
Model Monitoring\\[1mm]
}
\begingroup
\hypersetup{urlcolor=black}
\qrcode[height=15mm]{https://www.youtube.com/watch?v=hq_XyP9y0xg&list=PLkDaE6sCZn6GMoA0wbpJLi3t34Gd8l0aK&index=7}
\endgroup
\\[1mm]
\parbox{25mm}{\centering\footnotesize%
Scan with your phone\\
to watch the video
}
}
\faTv{} [Watch on YouTube](https://www.youtube.com/watch?v=hq_XyP9y0xg&list=PLkDaE6sCZn6GMoA0wbpJLi3t34Gd8l0aK&index=7)
:::
:::
{{< margin-video "https://www.youtube.com/watch?v=hq_XyP9y0xg&list=PLkDaE6sCZn6GMoA0wbpJLi3t34Gd8l0aK&index=7" "Model Monitoring" "MIT 6.S191" >}}
### Governance and Collaboration {#sec-ml-operations-governance-collaboration-60b2}
@@ -768,31 +720,7 @@ For example, a data scientist working on an anomaly detection model may use Weig
By integrating collaborative tools, standardized documentation, and transparent experiment tracking, MLOps removes communication barriers that have traditionally slowed down ML workflows. It enables distributed teams to operate cohesively, accelerating iteration cycles and improving the reliability of deployed systems.
:::{#vid-deploy .callout-important title="Deployment Challenges"}
::: {.content-visible when-format="html"}
{{< video https://www.youtube.com/watch?v=UyEtTyeahus&list=PLkDaE6sCZn6GMoA0wbpJLi3t34Gd8l0aK&index=5 >}}
:::
::: {.content-visible when-format="pdf"}
\marginnote{\centering\\\vspace*{-25mm}%
\parbox{30mm}{\centering\footnotesize%
\textbf{Watch on YouTube}\\
Deployment Challenges\\[1mm]
}
\begingroup
\hypersetup{urlcolor=black}
\qrcode[height=15mm]{https://www.youtube.com/watch?v=UyEtTyeahus&list=PLkDaE6sCZn6GMoA0wbpJLi3t34Gd8l0aK&index=5}
\endgroup
\\[1mm]
\parbox{25mm}{\centering\footnotesize%
Scan with your phone\\
to watch the video
}
}
\faTv{} [Watch on YouTube](https://www.youtube.com/watch?v=UyEtTyeahus&list=PLkDaE6sCZn6GMoA0wbpJLi3t34Gd8l0aK&index=5)
:::
:::
{{< margin-video "https://www.youtube.com/watch?v=UyEtTyeahus&list=PLkDaE6sCZn6GMoA0wbpJLi3t34Gd8l0aK&index=5" "Deployment Challenges" "MIT 6.S191" >}}
## Hidden Technical Debt {#sec-ml-operations-hidden-technical-debt-e77e}


@@ -127,31 +127,7 @@ In 2015, security researchers publicly demonstrated a remote cyberattack on a Je
This demonstration served as a wake-up call for the automotive industry. It highlighted the risks posed by the growing connectivity of modern vehicles. Traditionally isolated automotive control systems, such as those managing steering and braking, were shown to be vulnerable when exposed through externally accessible software interfaces. The ability to remotely manipulate safety-critical functions raised serious concerns about passenger safety, regulatory oversight, and industry best practices.
:::{#vid-jeephack .callout-important title="Jeep Cherokee Hack"}
::: {.content-visible when-format="html"}
{{< video https://www.youtube.com/watch?v=MK0SrxBC1xs&ab_channel=WIRED >}}
:::
::: {.content-visible when-format="pdf"}
\marginnote{\centering\\\vspace*{-25mm}%
\parbox{30mm}{\centering\footnotesize%
\textbf{Watch on YouTube}\\
Jeep Cherokee Hack\\[1mm]
}
\begingroup
\hypersetup{urlcolor=black}
\qrcode[height=15mm]{https://www.youtube.com/watch?v=MK0SrxBC1xs&ab_channel=WIRED}
\endgroup
\\[1mm]
\parbox{25mm}{\centering\footnotesize%
Scan with your phone\\
to watch the video
}
}
\faTv{} [Watch on YouTube](https://www.youtube.com/watch?v=MK0SrxBC1xs&ab_channel=WIRED)
:::
:::
{{< margin-video "https://www.youtube.com/watch?v=MK0SrxBC1xs&ab_channel=WIRED" "Jeep Cherokee Hack" "WIRED" >}}
The incident also led to a recall of over 1.4 million vehicles to patch the vulnerability, highlighting the need for manufacturers to prioritize cybersecurity in their designs. The National Highway Traffic Safety Administration (NHTSA) issued guidelines for automakers to improve vehicle cybersecurity, including recommendations for secure software development practices and incident response protocols.
@@ -167,31 +143,7 @@ In 2016, the [Mirai botnet](https://www.cloudflare.com/learning/ddos/what-is-a-d
The Mirai botnet was used to overwhelm major internet infrastructure providers, disrupting access to popular online services across the United States and beyond. The scale of the attack demonstrated how vulnerable consumer and industrial devices can become a platform for widespread disruption when security is not prioritized in their design and deployment.
:::{#vid-mirai .callout-important title="Mirai Botnet"}
::: {.content-visible when-format="html"}
{{< video https://www.youtube.com/watch?v=1pywzRTJDaY >}}
:::
::: {.content-visible when-format="pdf"}
\marginnote{\centering\\\vspace*{-15mm}%
\parbox{30mm}{\centering\footnotesize%
\textbf{Watch on YouTube}\\
Mirai Botnet\\[1mm]
}
\begingroup
\hypersetup{urlcolor=black}
\qrcode[height=15mm]{https://www.youtube.com/watch?v=1pywzRTJDaY}
\endgroup
\\[1mm]
\parbox{25mm}{\centering\footnotesize%
Scan with your phone\\
to watch the video
}
}
\faTv{} [Watch on YouTube](https://www.youtube.com/watch?v=1pywzRTJDaY)
:::
:::
{{< margin-video "https://www.youtube.com/watch?v=1pywzRTJDaY" "Mirai Botnet" "Vice News" >}}
While the devices exploited by Mirai did not include machine learning components, the architectural patterns exposed by this incident are increasingly relevant as machine learning expands into edge computing and Internet of Things (IoT) devices. Many ML-enabled products, such as smart cameras, voice assistants, and edge analytics platforms, share similar deployment characteristics—operating on networked devices with limited hardware resources, often managed at scale.
@@ -702,33 +654,9 @@ When an incorrect password is entered, the power analysis chart changes as shown
![**Power Consumption Jump**: The blue line's sharp increase after processing the first byte indicates immediate authentication failure, highlighting how incorrect passwords are quickly detected through power usage. Source: Colin O'Flynn.](images/png/Power_analysis_of_an_encryption_device_with_a_wrong_password.png){#fig-encryption3}
These examples demonstrate how attackers can exploit observable power consumption differences to reduce the search space and eventually recover secret data through brute-force analysis. For a more detailed walkthrough, @vid-powerattack provides a step-by-step demonstration of how these attacks are performed.
These examples demonstrate how attackers can exploit observable power consumption differences to reduce the search space and eventually recover secret data through brute-force analysis. By systematically measuring power consumption patterns and correlating them with different inputs, attackers can extract sensitive information that should remain hidden.
:::{#vid-powerattack .callout-important title="Power Attack"}
::: {.content-visible when-format="html"}
{{< video https://www.youtube.com/watch?v=2iDLfuEBcs8 >}}
:::
::: {.content-visible when-format="pdf"}
\marginnote{\centering\\\vspace*{-15mm}%
\parbox{30mm}{\centering\footnotesize%
\textbf{Watch on YouTube}\\
Power Attack\\[1mm]
}
\begingroup
\hypersetup{urlcolor=black}
\qrcode[height=15mm]{https://www.youtube.com/watch?v=2iDLfuEBcs8}
\endgroup
\\[1mm]
\parbox{25mm}{\centering\footnotesize%
Scan with your phone\\
to watch the video
}
}
\faTv{} [Watch on YouTube](https://www.youtube.com/watch?v=2iDLfuEBcs8)
:::
:::
{{< margin-video "https://www.youtube.com/watch?v=2iDLfuEBcs8" "Power Attack" "Colin O'Flynn" >}}
Such attacks are not limited to cryptographic systems. Machine learning applications face similar risks. For example, an ML-based speech recognition system processing voice commands on a local device could leak timing or power signals that reveal which commands are being processed. Even subtle acoustic or electromagnetic emissions may expose operational patterns that an adversary could exploit to infer user behavior.


@@ -1,64 +0,0 @@
-- Margin Video Filter for MLSysBook
-- Converts {.margin-video} divs to appropriate HTML/PDF output
function Div(el)
if el.classes:includes("margin-video") then
local url = el.attr.attributes["url"] or ""
local title = el.attr.attributes["title"] or "Video"
local author = el.attr.attributes["author"] or ""
-- Extract YouTube video ID for HTML
local video_id = string.match(url, "youtube%.com/watch%?v=([%w_-]+)")
if not video_id then
video_id = string.match(url, "youtu%.be/([%w_-]+)")
end
if FORMAT:match("html") then
-- HTML: Margin video with auto-numbering
local caption = title
if author ~= "" then
caption = caption .. " - " .. author
end
local html_output = [[
<div class="column-margin">
<div class="margin-video">
<iframe src="https://www.youtube.com/embed/]] .. video_id .. [["
style="width: 100%; height: auto; aspect-ratio: 16/9; border: 0; border-radius: 6px;"
allowfullscreen>
</iframe>
</div>
<p><em>]] .. caption .. [[</em></p>
</div>
]]
return pandoc.RawBlock("html", html_output)
elseif FORMAT:match("latex") or FORMAT:match("pdf") then
-- PDF: QR code and margin note
local latex_output = [[
\marginnote{\centering\\\vspace*{5mm}%
\parbox{30mm}{\centering\footnotesize%
\textbf{Watch on YouTube}\\
]] .. title .. [[\\[1mm]
}
\begingroup
\hypersetup{urlcolor=black}
\qrcode[height=15mm]{]] .. url .. [[}
\endgroup
\\[1mm]
\parbox{25mm}{\centering\footnotesize%
Scan with your phone\\
to watch the video
}
}
\faTv{} \href{]] .. url .. [[}{Watch on YouTube}
]]
return pandoc.RawBlock("latex", latex_output)
end
end
return el
end