Fix figure placement issues in data_selection and dnn_architectures

- data_selection.qmd: Add @fig-selection-inequality reference before figure
- dnn_architectures.qmd: Remove early forward reference to @fig-im2col-diagram
  (figure appears later in the chapter with proper introduction)
Vijay Janapa Reddi
2026-02-03 23:56:32 -05:00
parent 4d62bfe9ae
commit f502c7e733
5 changed files with 6 additions and 2 deletions


@@ -1352,7 +1352,7 @@ The following analysis formalizes this heuristic, deriving what we call *the sel
 :::
-The following figure visualizes this trade-off, contrasting efficient and expensive selection strategies.
+@fig-selection-inequality visualizes this trade-off, contrasting efficient and expensive selection strategies.
 ::: {#fig-selection-inequality fig-cap="**The Selection Inequality**\index{Data Selection Systems!selection inequality}: Data selection only improves end-to-end efficiency if the overhead of selection plus training on the subset is less than training on the full dataset. A lightweight selection function (proxy model, cached embeddings) keeps selection overhead low; an expensive selection function (full model forward pass) can negate the savings." fig-alt="Stacked bar chart comparing three approaches: Baseline shows a single tall bar (100) for full training; Efficient Selection shows two short stacked bars (5 selection overhead plus 40 subset training) totaling 45 with a 55 percent savings annotation; Expensive Selection shows two stacked bars (60 selection overhead plus 40 subset training) totaling 100 with a No savings annotation."}
 ```{python}

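The inequality described in the figure caption above is easy to sanity-check numerically. The sketch below is purely illustrative: the function name `selection_pays_off` is hypothetical, and the cost numbers are the ones from the figure's alt text, not measurements.

```python
def selection_pays_off(selection_overhead, subset_training, full_training):
    """Return (worth_it, savings_fraction) for a data-selection strategy.

    Selection only improves end-to-end efficiency when
    selection_overhead + subset_training < full_training.
    """
    total = selection_overhead + subset_training
    return total < full_training, 1 - total / full_training

# Efficient selection (cheap proxy model): 5 + 40 = 45 < 100, so it pays off.
print(selection_pays_off(5, 40, 100))
# Expensive selection (full forward pass): 60 + 40 = 100, savings vanish.
print(selection_pays_off(60, 40, 100))
```

With the figure's numbers, the efficient strategy yields roughly 55 percent savings, while the expensive strategy breaks even and the inequality fails.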

@@ -1243,7 +1243,7 @@ The architectural efficiency of CNNs allows further optimization through special
 Convolution operations create computational patterns distinct from MLP dense matrix multiplication. While high-level frameworks abstract this as a sliding window, the underlying hardware implementation typically transforms the problem to leverage highly optimized matrix multiplication units.
-The most common transformation is **im2col** (image-to-column), illustrated in @fig-im2col-diagram. This technique rearranges the input image patches into columns of a large matrix, allowing the convolution to be executed as a single General Matrix Multiplication (GEMM).
+The most common transformation is **im2col** (image-to-column), which rearranges the input image patches into columns of a large matrix, allowing the convolution to be executed as a single General Matrix Multiplication (GEMM). (The im2col transformation is illustrated later in this chapter when we discuss computational primitives.)
 ::: {#lst-conv_layer_spatial lst-cap="**Convolutional Layer Abstraction**: Framework-level convolution operations hide the complexity of sliding window computations, typically dispatching to cuDNN or MKL which internally handle im2col transformations or direct convolution algorithms."}
 ```{.python}

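The im2col transformation described in this hunk can be sketched in a few lines of NumPy. This is a minimal illustration (single channel, stride 1, no padding); `im2col` here is a hypothetical helper for clarity, not the cuDNN or MKL implementation the chapter refers to.

```python
import numpy as np

def im2col(x, k):
    """Rearrange all k-by-k patches of a 2-D image into columns of a matrix."""
    h, w = x.shape
    out_h, out_w = h - k + 1, w - k + 1
    cols = np.empty((k * k, out_h * out_w), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            # Each sliding-window patch becomes one column.
            cols[:, i * out_w + j] = x[i:i + k, j:j + k].ravel()
    return cols

# Convolution (as cross-correlation) becomes a single GEMM:
# a 1-by-(k*k) weight row times the (k*k)-by-(out_h*out_w) patch matrix.
x = np.arange(16, dtype=float).reshape(4, 4)
kern = np.ones((3, 3))          # illustrative all-ones kernel
y = (kern.ravel() @ im2col(x, 3)).reshape(2, 2)
print(y)  # each entry is the sum of one 3x3 patch of x
```

Production libraries avoid the explicit Python loops (and often the materialized patch matrix) with strided views or fused kernels, but the GEMM reformulation is the same.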
book/quarto/index.idx Normal file

book/quarto/index.ilg Normal file

@@ -0,0 +1,4 @@
+This is makeindex, version 2.17 [TeX Live 2025] (kpathsea + Thai support).
+Scanning input file index.idx...done (0 entries accepted, 0 rejected).
+Nothing written in index.ind.
+Transcript written in index.ilg.

book/quarto/index.ind Normal file