mirror of https://github.com/harvard-edge/cs249r_book.git
synced 2026-04-29 17:20:21 -05:00
fix(content): correct table formatting and improve readability
Fix table formatting issues in the privacy_security and responsible_ai chapters that were causing pre-commit validation failures. The changes ensure proper column widths, alignment, and content spacing.

Changes:

- Fix adversarial knowledge spectrum table spacing in privacy_security.qmd
- Restructure the practitioner decision framework table in responsible_ai.qmd for improved readability by consolidating deployment contexts into clear single-line entries with examples in parentheses
- Change table header from 'Context' to 'Deployment Context' for clarity
- Correct typos: 'aconservative' to 'conservative'
- Remove extra spacing throughout table cells
- Update codespell ignore list
- Auto-format index.qmd

All tables now pass pre-commit validation checks with proper bolding, alignment, and column spacing.
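The validation referenced above checks that grid-table cell separators stay aligned with the table borders. As a rough, hypothetical sketch of that kind of check (the repository's actual pre-commit hook is not part of this commit, so the function name and logic below are assumptions for illustration only):

```python
# Hypothetical sketch of the kind of alignment check the pre-commit table
# validation performs; this is NOT the repository's actual hook, which is
# not shown in this commit.
def grid_table_is_aligned(lines: list[str]) -> bool:
    """True if every row's separators line up with the top border's '+' marks."""
    border = lines[0]
    if not (border.startswith("+") and border.endswith("+")):
        return False
    cols = [i for i, ch in enumerate(border) if ch == "+"]
    for line in lines[1:]:
        if len(line) != len(border):
            return False  # ragged right edge: cell padding drifted
        expected = "+" if line.startswith("+") else "|"
        if any(line[i] != expected for i in cols):
            return False  # a cell divider is off the border grid
    return True
```

Run against the tables in the diff below, a check like this fails whenever cell padding no longer matches the border widths and passes once every `|` sits on the grid, which is what the rewritten table bodies restore.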
@@ -110,3 +110,4 @@ Dota
 ALine
 SER
 FO
+LineZ
@@ -1025,16 +1025,16 @@ These attacker models can be summarized along a spectrum of knowledge levels. @t
 
 Common attack strategies include surrogate model construction, transfer attacks exploiting adversarial transferability, and GAN-based perturbation generation. The technical details of these approaches and their mathematical formulations are thoroughly covered in @sec-robust-ai.
 
-+---------------------------------------+----------------------------------------------+---------------------------+------------------------------------------------+---------------------------------------------+
-| **Adversary Knowledge Level**         | **Model Access**                             | **Training Data Access**  | **Attack Example**                             | **Common Scenario**                         |
-+:======================================+:=============================================+:==========================+:===============================================+:============================================+
-| **White-box**                         | Full access to architecture and parameters  | Full access               | Crafting adversarial examples using gradients  | Insider threats, open-source model reuse    |
-+---------------------------------------+----------------------------------------------+---------------------------+------------------------------------------------+---------------------------------------------+
-| **Grey-box**                          | Partial access (e.g., architecture only)    | Limited or no access      | Attacks based on surrogate model approximation | Known model family, unknown fine-tuning     |
-+---------------------------------------+----------------------------------------------+---------------------------+------------------------------------------------+---------------------------------------------+
-| **Black-box**                         | No internal access; only query-response view | No access                 | Query-based surrogate model training and       | Public APIs, model-as-a-service deployments |
-|                                       |                                              |                           | transfer attacks                               |                                             |
-+---------------------------------------+----------------------------------------------+---------------------------+------------------------------------------------+---------------------------------------------+
++--------------------------------+----------------------------------------------+---------------------------+------------------------------------------------+---------------------------------------------+
+| **Adversary Knowledge Level**  | **Model Access**                             | **Training Data Access**  | **Attack Example**                             | **Common Scenario**                         |
++:===============================+:=============================================+:==========================+:===============================================+:============================================+
+| **White-box**                  | Full access to architecture and parameters  | Full access               | Crafting adversarial examples using gradients  | Insider threats, open-source model reuse    |
++--------------------------------+----------------------------------------------+---------------------------+------------------------------------------------+---------------------------------------------+
+| **Grey-box**                   | Partial access (e.g., architecture only)    | Limited or no access      | Attacks based on surrogate model approximation | Known model family, unknown fine-tuning     |
++--------------------------------+----------------------------------------------+---------------------------+------------------------------------------------+---------------------------------------------+
+| **Black-box**                  | No internal access; only query-response view | No access                 | Query-based surrogate model training and       | Public APIs, model-as-a-service deployments |
+|                                |                                              |                           | transfer attacks                               |                                             |
++--------------------------------+----------------------------------------------+---------------------------+------------------------------------------------+---------------------------------------------+
 
 : **Adversarial Knowledge Spectrum**: Varying levels of attacker access to model details and training data define distinct threat models, influencing the feasibility and sophistication of adversarial attacks and impacting deployment security strategies. The table categorizes these models by access level, typical attack methods, and common deployment scenarios, clarifying the practical challenges of securing machine learning systems. {#tbl-adversary-knowledge-spectrum}
 
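The black-box row above describes query-based surrogate training followed by transfer attacks. As a self-contained, illustrative sketch of that flow (not code from the book; the toy victim model, query budget, and step sizes are invented for demonstration):

```python
# Illustrative sketch only (not code from the book): the black-box row of
# the table above. The adversary has a query-response view of the victim,
# trains a local surrogate on harvested labels, and crafts an FGSM-style
# perturbation against the surrogate, hoping it transfers.
import numpy as np

rng = np.random.default_rng(0)
W_SECRET = rng.normal(size=2)  # hidden victim parameters (never exposed)

def black_box_predict(x: np.ndarray) -> int:
    """Query interface: hard labels only, no gradients, no internals."""
    return int(x @ W_SECRET > 0)

# Step 1: harvest query-response pairs through the public interface.
queries = rng.normal(size=(500, 2))
labels = np.array([black_box_predict(x) for x in queries])

# Step 2: fit a logistic-regression surrogate by gradient descent.
w = np.zeros(2)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(queries @ w)))
    w -= 0.1 * queries.T @ (p - labels) / len(labels)

# Step 3: FGSM on the surrogate -- the gradient of the log-loss w.r.t. the
# input is (p - y) * w -- then check whether the attack transfers.
x = rng.normal(size=2)
y = black_box_predict(x)
p = 1.0 / (1.0 + np.exp(-(x @ w)))
x_adv = x + 0.5 * np.sign((p - y) * w)
print(f"clean label: {y}, label after transfer attack: {black_box_predict(x_adv)}")
```

With only hard-label query access, the adversary's gradient comes entirely from the surrogate, which is why adversarial transferability, covered in @sec-robust-ai, determines whether the attack succeeds.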
@@ -1830,54 +1830,43 @@ Given these implementation challenges, practitioners need systematic approaches
 
 - **When stakeholder values differ**: Document trade-offs explicitly and create contestability mechanisms allowing affected users to challenge decisions.
 
-+------------------------------+-----------------+------------------------------+-----------------------------------------+
-| **Context**                  | **Primary**     | **Implementation Priority**  | **Acceptable Trade-offs**               |
-|                              | **Principles**  |                              |                                         |
-+:=============================+:================+:=============================+:========================================+
-| **High-Stakes Individual**   | Fairness,       | Mandatory fairness metrics   | Accept 2-5% accuracy reduction          |
-|                              | Explainability, | across protected groups;     | for interpretability;                   |
-| **Decisions**                | Accountability  | explainability for negative  | 20-100 ms latency for                   |
-|                              |                 | outcomes; human oversight    | explanations; higher                    |
-| **(healthcare diagnosis,**   |                 | for edge cases               | computational costs                     |
-| **credit/loans, criminal**   |                 |                              |                                         |
-| **justice, employment)**     |                 |                              |                                         |
-+------------------------------+-----------------+------------------------------+-----------------------------------------+
-| **Safety-Critical**          | Safety,         | Certified adversarial        | Accept significant training             |
-|                              | Robustness,     | defenses; formal validation; | overhead (100-300% for                  |
-| **Systems**                  | Accountability  | failsafe mechanisms;         | adversarial training);                  |
-|                              |                 | comprehensive logging;       | aconservative confidence                |
-| **(autonomous vehicles,**    |                 |                              | thresholds; redundant                   |
-| **medical devices,**         |                 |                              | inference                               |
-| **industrial control)**      |                 |                              |                                         |
-+------------------------------+-----------------+------------------------------+-----------------------------------------+
-| **Privacy-Sensitive**        | Privacy,        | Differential privacy         | Accept 2-5% accuracy loss               |
-|                              | Security,       | (ε≤1.0); local processing;   | for DP; higher client-side              |
-| **Applications**             | Transparency    | data minimization; user      | compute; limited model                  |
-|                              |                 | consent mechanisms           | updates; reduced                        |
-| **(health records,**         |                 |                              | personalization                         |
-| **financial data,**          |                 |                              |                                         |
-| **personal communications)** |                 |                              |                                         |
-+------------------------------+-----------------+------------------------------+-----------------------------------------+
-| **Large-Scale Consumer**     | Fairness,       | Bias monitoring across       | Balance explainability costs            |
-| **Systems**                  | Transparency,   | demographics; explanation    | against scale (streaming SHAP           |
-|                              | Safety          | mechanisms; content policy   | vs. full SHAP); accept                  |
-| **(content recommendation,** |                 | enforcement; feedback loops  | 5-15 ms latency for fairness            |
-| **search, advertising)**     |                 | detection                    | checks; invest in monitoring            |
-|                              |                 |                              | infrastructure                          |
-+------------------------------+-----------------+------------------------------+-----------------------------------------+
-| **Resource-Constrained**     | Privacy,        | Local inference; data        | Sacrifice real-time fairness            |
-|                              | Efficiency,     | locality; input validation;  | monitoring; use lightweight             |
-| **Deployments**              | Safety          | graceful degradation         | explainability (gradients over          |
-|                              |                 |                              | SHAP); pre-deployment validation only;  |
-| **(mobile, edge, TinyML)**   |                 |                              | limited model complexity                |
-+------------------------------+-----------------+------------------------------+-----------------------------------------+
-| **Research/Exploratory**     | Transparency,   | Documentation of known       | Can deprioritize sophisticated          |
-|                              | Safety (harm    | limitations; restricted      | fairness/explainability for             |
-| **Systems**                  | prevention)     | user populations; monitoring | internal use; focus on                  |
-|                              |                 | for unintended harms         | observability and rapid iteration       |
-| **(internal tools,**         |                 |                              |                                         |
-| **prototypes, A/B tests)**   |                 |                              |                                         |
-+------------------------------+-----------------+------------------------------+-----------------------------------------+
++------------------------------------------+-----------------+------------------------------+------------------------------------------+
+| **Deployment Context**                   | **Primary**     | **Implementation Priority**  | **Acceptable Trade-offs**                |
+|                                          | **Principles**  |                              |                                          |
++:=========================================+:================+:=============================+:=========================================+
+| **High-Stakes Individual Decisions**     | Fairness,       | Mandatory fairness metrics   | Accept 2-5% accuracy reduction for       |
+| **(healthcare diagnosis, credit/loans,** | Explainability, | across protected groups;     | interpretability; 20-100 ms latency for  |
+| **criminal justice, employment)**        | Accountability  | explainability for negative  | explanations; higher computational costs |
+|                                          |                 | outcomes; human oversight    |                                          |
+|                                          |                 | for edge cases               |                                          |
++------------------------------------------+-----------------+------------------------------+------------------------------------------+
+| **Safety-Critical Systems**              | Safety,         | Certified adversarial        | Accept significant training overhead     |
+| **(autonomous vehicles, medical**        | Robustness,     | defenses; formal validation; | (100-300% for adversarial training);     |
+| **devices, industrial control)**         | Accountability  | failsafe mechanisms;         | conservative confidence thresholds;      |
+|                                          |                 | comprehensive logging        | redundant inference                      |
++------------------------------------------+-----------------+------------------------------+------------------------------------------+
+| **Privacy-Sensitive Applications**       | Privacy,        | Differential privacy         | Accept 2-5% accuracy loss for DP; higher |
+| **(health records, financial data,**     | Security,       | (ε≤1.0); local processing;   | client-side compute; limited model       |
+| **personal communications)**             | Transparency    | data minimization; user      | updates; reduced personalization         |
+|                                          |                 | consent mechanisms           |                                          |
++------------------------------------------+-----------------+------------------------------+------------------------------------------+
+| **Large-Scale Consumer Systems**         | Fairness,       | Bias monitoring across       | Balance explainability costs against     |
+| **(content recommendation, search,**     | Transparency,   | demographics; explanation    | scale (streaming SHAP vs. full SHAP);    |
+| **advertising)**                         | Safety          | mechanisms; content policy   | accept 5-15 ms latency for fairness      |
+|                                          |                 | enforcement; feedback loops  | checks; invest in monitoring             |
+|                                          |                 | detection                    | infrastructure                           |
++------------------------------------------+-----------------+------------------------------+------------------------------------------+
+| **Resource-Constrained Deployments**     | Privacy,        | Local inference; data        | Sacrifice real-time fairness monitoring; |
+| **(mobile, edge, TinyML)**               | Efficiency,     | locality; input validation;  | use lightweight explainability           |
+|                                          | Safety          | graceful degradation         | (gradients over SHAP); pre-deployment    |
+|                                          |                 |                              | validation only; limited model           |
+|                                          |                 |                              | complexity                               |
++------------------------------------------+-----------------+------------------------------+------------------------------------------+
+| **Research/Exploratory Systems**         | Transparency,   | Documentation of known       | Can deprioritize sophisticated           |
+| **(internal tools, prototypes,**         | Safety (harm    | limitations; restricted      | fairness/explainability for internal     |
+| **A/B tests)**                           | prevention)     | user populations; monitoring | use; focus on observability and rapid    |
+|                                          |                 | for unintended harms         | iteration                                |
++------------------------------------------+-----------------+------------------------------+------------------------------------------+
 
 : **Practitioner Decision Framework**: Prioritizing responsible AI principles based on deployment context, showing primary principles, implementation priorities, and acceptable trade-offs for different system types. This framework guides practitioners in making context-appropriate decisions when principles conflict or resources are constrained. {#tbl-practitioner-decision-framework}
 
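The privacy-sensitive row above pairs differential privacy at ε≤1.0 with an accepted 2-5% accuracy loss. A minimal sketch of where that trade-off comes from, assuming the standard Laplace mechanism (illustrative only; not code from the book):

```python
# Illustrative sketch only (not code from the book): why the table treats
# epsilon <= 1.0 as a meaningful privacy budget. The Laplace mechanism adds
# noise with scale sensitivity/epsilon, so tighter budgets buy privacy at
# the cost of accuracy.
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with epsilon-differential privacy."""
    return true_value + rng.laplace(scale=sensitivity / epsilon)

# A counting query has sensitivity 1: adding or removing one record moves
# the count by at most 1. Compare the noise across privacy budgets.
true_count = 437.0
for eps in (0.1, 0.5, 1.0, 5.0):
    noisy = laplace_mechanism(true_count, sensitivity=1.0, epsilon=eps)
    print(f"epsilon={eps:>4}: noisy count = {noisy:7.1f} (error {abs(noisy - true_count):5.1f})")
```

Halving ε doubles the expected noise magnitude, which is the mechanical reason tighter privacy budgets cost accuracy and why the framework fixes a budget per deployment context rather than per query.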
@@ -25,7 +25,7 @@ format:
 <div class="book-card">
 <img src="assets/images/covers/cover-hardcover-book.png" alt="Machine Learning Systems Book Cover" class="book-image">
 <p class="book-title">Early Access Preview</p>
-<p class="book-subtitle">Publisher: MIT Press (2026)</p>
+<p class="book-subtitle">Publisher: The MIT Press (2026)</p>
 <p style="font-size: 0.8em; color: #6c757d; margin-top: 6px; margin-bottom: 0;">📖 Click here to download PDF</p>
 </div>
 </a>