Student Feedback - Chapter 17 #197
Originally created by @jasonjabbour on GitHub (Aug 27, 2024).
Originally assigned to: @jasonjabbour on GitHub.
Chapter 17 - Robust AI
Originally posted by @sgiannuzzi39 in https://github.com/harvard-edge/cs249r_book/discussions/256#discussioncomment-10087259
@jasonjabbour commented on GitHub (Sep 1, 2024):
Additional Feedback to Address:
Discuss a more AI-oriented example of an SDC or an attack.
Above Figure 17.2, the text reads “SDCS” for silent data corruptions; shouldn’t it be “SDCs”?
Add in Section 17.3 that these can also happen as a result of an attack (e.g., Rowhammer).
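For reference while addressing this, a minimal sketch (plain Python, hypothetical weight value) of how a single Rowhammer-style bit flip in a float32 weight becomes a silent data corruption:

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit in the IEEE-754 float32 encoding of `value`."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    as_int ^= 1 << bit
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int))
    return flipped

weight = 0.5                      # hypothetical trained weight
corrupted = flip_bit(weight, 30)  # flip a high exponent bit, as Rowhammer might
print(weight, "->", corrupted)    # 0.5 -> ~1.7e38: a huge, silent error
```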
Above Figure 17.6 it mentions: “a significant different in the gradient norm”, and I am not sure the reader at this point would understand what that means in terms of concrete consequences.
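To make that concrete for the reader, something like the following toy sketch could be adapted (NumPy, made-up linear model and data): a single corrupted weight blows up the gradient norm by orders of magnitude, which is the observable symptom the text is gesturing at.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))      # toy inputs
y = X @ rng.normal(size=10)        # toy targets
w = rng.normal(size=10) * 0.1      # healthy weights

def grad_norm(w):
    # Gradient of mean squared error for the linear model y_hat = X @ w.
    residual = X @ w - y
    grad = 2 * X.T @ residual / len(y)
    return np.linalg.norm(grad)

print("healthy  :", grad_norm(w))
w_faulty = w.copy()
w_faulty[0] = 1e6                  # one corrupted weight (e.g., a bit flip)
print("corrupted:", grad_norm(w_faulty))  # norm larger by orders of magnitude
```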
The text right above the start of Section 17.3.2 makes it seem like BNNs are a solution to the bit-flip problem, but that’s not what you mean: “Networks [BNNs] (Courbariaux et al. 2016) have emerged as a promising solution”.
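Whatever claim is intended there, a quick numeric contrast might help the reader see why BNNs come up in a bit-flip discussion at all: a flip in a binarized weight can only toggle it between +1 and -1 (a bounded error), while an exponent-bit flip in float32 is effectively unbounded. A sketch under that assumption:

```python
# Worst-case damage from a single bit flip, float32 vs. binarized weight.
float32_error = abs(1.7014118e38 - 0.5)  # exponent-bit flip (see sketch above)
binary_error = abs(-1 - (+1))            # sign flip in a +/-1 binarized weight
print(f"float32 worst case: {float32_error:.3e}")  # ~1.7e38
print(f"binary  worst case: {binary_error}")       # exactly 2
```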
Question: Would different types of AI models have different levels of weakness to the different patterns of errors? Is a CNN more vulnerable than a DNN?
At times the chapter feels too general-purpose and not specific enough to AI.
The third paragraph above Figure 17.13 is a bit odd: it mentions TMR once and then only discusses the shortcomings of DMR.
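A minimal sketch (hypothetical replica outputs) of the distinction that paragraph should draw, i.e., TMR masks a single fault by majority vote while DMR can only detect a mismatch:

```python
from collections import Counter

def tmr_vote(a, b, c):
    """Triple modular redundancy: a majority vote masks one faulty replica."""
    winner, count = Counter([a, b, c]).most_common(1)[0]
    return winner if count >= 2 else None  # None: all three disagree

def dmr_check(a, b):
    """Dual modular redundancy: detects a mismatch but cannot tell which
    replica is wrong, so it can only flag the error, not correct it."""
    return a if a == b else None

print(tmr_vote(7, 7, 9))  # 7 -- the faulty third replica is outvoted
print(dmr_check(7, 9))    # None -- fault detected, but not correctable
```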
Maybe include more details about Google’s SDC checker? What is it? How does it know when there is an SDC?
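The chapter should cite Google’s actual mechanism, but as one illustration of how an SDC can be detected at all, here is a sketch of classic algorithm-based fault tolerance (ABFT) checksums for matrix multiply; this is a standard textbook technique, not a claim about Google’s implementation:

```python
import numpy as np

def checked_matmul(A, B, tol=1e-6):
    """ABFT for C = A @ B: the checksum row (1^T A) B must equal the
    column sums of C, so silent corruption of C is detectable."""
    C = A @ B
    checksum = A.sum(axis=0) @ B
    if not np.allclose(C.sum(axis=0), checksum, atol=tol):
        raise RuntimeError("silent data corruption detected in matmul")
    return C

rng = np.random.default_rng(1)
A, B = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
C = checked_matmul(A, B)           # passes on healthy hardware
C[3, 4] += 1.0                     # simulate an SDC after the multiply
assert not np.allclose(C.sum(axis=0), A.sum(axis=0) @ B)  # checksum now fails
```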
Typo in the Greybox Attack bullet point in Section 17.4.1 (“black black-box box grey-boxes”).
Do you differentiate between AI and machine learning early on? Here you seem to use the two interchangeably.
Mention Nightshade to Vijay to see if he wants to incorporate it (he cites it as by Tome; you may be able to reference the most recent published version of the work):
Shan, S., Ding, W., Passananti, J., Wu, S., Zheng, H., & Zhao, B. Y. (2024, May). Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models. In 2024 IEEE Symposium on Security and Privacy (SP) (pp. 212-212). IEEE Computer Society.
Iris Bahar wrote some work that shows the vulnerability of object identification models to slight perturbations. Maybe you could incorporate her work somewhere. The citation is:
X. Chen et al., "GRIP: Generative Robust Inference and Perception for Semantic Robot Manipulation in Adversarial Environments," 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 2019, pp. 3988-3995, doi: 10.1109/IROS40897.2019.8967983.
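Not the GRIP pipeline itself, but a minimal FGSM-style sketch (toy linear classifier, made-up data) of the kind of slight-perturbation attack this line of work is about: each input feature moves by at most eps, yet the classifier’s score is pushed toward, and typically across, the decision boundary.

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(size=100)   # toy linear classifier weights
x = rng.normal(size=100)   # an input the classifier currently gets right
label = 1.0 if w @ x > 0 else -1.0

# FGSM-style step: move every feature by eps in the direction that
# most decreases the classifier's margin label * (w @ x).
eps = 0.1
x_adv = x - eps * label * np.sign(w)

print("clean score     :", w @ x)
print("perturbed score :", w @ x_adv)                # typically flips sign
print("max perturbation:", np.abs(x_adv - x).max())  # still just eps
```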
The Scope and Knowledge bullet points seem out of place in the list of attack examples in Section 17.4.2.
Some of my colleagues at CU are working on integrating control systems with ML models to defend against some of the attacks discussed. I believe this is an emerging topic within the controls community. I can get you more information on this if you are interested.
I don’t think that Figure 17.26 matches what it is described as.
The distribution shift section seems out of place. It is unclear how the shift characteristics (first bulleted list) and the manifestation forms (second bulleted list) are different; they sound about the same as the list above them.
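If a concrete hook would help that section, a small sketch (made-up feature distributions, SciPy’s two-sample KS test) of how covariate shift is actually detected in deployment:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
train_feature = rng.normal(0.0, 1.0, size=5000)   # a feature at training time
deploy_feature = rng.normal(0.4, 1.0, size=5000)  # same feature in deployment

# Two-sample Kolmogorov-Smirnov test: a tiny p-value flags that deployed
# inputs no longer match the training distribution.
stat, p_value = ks_2samp(train_feature, deploy_feature)
print(f"KS statistic = {stat:.3f}, p = {p_value:.2e}")
if p_value < 0.01:
    print("covariate shift detected: inputs drifted from the training data")
```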
Is “uncertainty quantification techniques” a generally well-understood concept? I know about it because I work closely with a colleague in CS who does this, but otherwise I would not have known anything about it; that may just be a gap on my part.
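For readers in the same position, one sentence plus a tiny example might be enough. A minimal sketch of Monte Carlo dropout (toy two-layer network, made-up weights), one common uncertainty quantification technique: keep dropout on at inference, run many stochastic forward passes, and read the spread of the outputs as the model’s uncertainty.

```python
import numpy as np

rng = np.random.default_rng(4)
W1 = rng.normal(size=(10, 32))  # toy two-layer network weights
W2 = rng.normal(size=32)
x = rng.normal(size=10)

def predict(x, drop_p=0.5):
    """One stochastic forward pass with dropout left ON at inference."""
    h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) >= drop_p     # Monte Carlo dropout mask
    return (h * mask / (1.0 - drop_p)) @ W2

samples = np.array([predict(x) for _ in range(200)])
print(f"prediction  = {samples.mean():.3f}")
print(f"uncertainty = {samples.std():.3f}")  # spread across passes
```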