The widespread adoption of machine learning (ML) in real-world applications, such as self-driving cars [link] or AI-assisted robotic surgery [link], calls for a comprehensive understanding of its security and privacy implications. Consequently, research in the field of adversarial machine learning (AML) studies the (potential) security and privacy threats an adversary can pose, e.g., predictions manipulated by adversarial examples [link]. These efforts have led to the development of defense mechanisms, e.g., adversarial training [link], a training procedure that reduces the sensitivity of models to small input perturbations.
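To make these two concepts concrete, below is a minimal sketch, in PyTorch, of crafting an adversarial example with the fast gradient sign method (FGSM) and using it for adversarial training. The model architecture, epsilon budget, and random data are illustrative assumptions, not part of the course materials.

```python
# Minimal sketch: FGSM adversarial examples and adversarial training.
# The model, epsilon, and data below are illustrative placeholders.
import torch
import torch.nn as nn

def fgsm_example(model, x, y, epsilon=0.03):
    """Perturb x by one signed-gradient step that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Small L-infinity perturbation in the direction that raises the loss.
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """Train on perturbed inputs so small perturbations stop flipping predictions."""
    x_adv = fgsm_example(model, x, y, epsilon)
    optimizer.zero_grad()  # clear gradients accumulated while crafting x_adv
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Toy model and random data, purely for illustration.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.rand(8, 1, 28, 28), torch.randint(0, 10, (8,))
    print(adversarial_training_step(model, optimizer, x, y))
```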
In this class, students will have the opportunity to familiarize themselves with the emerging research on attacks against ML systems and the corresponding defenses. The class materials cover three prominent threats in this field: (i) adversarial examples, (ii) data poisoning, and (iii) membership inference. Students will review research papers, implement the attacks and defenses, evaluate their effectiveness, and conduct a mini-research project on a topic of their choice.
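As a flavor of threat (iii), here is a minimal sketch of a simple loss-threshold membership inference attack: it guesses that a point was in the training set when the model's loss on it is unusually low. The threshold value and inputs are illustrative assumptions, not the course's reference implementation.

```python
# Minimal sketch: loss-threshold membership inference.
# The threshold is an illustrative assumption; in practice it would be
# calibrated, e.g., on data known to be outside the training set.
import torch
import torch.nn as nn

@torch.no_grad()
def infer_membership(model, x, y, threshold=0.5):
    """Guess 'member' when the per-example loss falls below the threshold,
    exploiting that models typically fit training points more tightly."""
    losses = nn.functional.cross_entropy(model(x), y, reduction="none")
    return losses < threshold  # True = predicted training-set member
```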
By the end of the course, we expect:
The University's Code of Academic Integrity applies, modified as follows: