CS 499/579 :: Spring 2023 :: Trustworthy Machine Learning



Overview

The widespread adoption of machine learning (ML) in many real-world applications, such as self-driving cars [link] or AI-assisted robotic surgery [link], calls for a comprehensive understanding of its security and privacy implications. Consequently, research in the field of adversarial machine learning (AML) studies the security and privacy threats an adversary can pose, e.g., manipulating a model's predictions with adversarial examples [link]. These efforts have led to the development of defense mechanisms, e.g., adversarial training [link]—a training procedure that reduces the sensitivity of models to small input perturbations.
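To give a flavor of the attacks studied in this field, below is a minimal sketch of the fast gradient sign method (FGSM), one standard way to craft adversarial examples, applied to a toy logistic-regression model. The weights, input, and perturbation budget here are hypothetical illustrations; attacks in practice target deep networks and compute gradients via autodiff frameworks.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM on a logistic-regression model (hypothetical example).

    Moves x a distance eps in the sign of the loss gradient, which
    increases the cross-entropy loss and can flip the predicted label.
    """
    # Gradient of the cross-entropy loss w.r.t. the input x:
    # dL/dx = (sigmoid(w.x + b) - y) * w
    grad_x = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad_x)

# Toy setup: a 2-feature input the model classifies as class 1.
w = np.array([2.0, -1.0])  # hypothetical model weights
b = 0.0
x = np.array([0.3, 0.1])   # clean input

x_adv = fgsm_perturb(x, y=1.0, w=w, b=b, eps=0.5)

print(sigmoid(w @ x + b) > 0.5)      # clean prediction: True (class 1)
print(sigmoid(w @ x_adv + b) > 0.5)  # adversarial prediction: False (flipped)
```

Adversarial training, mentioned above, counters exactly this: it generates such perturbed inputs during training and teaches the model to classify them correctly.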

In this class, students will have an opportunity to familiarize themselves with emerging research on attacks against, and defenses for, ML systems. The class materials will cover three prominent threats in this field: (i) adversarial examples, (ii) data poisoning, and (iii) membership inference. Students will review research papers, implement the attacks and defenses, evaluate their effectiveness, and conduct a mini research project on a topic of their choice.

In the end, we expect:


Latest Announcements [Full List]


Course Information

Instructor


Course Policy

The University's Code of Academic Integrity applies, modified as follows:

[Don'ts]
[Do's]
Must: Please write down the names of any students from whom you received help. Doing so will not affect your homework or project scores, but it builds the habit of crediting others for their contributions—an essential skill for future collaboration.