CS499/579, AI539 :: F23 :: Trustworthy Machine Learning



Overview

In recent years, we have seen a surge of machine learning (ML)-enabled applications in our lives, such as ChatGPT [link] or AI-assisted robotic surgery [link], which calls for a comprehensive understanding of their security and privacy implications. Research in Trustworthy ML (TML) studies the potential security and privacy risks an adversary can exploit. A well-studied risk is prediction manipulation via adversarial examples [link], which has led to the development of defenses such as adversarial training [link], a training mechanism that reduces the sensitivity of models to small input perturbations.
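To make the idea concrete, the sketch below shows one classic way an adversarial example can be crafted, the fast gradient sign method (FGSM). It is a minimal illustration in PyTorch, assuming a pretrained classifier "model" and a correctly classified input pair (x, y); these names and the epsilon value are illustrative, not part of the course materials.

import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.03):
    # Perturb x in the direction that increases the classification loss.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Take one signed-gradient step and clip back to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()

Adversarial training, mentioned above, counters this by generating such perturbed inputs during training and teaching the model to classify them correctly.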

In this class, students will familiarize themselves with emerging research on attacks against, and defenses for, ML-enabled systems. The class materials will cover three prominent threats: (i) adversarial attacks, (ii) data poisoning, and (iii) privacy risks. Students will review prior work, from classic papers to the latest research, implement basic attacks and defenses, evaluate their effectiveness, and conduct a mini-research project on a topic of their choice.

By the end of this course, we expect:

[Note] The class will be offered online starting the first week of November.


Latest Announcements [Full List]


Course Information

Instructor


Course Policy

The University's Code of Academic Integrity applies, modified as follows:

[Don'ts]
[Do's]
Must: Please list the names of any students who helped you. Doing so will not affect your homework or project scores, but it will teach you to credit others for their contributions, an essential skill for future collaboration.