Part I: Overview and Motivation |
Thu. 09/28 |
Introduction [Slides] |
[HW 1 Out] |
(Classic) SoK: Security and Privacy in Machine Learning
|
Part II: Adversarial Examples |
Tue. 10/03 |
Preliminaries [Slides] |
|
(Classic) Explaining and Harnessing Adversarial Examples
(Classic) Towards Evaluating the Robustness of Neural Networks
(Classic) Towards Deep Learning Models Resistant to Adversarial Attacks
|
Thu. 10/05 |
Attacks [Slides] |
|
(Classic) Delving into Transferable Adversarial Examples and Black-box Attacks
(Classic) The Space of Transferable Adversarial Examples
(Recent) Why Do Adversarial Attacks Transfer?
|
Tue. 10/10 |
Attacks [Slides] |
[Team-up!]
|
(Classic) Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors
(Recent) Improving Black-box Adversarial Attacks with a Transfer-based Prior
|
Thu. 10/12 |
Attacks [Slides] |
[HW 1 Due]
[HW 2 Out]
|
(Classic) Adversarial Examples in the Physical World
(Recent) Dirty Road Can Attack: ... (title truncated for space)
(Recent) Universal and Transferable Adversarial Attacks on Aligned Language Models
|
Tue. 10/17 |
Defenses [Slides] |
[Recording]
|
[No class] Sanghyun will upload a recording of this lecture.
(Classic) Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks
(Classic) [Revisited] Towards Deep Learning Models Resistant to Adversarial Attacks
|
Thu. 10/19 |
Group Project |
|
Checkpoint Presentation 1 |
Tue. 10/24 |
(Certified) Defenses [Slides] |
|
(Classic) Certified Adversarial Robustness via Randomized Smoothing
(Recent) (Certified!!) Adversarial Robustness for Free!
|
Part III: Data Poisoning |
Thu. 10/26 |
Preliminaries [Slides] |
|
(Recent) Poisoning the Unlabeled Dataset of Semi-Supervised Learning
(Recent) You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion
|
Tue. 10/31 |
Attacks [Slides] |
[HW 3 Out]
|
(Classic) Poisoning Attacks against Support Vector Machines
(Classic) Manipulating Machine Learning: Poisoning Attacks and Countermeasures...
|
Thu. 11/02 |
Attacks [Slides] |
|
(Classic) Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
(Classic) MetaPoison: Practical General-purpose Clean-label Data Poisoning
|
The remaining lectures will be offered online (Zoom) |
Tue. 11/07 |
Group Project |
[HW 2 Due]
|
Checkpoint Presentation 2 |
Thu. 11/09 |
Defenses [Slides] |
[Recording]
|
(Classic) Certified Defenses for Data Poisoning Attacks
(Classic) Data Poisoning against Differentially-Private Learners: Attacks and Defenses
|
Part IV: Privacy |
Tue. 11/14 |
Preliminaries [Slides] |
[HW 4 Out]
|
(Classic) Exposed! A Survey of Attacks on Private Data
|
Thu. 11/16 |
Attacks [Slides] |
[HW 3 Due]
|
(Classic) Membership Inference Attacks against Machine Learning Models
(Classic) Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting
(Recent) Membership Inference Attacks From First Principles
|
Tue. 11/21 |
Attacks [Slides] |
|
(Classic) Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures
(Recent) The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks
(Recent) Extracting Training Data from Large Language Models
|
Thu. 11/23 |
|
[No lecture]
|
Thanksgiving Break.
|
Tue. 11/28 |
Attacks [Slides] |
|
(Classic) Stealing Machine Learning Models via Prediction APIs
(Recent) High Accuracy and High Fidelity Extraction of Neural Networks
|
Thu. 11/30 |
(Certified) Defenses [Slides] |
[HW 4 Due]
|
(Classic) Deep Learning with Differential Privacy
(Recent) Evaluating Differentially Private Machine Learning in Practice
(Recent) Red Teaming Language Models with Language Models
|
Tue. 12/05 |
|
[No lecture]
|
Final Presentation Prep.
|
Thu. 12/07 |
Group Project |
|
Final Presentations (Showcases) |
Finals Week (12/11 - 12/15) |
Tue. 12/12 |
- |
[No lecture]
[Final Exam]
|
Final Exam; submit your final project report.
|
Thu. 12/14 |
- |
[No lecture]
|
Late submission deadline for HW 1-4. |