CS 499/599 :: Winter 2022 :: Machine Learning Security



Textbooks

No required textbook. Reading materials will be provided on the course website and/or distributed in class. If you need to strengthen your basics in machine learning (or deep learning), the following standard references can be helpful:

  • [FOD'20] Mathematics for Machine Learning [Link]
  • [B'06] Pattern Recognition and Machine Learning [Link]
  • [GBC'16] Deep Learning [Link]

Prerequisites

This course requires a basic understanding of machine learning. Please consider taking CS 434 :: Machine Learning and Data Mining first.

Grading

Your final grade for this course will be based on the following point scheme (120 points total, plus bonus opportunities):

  • 30 pts: Written paper critiques [Details]
  • 35 pts: Homeworks (HW 1-4) [Details]
  • 35 pts: Group project [Details]
  • 20 pts: Final exam

  • [Bonus] up to 20 pts: Extra-point opportunities
    • +5 pts: Scribe lecture notes (max. once)
    • +5 pts: Paper presentation (max. once)
    • +5 pts: Outstanding project work
    • +5 pts: Submitting the final project report to workshops

Schedule

Each entry lists the date, topic, and notes (slides, homework milestones), followed by the assigned readings.

Part I: Overview and Motivation

Mon. 01/03 :: Introduction :: [Slides] [HW 1]
  • The Security of Machine Learning
  • [Bonus] SoK: Security and Privacy in Machine Learning

Part II: Adversarial Examples

Wed. 01/05 :: Preliminaries :: [Slides]
  • Evasion Attacks against Machine Learning at Test Time
  • Intriguing Properties of Neural Networks

Mon. 01/10 :: Preliminaries :: [Slides]
  • Explaining and Harnessing Adversarial Examples
  • Adversarial Examples in the Physical World

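The Goodfellow et al. reading above introduces the fast gradient sign method (FGSM). A minimal PyTorch sketch for orientation, assuming a placeholder `model` and `loss_fn` and inputs scaled to [0, 1]:

```python
import torch

def fgsm(model, loss_fn, x, y, eps=0.03):
    """One-step FGSM: x_adv = x + eps * sign(grad_x loss).
    Sketch only; `model` and `loss_fn` stand in for any differentiable classifier."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    # Step along the sign of the input gradient, then clip to the valid pixel range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```
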
Wed. 01/12 :: Attacks :: [Slides]
  • Towards Evaluating the Robustness of Neural Networks
  • Towards Deep Learning Models Resistant to Adversarial Attacks
  • [Bonus] Universal Adversarial Perturbations

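The Madry et al. reading builds its defense around projected gradient descent (PGD), which iterates the FGSM step sketched above. A minimal sketch under the same placeholder assumptions, for an L-infinity threat model:

```python
import torch

def pgd_linf(model, loss_fn, x, y, eps=0.03, alpha=0.01, steps=10):
    """Iterated gradient-sign steps projected back onto the L-inf ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss_fn(model(x_adv), y).backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # keep pixels valid
        x_adv = x_adv.detach()
    return x_adv
```
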
Mon. 01/17 :: Martin Luther King Jr. Day :: [No lecture] [HW 1 Due]

Wed. 01/19 :: Attacks :: [Slides]
  • [Team-up!] Delving into Transferable Adversarial Examples and Black-box Attacks
  • Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors
  • [Bonus] The Space of Transferable Adversarial Examples

Mon. 01/24 :: Defenses :: [Slides] [HW 2]
  • Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks
  • [Revisited] Towards Deep Learning Models Resistant to Adversarial Attacks

Wed. 01/26 :: (Certified) Defenses :: [Slides]
  • Certified Adversarial Robustness via Randomized Smoothing
  • Denoised Smoothing: A Provable Defense for Pretrained Classifiers
  • [Bonus] Certified Robustness to Adversarial Examples with Differential Privacy

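For orientation on the randomized-smoothing readings: the smoothed classifier predicts by majority vote over Gaussian-noised copies of the input. A prediction-only sketch (the certification of an L2 radius, the paper's main contribution, is omitted); `model` and `num_classes` are placeholders:

```python
import torch

@torch.no_grad()
def smoothed_predict(model, x, num_classes, sigma=0.25, n=100):
    """Monte-Carlo majority vote of `model` over Gaussian-noised copies of x.
    Assumes x is a single input with a batch dimension of 1."""
    counts = torch.zeros(num_classes)
    for _ in range(n):
        noisy = x + sigma * torch.randn_like(x)
        counts[model(noisy).argmax()] += 1
    return int(counts.argmax())
```
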
Mon. 01/31 :: Group Project Checkpoint Presentation 1

Part III: Data Poisoning

Wed. 02/02 :: Preliminaries :: [Slides]
  • Exploiting Machine Learning to Subvert Your Spam Filter
  • ANTIDOTE: Understanding and Defending against Poisoning of Anomaly Detectors

Mon. 02/07 :: Attacks :: [Slides] [HW 2 Due]
  • Poisoning Attacks against Support Vector Machines
  • Manipulating Machine Learning: Poisoning Attacks and Countermeasures...

Wed. 02/09 :: Attacks :: [Slides]
  • Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
  • MetaPoison: Practical General-purpose Clean-label Data Poisoning
  • [Bonus] Transferable Clean-Label Poisoning Attacks on Deep Neural Nets

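The Poison Frogs reading crafts clean-label poisons via feature collision: make the poison look like the base image in pixel space but like the target in feature space. A simplified single gradient step on that objective (the paper itself uses forward-backward splitting); `feat` and all tensors are placeholders:

```python
import torch

def feature_collision_step(feat, poison, target, base, beta=0.1, lr=0.01):
    """One gradient step on ||feat(p) - feat(t)||^2 + beta * ||p - b||^2,
    the Poison Frogs objective. `feat` is a placeholder feature extractor."""
    poison = poison.clone().detach().requires_grad_(True)
    loss = (feat(poison) - feat(target).detach()).pow(2).sum() \
           + beta * (poison - base).pow(2).sum()
    loss.backward()
    with torch.no_grad():
        poison = poison - lr * poison.grad
    return poison.clamp(0.0, 1.0).detach()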
Mon. 02/14 :: Defenses :: [Slides] [HW 3]
  • [In-class Presentation: Quintin Pope] Zoom In: An Introduction to Circuits
  • Certified Defenses for Data Poisoning Attacks
  • Data Poisoning against Differentially-Private Learners: Attacks and Defenses
  • [Bonus] SEVER: A Robust Meta-Algorithm for Stochastic Optimization

Wed. 02/16 :: Group Project Checkpoint Presentation 2

Part IV: Privacy

Mon. 02/21 :: Preliminaries :: [Slides]
  • Exposed! A Survey of Attacks on Private Data
  • Robust De-anonymization of Large Sparse Datasets

Wed. 02/23 :: Attacks :: [Slides]
  • Membership Inference Attacks against Machine Learning Models
  • Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting

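As a primer for both readings: the simplest membership-inference baseline thresholds the model's confidence, exploiting the overfitting connection analyzed in the second paper. A sketch with an assumed threshold (shadow-model attacks, as in the first paper, learn this decision rule instead):

```python
import torch

@torch.no_grad()
def confidence_attack(model, x, threshold=0.9):
    """Flag inputs whose top softmax confidence exceeds `threshold` as likely
    training members; overfit models tend to be more confident on members.
    The threshold value here is an assumed hyperparameter."""
    probs = torch.softmax(model(x), dim=-1)
    return probs.max(dim=-1).values > threshold
```
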
Mon. 02/28 :: Attacks :: [Slides] [HW 3 Due]
  • Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures
  • The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks

Wed. 03/02 :: Attacks :: [Slides] [HW 4]
  • Stealing Machine Learning Models via Prediction APIs
  • High Accuracy and High Fidelity Extraction of Neural Networks

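Both readings concern model extraction: query the victim's prediction API and fit a surrogate to its outputs. A bare-bones sketch under assumed placeholders (attacker-chosen `queries`, a `surrogate` architecture, and a `victim` that returns class scores):

```python
import torch

def extract(victim, surrogate, queries, optimizer, loss_fn, epochs=5):
    """Label attacker-chosen queries via the victim API, then train the
    surrogate to imitate those labels. All arguments are placeholders."""
    with torch.no_grad():
        labels = victim(queries).argmax(dim=-1)  # API outputs -> hard labels
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(surrogate(queries), labels)
        loss.backward()
        optimizer.step()
    return surrogate
```
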
Mon. 03/07 :: (Certified) Defenses :: [Slides]
  • [In-class Presentation: Akshith Gunasekaran] Red Teaming LMs with LMs
  • Deep Learning with Differential Privacy
  • Evaluating Differentially Private Machine Learning in Practice

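Deep Learning with Differential Privacy introduces DP-SGD: clip each example's gradient, then add calibrated Gaussian noise. A single-update sketch over one flat parameter tensor (real implementations also run a privacy accountant to track the (epsilon, delta) budget):

```python
import torch

def dp_sgd_update(param, per_example_grads, lr=0.1, clip=1.0, noise_mult=1.1):
    """One DP-SGD step: clip each per-example gradient to L2 norm <= clip,
    sum, add Gaussian noise with std noise_mult * clip, then average."""
    total = torch.zeros_like(param)
    for g in per_example_grads:  # one flat gradient tensor per example
        total += g * min(1.0, clip / (g.norm().item() + 1e-12))
    noisy_mean = (total + noise_mult * clip * torch.randn_like(total)) / len(per_example_grads)
    return param - lr * noisy_mean
```
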
Wed. 03/09 :: Group Project Final Presentations (Showcases)

Mon. 03/14 :: Last Day of Classes :: [Final] [No lecture]
  Final exam; please also submit your final project report.

Wed. 03/16 :: [HW 4 Due] [No lecture]
  Please submit HW 4; late submissions for HW 1-3 are also accepted.