CS 499/599 :: Winter 2022 :: Machine Learning Security



Notice

This is a tentative outline. The course schedule, materials, homework, and grading scheme are subject to change before the first day of class.


Textbooks

No required textbook. Reading materials will be provided on the course website and/or distributed in class. If you lack the basics of machine learning (or deep learning), the following bibles can be helpful:

  • [DFO'20] Mathematics for Machine Learning [Link]
  • [B'06] Pattern Recognition and Machine Learning [Link]
  • [GBC'16] Deep Learning [Link]

Prerequisites

This course requires a basic understanding of machine learning. Please consider taking CS 434 :: Machine Learning and Data Mining first.

Grading

Your final grade for this course will be based on the following scheme (see the weighted-sum sketch after this list):

  • 10% Paper critiques and in-class presentation.
    • 5% Paper reviews / 5% Paper presentation.
  • 35% Homeworks (HW).
    • 5% HW 1 / 10% HW 2 / 10% HW 3 / 10% HW 4.
  • 35% Group project.
    • 5% Project proposal.
    • 5% Checkpoint 1 / 5% Checkpoint 2.
    • 10% Final presentation.
    • 5% Presentation reviews.
  • 20% Final exam.
  • [Bonus] Up to ~40% in extra credit opportunities.
    • 5% Scribe lecture notes (max. twice).
    • 5% Paper presentation (max. twice).
    • 10% Outstanding project work.
    • 10% Submitting the final project report to a workshop.
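
To make the weighting concrete, here is a minimal sketch of how a final grade could be computed from the scheme above. This is an illustration under stated assumptions, not the official grading code: the component names, the 0-100 score scale, and the flat bonus-point addition are all hypothetical.

    # Hypothetical sketch of the weighted grading scheme above.
    # Component names and the 0-100 score scale are illustrative assumptions.

    WEIGHTS = {
        "critiques": 0.10,  # paper reviews (5%) + paper presentation (5%)
        "homework":  0.35,  # HW 1 (5%), HW 2-4 (10% each)
        "project":   0.35,  # proposal, checkpoints, presentation, reviews
        "exam":      0.20,  # final exam
    }

    def final_grade(scores: dict, bonus_points: float = 0.0) -> float:
        """Weighted sum of per-component scores (each on a 0-100 scale),
        plus any extra credit points earned."""
        assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights cover 100%
        base = sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)
        return base + bonus_points

    # Example: 0.1*90 + 0.35*85 + 0.35*92 + 0.2*80 = 86.95, plus 5 bonus points.
    print(final_grade(
        {"critiques": 90, "homework": 85, "project": 92, "exam": 80},
        bonus_points=5.0,
    ))  # -> about 91.95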

Schedule

Each entry lists the date, the topic (with a link to slides), notes such as homework due dates, and the assigned readings.

Part I: Overview and Motivation

  • Wed. 01/05 :: Introduction [Slides]
    • The Security of Machine Learning
Part II: Adversarial Examples

  • Mon. 01/10 :: Preliminaries [Slides] [HW 1 Due]
    • Evasion Attacks against Machine Learning at Test Time
    • Intriguing Properties of Neural Networks
  • Wed. 01/12 :: Preliminaries [Slides]
    • Explaining and Harnessing Adversarial Examples
    • Adversarial Examples in the Physical World
  • Mon. 01/17 :: Martin Luther King Jr. Day [No lecture]
  • Wed. 01/19 :: Attacks [Slides]
    • Towards Evaluating the Robustness of Neural Networks
    • Towards Deep Learning Models Resistant to Adversarial Attacks
    • [Bonus] Universal Adversarial Perturbations
  • Mon. 01/24 :: Attacks [Slides]
    • Delving into Transferable Adversarial Examples and Black-box Attacks
    • Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors
    • [Bonus] The Space of Transferable Adversarial Examples
  • Wed. 01/26 :: Defenses [Slides]
    • Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks
    • [Revisited] Towards Deep Learning Models Resistant to Adversarial Attacks
  • Mon. 01/31 :: Group Project Checkpoint 1
  • Wed. 02/02 :: (Certified) Defenses [Slides] [HW 2 Due]
    • Certified Robustness to Adversarial Examples with Differential Privacy
    • Denoised Smoothing: A Provable Defense for Pretrained Classifiers
    • [Bonus] Certified Adversarial Robustness via Randomized Smoothing
Part III: Data Poisoning

  • Mon. 02/07 :: Preliminaries [Slides]
    • Exploiting Machine Learning to Subvert Your Spam Filter
    • ANTIDOTE: Understanding and Defending against Poisoning of Anomaly Detectors
  • Wed. 02/09 :: Attacks [Slides]
    • Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
  • Mon. 02/14 :: Attacks [Slides]
    • Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
    • MetaPoison: Practical General-purpose Clean-label Data Poisoning
    • [Bonus] Transferable Clean-Label Poisoning Attacks on Deep Neural Nets
  • Wed. 02/16 :: Defenses [Slides] [HW 3 Due]
    • Certified Defenses for Data Poisoning Attacks
    • SEVER: A Robust Meta-Algorithm for Stochastic Optimization
  • Mon. 02/21 :: Group Project Checkpoint 2
Part IV: Privacy

  • Wed. 02/23 :: Preliminaries [Slides]
    • Exposed! A Survey of Attacks on Private Data
  • Mon. 02/28 :: Attacks [Slides]
    • Membership Inference Attacks against Machine Learning Models
    • Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting
  • Wed. 03/02 :: Attacks [Slides]
    • Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures
    • The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks
  • Mon. 03/07 :: Attacks [Slides]
    • Stealing Machine Learning Models via Prediction APIs
  • Wed. 03/09 :: (Certified) Defenses [Slides]
    • Deep Learning with Differential Privacy
    • Evaluating Differentially Private Machine Learning in Practice
    • [Bonus] Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning
  • Mon. 03/14 :: Group Project Final Presentations (Showcases) [HW 4 Due]
  • Wed. 03/16 :: Last Day of Classes :: Final Exam [No Class]