CS 578 :: Spring 2025 :: Cyber-Security



Textbooks

There is no required textbook. Reading materials will be provided on the course website and/or distributed in class. If you need to brush up on the basics of machine learning (or deep learning), the following standard references can be helpful:

  • [FOD'20] Mathematics for Machine Learning [PDF]
  • [B'06] Pattern Recognition and Machine Learning [PDF]
  • [GBC'16] Deep Learning [PDF]

Prerequisites

This course requires a basic understanding of machine learning. Please consider taking CS 434 :: Machine Learning and Data Mining first.

Grading

Your final grade for this course will be based on the following weighting scheme (a small computation sketch follows the list):

  • 5%: Quiz
  • 30%: Written paper critiques [Details]
  • 10%: In-class paper presentation [Details]
  • 15%: Homeworks (HW 1-3) [Details]
  • 40%: Group project [Details]

  • [Bonus] ~10%: Extra point opportunities
    • +5%: Outstanding project work
    • +5%: ...TBA
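
As a rough illustration of how these weights combine, below is a minimal sketch (not the official grading code) of the weighted final score. It assumes each component is scored on a 0-100 scale, that bonus percentage points are added on top of the weighted sum, and that the total is capped at 100; the names and the cap are illustrative assumptions only.

    # Minimal sketch of the weighting scheme above (not official grading code).
    # Assumption: each component score is on a 0-100 scale; bonus percentage
    # points are added on top, and the total is capped at 100.
    WEIGHTS = {
        "quiz": 0.05,          # 5%:  Quiz
        "critiques": 0.30,     # 30%: Written paper critiques
        "presentation": 0.10,  # 10%: In-class paper presentation
        "homeworks": 0.15,     # 15%: Homeworks (HW 1-3)
        "project": 0.40,       # 40%: Group project
    }

    def final_score(scores: dict[str, float], bonus: float = 0.0) -> float:
        """Weighted sum of component scores plus bonus points, capped at 100."""
        base = sum(weight * scores[name] for name, weight in WEIGHTS.items())
        return min(base + bonus, 100.0)

    # Example: solid scores across the board plus a +5% outstanding-project bonus.
    scores = {"quiz": 90, "critiques": 85, "presentation": 88,
              "homeworks": 92, "project": 95}
    print(round(final_score(scores, bonus=5.0), 1))  # 95.6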


Schedule

[Note] This is a tentative schedule and may change depending on the course's progress.
Each entry below lists Date :: Topic :: Notice, followed by its readings.

Part I: Overview and Motivation
  • Mon. 03/31 :: Introduction [Slides] :: [HW 1 Out]
    Readings:
      • SoK: Security and Privacy in Machine Learning
      • [Bonus] The Security of Machine Learning
  • Wed. 04/02 :: Preliminaries [Slides]
    Readings:
      • Explaining and Harnessing Adversarial Examples
      • Adversarial Examples in the Physical World
      • Dirty Road Can Attack: ... (title cropped due to the space limit)

Part II: Systems Security
  • Mon. 04/07 :: Attacks [Slides] :: [No lecture] [Team-up!]
    Readings:
      • Towards Evaluating the Robustness of Neural Networks
      • Towards Deep Learning Models Resistant to Adversarial Attacks
      • [Bonus] The Space of Transferable Adversarial Examples
  • Wed. 04/09 :: [No lecture] :: [HW 1 Due] [HW 2 Out]
  • Mon. 04/14 :: Attacks [Slides]
    Readings:
      • Delving into Transferable Adversarial Examples and Black-box Attacks
      • Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors
  • Wed. 04/16 :: Defenses [Slides]
    Readings:
      • Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks
      • [Revisited] Towards Deep Learning Models Resistant to Adversarial Attacks

Part III: Network Security
  • Mon. 04/21 :: (Certified) Defenses [Slides] :: [HW 2 Due]
    Readings:
      • Certified Adversarial Robustness via Randomized Smoothing
      • (Certified!!) Adversarial Robustness for Free!
  • Wed. 04/23 :: Preliminaries [Slides] :: [HW 3 Out]
    Readings:
      • Poisoning the Unlabeled Dataset of Semi-Supervised Learning
      • You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion
  • Mon. 04/28 :: [No lecture]
  • Wed. 04/30 :: Attacks [Slides]
    Readings:
      • Poisoning Attacks against Support Vector Machines
      • Manipulating Machine Learning: Poisoning Attacks and Countermeasures...

Part IV: Software Analysis
  • Mon. 05/05 :: Attacks [Slides]
    Readings:
      • Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
      • MetaPoison: Practical General-purpose Clean-label Data Poisoning
  • Wed. 05/07 :: Group Project Checkpoint Presentation 2

Part V: ML/AI Security and Privacy
  • Mon. 05/12 :: Defenses [Slides] :: [HW 3 Due]
    Note: SH is on business travel; a recording of this lecture will be provided.
    Readings:
      • Certified Defenses for Data Poisoning Attacks
      • Data Poisoning against Differentially-Private Learners: Attacks and Defenses
  • Wed. 05/14 :: [No lecture] :: [HW 4 Out]
    Note: SH is on business travel.
  • Mon. 05/19 :: Preliminaries [Slides] :: [Zoom lecture]
    Readings:
      • Exposed! A Survey of Attacks on Private Data
      • Robust De-anonymization of Large Sparse Datasets

Part VI: Hardware Security
  • Wed. 05/21 :: Attacks [Slides]
    Readings:
      • Membership Inference Attacks against Machine Learning Models
      • Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting
  • Mon. 05/26 :: [No lecture] (Memorial Day)
  • Wed. 05/28 :: (Certified) Defenses [Slides]
    Readings:
      • Deep Learning with Differential Privacy
      • Evaluating Differentially Private Machine Learning in Practice
  • Mon. 06/02 :: Group Project Final Presentations I (Showcases)
  • Wed. 06/04 :: Group Project Final Presentations II (Showcases)

Finals Week (06/09 - 06/13)
  • Mon. 06/09 :: [No lecture]
  • Wed. 06/11 :: [No lecture] :: [Final Exam]
    Final exam; submit your final project report.
    Late submissions for HW 1-4 are due.