Teaching

Fall 2023

CS 4973-05 Responsible Machine Learning

Course description:

In today’s world, machine learning (ML) models have proliferated across real-world applications, and deploying them responsibly is crucial to ensuring their positive impact on society. This course is designed for senior computer science undergraduate students interested in exploring the ethical challenges and responsibilities of creating and deploying ML models. Throughout the course, students will learn about the various types of biases that can exist in ML models, methods for uncovering these biases via auditing, and algorithmic fairness techniques to mitigate them. The course will also delve into emerging ethical issues related to generative text and image models and discuss good governance of the AI landscape, covering both centralized regulatory efforts and decentralized defense mechanisms against problematic AI. Through a project component, students will have the opportunity to apply what they learn in class to a real-world scenario and gain hands-on experience in developing responsible ML models. By completing this course, students will become responsible AI practitioners who can create ethical and fair AI systems that benefit society.

Requirements:

Knowledge of Python and ML fundamentals (DS 3000 or equivalent)

Syllabus:

Date | Topic | Readings | Homework
Thu, 9/7 | Introduction to Course | - | -
Tue, 9/12 | Machine Learning - Refresher | [1] Machine learning: Trends, perspectives, and prospects | Overleaf
Thu, 9/14 | Setting up ML stack, basic ML coding in class | [1] (Optional) How NLP Cracked Transfer Learning | Notebook
Tue, 9/19 | Algorithmic Fairness - Intro | [1] Bias detectives: the researchers striving to make algorithms fair - R. Courtland; [2] (Optional) A Survey on Bias and Fairness in Machine Learning - Mehrabi et al. | Overleaf
Thu, 9/21 | Bias in the Wild | [1] Can you make AI fairer than a judge? Play our courtroom algorithm game - MIT Tech Review; [2] Image Cropping on Twitter: Fairness Metrics, their Limitations, and the Importance of Representation, Design, and Agency - Yee et al. | Notebook
Tue, 9/26 | ML Fairness in Contexts Beyond the US | [1] Re-imagining Algorithmic Fairness in India and Beyond - Sambasivan et al. | Overleaf
Thu, 9/28 | Value Sensitive Design (Guest Lecture: Prof. Vance Ricks) | [1] Introduction to Value Sensitive Design | In-Class Activity
Tue, 10/3 | Model Explainability/Interpretability | [1] The Mythos of Model Interpretability | Overleaf
Thu, 10/5 | ML Explainability Coding Class | - | Notebook
Tue, 10/10 | Algorithm Auditing Overview | [1] An Image of Society: Gender and Racial Representation and Impact in Image Search Results for Occupations; [2] Bias in Online Freelance Marketplaces: Evidence from TaskRabbit and Fiverr | -
Thu, 10/12 | Algorithm Auditing Tutorial | - | Notebook
Tue, 10/17 | Machine Learning Privacy | [1] An Overview of Privacy in Machine Learning | Overleaf
Thu, 10/19 | AI Safety | [1] There are two factions working to prevent AI dangers. Here’s why they’re deeply divided. | -
Tue, 10/24 | Generative AI Memorization (Guest Lecture: Matthew Jagielski) | - | -
Thu, 10/26 | MIDTERM | - | -
Tue, 10/31 | Term Project - Midterm Review | - | -
Thu, 11/2 | Algorithmic Debiasing Methods | - | Overleaf
Tue, 11/7 | Fair ML in the Real World - Part 1 | [1] When Fair Ranking Meets Uncertain Inference; [2] Subverting Fair Image Search with Generative Adversarial Perturbations | -
Thu, 11/9 | Fair ML in the Real World - Part 2 | [1] When Fair Classification Meets Noisy Protected Attributes; [2] FairCanary: Rapid Continuous Explainable Fairness | Notebook
Thu, 11/16 | Critical Image Synthesis (Guest Lecture: Eryk Salvaggio) | [1] This is how AI image generators see the world; [2] How to read an AI image | -
Tue, 11/21 | A (very lacking) tech policy primer (Guest Lecture: Johanna Gunawan) | - | -
Tue, 11/28 | Lessons learned from running a privacy tech challenge (Guest Lecture: David Buckley) | - | Overleaf
Thu, 11/30 | Participatory Methods and Community Actions for Responsible ML | [1] Bound by the Bounty: Collaboratively Shaping Evaluation Processes for Queer AI Harms; [2] Dual Governance: The intersection of centralized regulation and crowdsourced safety mechanisms for Generative AI | -
Tue, 12/5 | Reading/Review Day | - | -
Thu, 12/7 | FINALS | - | -
Tue, 12/12 | Term Project Presentations - Day 1 | - | -
Thu, 12/14 | Term Project Presentations - Day 2 | - | -