Course Information

Description: AI is enhancing applications in many domains, such as healthcare, finance, and manufacturing, while also introducing risk, uncertainty, and liability, some of which can have critical consequences, especially when humans are involved. This course introduces concepts and techniques for reducing AI risk, uncertainty, and liability, making AI more responsible toward the applications and humans involved. Topics include transparency, fairness, security, safety, privacy, and uncertainty of widely used AI models and algorithms, including recent large language models (LLMs).

Lectures: Friday 9:00-11:50 am, Location: E1-102

Office Hours: Friday 3:00-5:00 pm, Location: E4-306

Prerequisites: You will need to know basic machine learning concepts, such as classification/regression, various model architectures, and model training and evaluation. We do not require deep knowledge of linear algebra, calculus, or probability theory; college-level familiarity with key concepts such as vectors, matrices, probability distributions, and statistical estimation should suffice. Python is the main programming language for course homework and projects.

Format: one mid-term exam, one final exam, three homework assignments, and one coding project.

Submission: Homework and project reports must be in PDF format, prepared with Word, LaTeX, or Pages. LaTeX is highly recommended, and here is a short yet comprehensive (tables, images, equations, paragraphs, and sections) tutorial on LaTeX. All homework and projects must be submitted to Canvas, which is also used for question answering and for posting assignments and grades.

Grading: Mid-term (10%), homework (30%), coding project (40%), final (20%). Late submissions will be penalized 20% of the total grade per late day (fractions of a day are rounded up to one day), and no assignment will be accepted more than four days after its due date.
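The grading weights and late-penalty rule above can be sketched in Python (the course language). This is an illustrative sketch only: the interpretation of "20% of the total grade per late day" as 20% of the affected component's score is an assumption, and the function and variable names are made up for this example; confirm the exact policy with the instructor.

```python
import math

# Component weights as stated in the syllabus (fractions of the final grade).
WEIGHTS = {"midterm": 0.10, "homework": 0.30, "project": 0.40, "final": 0.20}


def late_penalized(score, days_late):
    """Apply the late policy to one submission's score (0-100 scale).

    Fractions of a day round up to a full day; submissions more than
    four days late are not accepted. The 20%-per-day deduction applied
    to the submission's own score is an assumed interpretation.
    """
    days = math.ceil(days_late)
    if days > 4:
        return 0.0  # not accepted more than four days past the due date
    return score * max(0.0, 1 - 0.20 * days)


def course_grade(scores):
    """Weighted final grade from per-component scores on a 0-100 scale."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
```

For example, a homework scored 100 but submitted 1.5 days late would count as 60 under this interpretation (1.5 rounds up to 2 days, a 40% deduction).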


Project

Description: The project addresses one of many aspects of responsible AI. You will consult and work with a PhD student (advisor), who can provide you with resources such as relevant papers, source code, datasets, and their valuable research experience. This is an individual project that each student must complete on their own. You are expected to read selected papers, implement research ideas and baselines, evaluate the various methods, interpret the results, and write a technical report, which may be submitted to a conference if it is of high quality. You will present your project toward the end of the semester. Sharing and copying solutions is a violation of the honor code. This includes, but is not limited to, sharing code through any kind of media (including repositories on GitHub/Bitbucket) and copying solutions from online forums, repositories, or blogs, textbook solution manuals, and previous years' submissions.


Textbooks

Interpretable Machine Learning: A Guide for Making Black Box Models Explainable, Christoph Molnar. 2024. eBook.

Fairness and machine learning: Limitations and Opportunities, Solon Barocas, Moritz Hardt, Arvind Narayanan. MIT Press, 2023. eBook.

Federated learning, Qiang Yang, Yang Liu, Yong Cheng, Yan Kang, Tianjian Chen, Han Yu. Morgan & Claypool Publishers, 2020.

Distributional Reinforcement Learning, Marc G. Bellemare, Will Dabney, Mark Rowland. MIT Press, 2023. eBook.


Schedule

Sep 6: Lecture 1 - basics and model transparency

Statement on Academic Integrity

All homework, project, and exam submissions must be your own work. If you are in doubt about where the line is, consult the instructor for clarification.





All Rights Reserved. 2024. Sihong Xie