CSCI 699: Ethics in NLP

Fall 2023, Tuesdays and Thursdays at 4:00-5:50pm in DMC 150
Instructor: Jieyu Zhao

Teaching Assistants: Pei Zhou, Peifeng Wang


Although there have been impressive advancements in natural language processing (NLP), several studies have reported that NLP models contain social biases. Even worse, these models run the risk of further amplifying stereotypes and causing harm to people. As NLP technology continues to advance and be integrated into domains such as healthcare, finance, marketing, and social media, it raises important ethical concerns that need to be addressed. In this course, students will critically examine the ethical implications of NLP, including issues related to bias, fairness, privacy, transparency, accountability, and social impact. Through discussions, case studies, and guest lectures, students will explore the ethical challenges associated with NLP and develop a deep understanding of the ethical considerations that arise when designing, implementing, and deploying NLP applications.

Students will gain a broad understanding of possible issues in current NLP models and of how current research has tried to alleviate those issues. This class will equip students with the ability to read and write critical reviews of research papers. At the same time, they will learn how to conduct research related to NLP fairness, interpretability, and robustness.


Course Staff

Jieyu Zhao

Office Hour: Thu 12-1pm @ PHE 332

Pei Zhou

Office Hour: Mon 3-4pm @ RTH 313

Peifeng Wang

Office Hour: Thu 2:30-3:30pm @ SAL 213

Exceptions: For 08/31 & 09/21, @ RTH 313




All assignments are due by 11:59pm on the indicated date.

Week Date Topic Related Readings Assignments
1 Tue Aug 22 Introduction (slides)
  Thu Aug 24 Introduction (slides)
2 Tue Aug 29 Project Examples; Human Subjects Research Paper1, Paper2
  Thu Aug 31 Social Biases in NLP Paper1, Paper2, Paper3
3 Tue Sep 5 Bias Evaluation Paper1, Paper2, Paper3
  Thu Sep 7 Models vs. Morality Paper1, Paper2, Paper3
4 Tue Sep 12 Harms of LLMs in Downstream Tasks Paper1, Paper2, Paper3
  Thu Sep 14 Bias Mitigation Paper1, Paper2, Paper3 Project Proposal deadline
5 Tue Sep 19 Guest Lecture: What Happened in Industry Research (Sunipa Dev) Paper1
  Thu Sep 21 Harms of LLMs: Bias and Stereotype Paper1, Paper2, Paper3
6 Tue Sep 26 Hate Speech; Bias & Stereotype & Harm Summary Paper1, Paper2
  Thu Sep 28 Midterm Presentation Workshop
7 Tue Oct 3 Midterm Presentation Workshop
  Thu Oct 5 Guest Lecture: LLM Memorization (Eric Wallace) Paper1
8 Tue Oct 10 Privacy Paper1, Paper2, Paper3
  Thu Oct 12 Fall Recess (No Class)
9 Tue Oct 17 Human-Centered AI Paper1, Paper2, Paper3
  Thu Oct 19 Guest Lecture: Human & AI Interaction (Weiyan Shi); Bias in Dialogue Paper1
10 Tue Oct 24 Guest Lecture: Human-Centered AI (Sherry Wu); Microaggression Paper1
  Thu Oct 26 Multilingual Biases Paper1, Paper2, Paper3 Midterm Report Due
11 Tue Oct 31 Multimodal Biases Paper1, Paper2, Paper3
  Thu Nov 2 Reducing Bias in Language Models Paper1, Paper2, Paper3
12 Tue Nov 7 Guest Lecture: Interpretation (Hanjie Chen) Paper1
  Thu Nov 9 Alignment with Humans Paper1, Paper2, Paper3
13 Tue Nov 14 Trade-offs Between Different Metrics Paper1, Paper2, Paper3
  Thu Nov 16 Summary and Final Project Presentations
14 Tue Nov 21 Final Project Presentations
  Thu Nov 23 Thanksgiving (No Class)
15 Tue Nov 28 Final Project Presentations
  Thu Nov 30 Final Project Presentations
16 Tue Dec 5 No Class
  Thu Dec 7 No Class Project Report Due


Grades will be based on attendance (10%), paper presentation (30%), and a course project (60%).

Attendance and Discussion (10% total):

Paper Reading and Discussion (30% total):

Course Project (60% total):

Late days

You have 4 late days that you may use on any assignment. Each late day allows you to submit the assignment 24 hours after the original deadline. If you are working in a group for the project, submitting the project proposal or midterm report one day late means that each member of the group spends a late day.

Paper Presentation

The paper presentation will help students develop the skills needed to give research talks. Each student will present 2 papers to the class, preparing the slides and leading the discussion. Each week, another student will sign up as the feedback provider (reviewer); the reviewer will provide feedback to the instructor or the TAs. Grading rubric: correctness of the content (40%), clarity (20%), discussion (20%), slides & presentation skills (20%).

Final Project

The final project can be done individually or in groups of up to 3. This is your chance to freely explore machine learning methods and how they can be applied to a task of your choice. You will also learn about best practices for developing machine learning methods: inspecting your data, establishing baselines, and analyzing your errors.

Each group needs to complete one research project related to the class topics. There should be a “deliverable” result from the project, meaning that your project should be self-contained, reproducible, and scientifically correct. A typical successful project could be: 1) a novel and sound solution to an interesting research problem, 2) correct and meaningful comparisons among baselines and existing approaches, or 3) an application of existing techniques to a new problem. We will not penalize negative results, as long as your proposed approach is thoroughly explored and justified. Overall, the project should showcase the student’s ability to think critically, conduct rigorous research, and apply the concepts learned in the course to address a relevant problem in the field of NLP ethics.

Students should use the standard *ACL paper submission template to write their course project report.


The following courses are relevant: