UBC NLP Symposium
Monday, July 18, 9:30 - 17:30
ICICS X836, UBC (2366 Main Mall, Vancouver)
09:30 - 10:30 - Invited talk #1. Oskar van der Wal: How can we understand the social biases of language models?
Title
How can we understand the social biases of language models?
Bio
I am a PhD candidate at the University of Amsterdam, supervised by Katrin Schulz and Willem Zuidema. My research focuses on using interpretability techniques to understand why language models exhibit social biases. I also study how we can reliably measure bias in NLP and try to ground the discussion of bias in the broader societal perspective.
Talk abstract
Language Models (LMs) have been shown to learn undesirable biases towards certain social groups, which may unfairly influence the decisions, recommendations, or texts generated by AI systems built on those LMs. As LMs are readily deployed by companies, governments, and other institutions in applications that directly affect the lives of ordinary citizens, detecting undesirable biases in NLP systems and finding ways to mitigate them has emerged as a prominent research field. Yet we still face many challenges in measuring these biases, due to the black-box nature of the models, let alone in mitigating them. While there are many interesting angles to take, in this talk we will approach the study of bias in LMs from two perspectives: interpretability and psychometrics. First, interpretability offers a toolbox for better understanding LMs despite their black-box nature. Research on detecting biases is crucial, but as new LMs are continuously developed, it is equally important to study how LMs come to be biased in the first place, and what role the training data, architecture, and downstream application play at various phases in the life-cycle of an NLP model. Second, psychometrics offers extensive expertise on measuring abstract psychological concepts like bias. Its theoretical insights and frameworks can help us evaluate the current state of NLP bias measures and guide future research on understanding social biases in LMs.
10:15 - 10:30 - Break
10:30 - 11:30 - Student presentation session #1
11:30 - 12:15 - Discussion #1: challenges and limitations of NLP
12:15 - 13:30 - Lunch break
13:30 - 15:00 - Invited talk #2. Debora Nozza: Roadmap to universal hate speech detection.
Title
Roadmap to universal hate speech detection
Bio
Debora Nozza (she/her) is a Postdoctoral Research Fellow in Computing Science at Bocconi University. Her research interests mainly focus on Natural Language Processing, specifically on detecting and counteracting hate speech and algorithmic bias in social media data in multilingual contexts. She has organized three international shared tasks on the multilingual detection of hate speech. She was recently awarded a grant from Fondazione Cariplo for her project MONICA, which focuses on monitoring the coverage, attitudes, and accessibility of Italian measures in response to COVID-19. For up-to-date information, see https://dnozza.github.io/.
Talk abstract
An increasing propagation of hate speech has been detected on social media platforms (e.g., Twitter), where (pseudo-)anonymity enables people to target others without being recognized or easily traced. While this societal issue has attracted many studies in the NLP community, it comes with three important challenges: hate speech detection models should be fair, work in every language, and consider the whole context (e.g., imagery). Solving these challenges would revolutionize the field of hate speech detection and help create a "universal" model. In this talk, I will present my contributions in this area, along with my take on future directions.
15:00 - 15:30 - Break
15:30 - 16:30 - Student presentation session #2
16:30 - 17:00 - Discussion #2: the future of NLP and its potential benefits
17:00 - 17:10 - Conclusion
For questions, issues and inquiries, please email vshwartz@cs.ubc.ca or zeerak_talat@sfu.ca.