NLP research at UBC


Group Photo
Some group members (December 2021)

The Natural Language Processing (NLP) group at the University of British Columbia conducts research on core NLP problems, computational linguistics, text mining, and visual text analytics. We focus in particular (but not exclusively) on the following research areas:

News and Events

Apr 7, 2022


Our paper Towards Understanding Large-Scale Discourse Structures in Pre-Trained and Fine-Tuned Language Models by Patrick Huber and Giuseppe Carenini has been accepted to NAACL 2022 in Seattle.

Apr 7, 2022


Our paper CCQA: A New Web-Scale Question Answering Dataset for Model Pre-Training in collaboration with Facebook Inc. has been accepted to NAACL 2022 Findings.

Dec 1, 2021


Our paper Predicting Above-Sentence Discourse Structure using Distant Supervision from Topic Segmentation by Patrick Huber, Linzi Xing and Giuseppe Carenini has been accepted to AAAI 2022.


Discourse and Summarization: We are working on discourse parsing, coreference resolution, and automatic summarization, with a focus on long documents. We are also working on analyzing conversations (such as emails, meetings, blogs, and chats), through topic segmentation, sentiment analysis, controversiality prediction, and conversational structure extraction.

Pragmatics and Commonsense Reasoning: We are working on acquiring and representing commonsense knowledge, developing reasoning modules, and incorporating commonsense into NLP models.

Natural Language Understanding: We are interested in tasks such as machine reading comprehension and natural language inference, in particular those involving reading between the lines. We are also working on interpreting figurative language.

Natural Language Generation: We are working on text structuring in text-to-text, data-to-text, and image-to-text tasks. We are also interested in the automatic evaluation of text generation models.

Text Analytics: We are working on tightly integrating interactive visualization with text mining and summarization techniques for information exploration and scalable decision support.

Learn more about who we are and what we are doing.

If you are interested in working with us as a graduate student, please apply here.