Friday, December 2, 2022 12:30pm to 1:00pm
About this Event
Abstract: In machine learning, and in natural language processing (NLP) especially, differences between the training and the testing data can have a disastrous impact on the performance of classifiers. For example, classification-based parsers in NLP are typically trained on news data but may be applied to a variety of other genres. Unexpected errors arising from this shift can propagate and degrade performance on downstream tasks. To combat and predict these unexpected errors, we look to Domain Adaptation Theory, a subfield of Computational Learning Theory that studies the behavior of learning algorithms under data shift. In particular, we contribute a novel PAC-Bayesian Domain Adaptation bound, along with principled approximation techniques for computing the statistics that appear in the bound. To show the utility of our theoretical results, we apply the bound to analyze a common adversarial learning algorithm (DANN) designed to combat data shift. Further, we show how statistics from the bound can be used to predict parser errors (without access to test data) on a variety of discourse sense classification datasets. Our empirical analyses provide practically useful insights and (potential) answers to open theoretical questions. Finally, time permitting, we briefly discuss how similar theoretical techniques can be used to study text generation in dialogue.
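For context, a classical domain adaptation result of this flavor (the well-known bound of Ben-David et al., offered here as background, not the speaker's new PAC-Bayesian result) controls a classifier's error on the target (test) domain by its error on the source (training) domain plus a divergence between the two data distributions:

\[
\epsilon_T(h) \;\le\; \epsilon_S(h) \;+\; \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(D_S, D_T) \;+\; \lambda,
\qquad
\lambda \;=\; \min_{h' \in \mathcal{H}} \big[\, \epsilon_S(h') + \epsilon_T(h') \,\big].
\]

The DANN algorithm mentioned in the abstract learns features under which source and target data are hard to distinguish, which roughly shrinks the divergence term in bounds like the one above. As a minimal, illustrative sketch of its gradient reversal idea (assuming PyTorch; the module names and dimensions below are hypothetical, not the speaker's implementation):

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) gradients on the
    backward pass, so the feature extractor learns to fool the domain head."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing from the domain discriminator.
        return -ctx.lambd * grad_output, None

class DANN(nn.Module):
    """Shared feature extractor feeding a label classifier and a
    domain discriminator (source vs. target) through gradient reversal."""
    def __init__(self, in_dim=100, hidden=64, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.label_head = nn.Linear(hidden, n_classes)
        self.domain_head = nn.Linear(hidden, 2)  # source vs. target

    def forward(self, x, lambd=1.0):
        h = self.features(x)
        return self.label_head(h), self.domain_head(GradReverse.apply(h, lambd))
```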
Bio: Anthony Sicilia is a PhD student in the Intelligent Systems Program.
RSVP for Zoom Meeting Information: https://pitt.co1.qualtrics.com/jfe/form/SV_cHZ8SndLoF22hLw
Please let us know if you require an accommodation in order to participate in this event. Accommodations may include live captioning, ASL interpreters, and/or captioned media and accessible documents from recorded events. Requesting accommodations at least 5 days in advance is recommended.