Friday, December 3, 2021 12:30pm to 1:00pm
About this Event
Speaker: Nihal Murali
Paper Title: Augmentation by Counterfactual Explanation - Fixing an Overconfident Black-Box
Abstract: A highly accurate but overconfident model is ill-suited for decision-making pipelines, especially in critical applications such as healthcare or autonomous driving. The classification outcome should reflect high uncertainty on ambiguous in-distribution samples that lie close to the decision boundary. Furthermore, the classification model should refrain from making overconfident decisions on samples that lie far outside its training distribution (far out-of-distribution, far-OOD) or on previously unseen samples from novel classes that lie near its training distribution (near-OOD).

In this paper, we propose fine-tuning a given pre-trained classification model to fix its uncertainty characteristics while retaining its predictive performance. Specifically, we propose using a Progressive Counterfactual Explainer (PCE) to generate data augmentation for fine-tuning the classifier. The PCE is a conditional Generative Adversarial Network (cGAN) trained to generate samples that visually traverse the separating boundary of the classifier. The discriminator of the PCE serves as a density estimator to identify and reject OOD samples.

We perform extensive experiments with far-OOD, near-OOD, and ambiguous samples. Our empirical results show that our model improves the uncertainty estimates of the baseline, and its performance is competitive with other methods that require a significant change to, or a complete re-training of, the baseline model.
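The core idea of the abstract, augmenting training with samples that traverse the decision boundary and carry matching soft labels, can be illustrated with a toy sketch. This is not the authors' implementation: instead of a cGAN-based PCE, the "counterfactuals" below are simple reflections of 2D points across the boundary of a logistic-regression classifier, and "fine-tuning" is retraining on the augmented set. All names and data here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated Gaussian classes: a classifier trained on hard labels
# produces saturated (overconfident) probabilities even near the boundary.
X0 = rng.normal(-2.0, 0.5, size=(200, 2))
X1 = rng.normal(+2.0, 0.5, size=(200, 2))
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(200), np.ones(200)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, epochs=3000, lr=0.1):
    """Plain logistic regression via gradient descent on cross-entropy."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        g = p - y                       # d(loss)/d(logit)
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

w_base, b_base = train(X, y)            # overconfident baseline

# Stand-in for the PCE: for each sample, take a counterfactual on the other
# side of the boundary (here simply the reflection -x) and interpolate toward
# it; the soft label tracks how far along the traversal the sample sits.
alpha = rng.uniform(0.2, 0.8, size=len(X))            # traversal positions
X_aug = (1 - alpha)[:, None] * X + alpha[:, None] * (-X)
y_aug = (1 - alpha) * y + alpha * (1 - y)             # soft labels

# "Fine-tune" by retraining on original plus counterfactual-augmented data.
w_ft, b_ft = train(np.vstack([X, X_aug]), np.concatenate([y, y_aug]))

probe = np.array([0.6, 0.6])            # ambiguous point near the boundary
p_base = sigmoid(probe @ w_base + b_base)
p_ft = sigmoid(probe @ w_ft + b_ft)
acc = ((sigmoid(X @ w_ft + b_ft) > 0.5) == y.astype(bool)).mean()
print(f"baseline conf {p_base:.3f}, fine-tuned conf {p_ft:.3f}, acc {acc:.3f}")
```

In this sketch the retrained model stays accurate on the original data while its probability at the boundary probe moves toward 0.5, mirroring the paper's goal of fixing uncertainty characteristics without sacrificing predictive performance.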
RSVP: https://pitt.co1.qualtrics.com/jfe/form/SV_9NtGZQO9aNTOvVc
Please let us know if you require an accommodation in order to participate in this event. Accommodations may include live captioning, ASL interpreters, and/or captioned media and accessible documents from recorded events. Requesting accommodations at least 5 days in advance is recommended.