Events Calendar

03 Dec
Event Type

Lectures, Symposia, Etc.

Topic

Research, Technology

Target Audience

Undergraduate Students, Staff, Faculty, Graduate Students

Website

https://pitt.co1.qualtrics.com/jfe/fo...

University Unit

Intelligent Systems Program
Hashtag

#isp

ISP AI Forum: Augmentation by Counterfactual Explanation - Fixing an Overconfident Black-Box

This is a past event.

Speaker: Nihal Murali

Paper Title: Augmentation by Counterfactual Explanation - Fixing an Overconfident Black-Box

Abstract: A highly accurate but overconfident model is ill-suited for decision-making pipelines, especially in critical applications such as healthcare or autonomous driving. The classification outcome should reflect high uncertainty on ambiguous in-distribution samples that lie close to the decision boundary. Furthermore, the classification model should refrain from making overconfident decisions on samples that lie far outside its training distribution (far out-of-distribution, or far-OOD) or on previously unseen samples from novel classes that lie near its training distribution (near-OOD). In this paper, we propose fine-tuning a given pre-trained classification model to fix its uncertainty characteristics while retaining its predictive performance. Specifically, we propose using a Progressive Counterfactual Explainer (PCE) to generate data augmentations for fine-tuning the classifier. The PCE is a form of conditional Generative Adversarial Network (cGAN) trained to generate samples that visually traverse the separating boundary of the classifier. The discriminator of the PCE serves as a density estimator to identify and reject OOD samples. We perform extensive experiments with far-OOD, near-OOD, and ambiguous samples. Our empirical results show that our model improves the uncertainty estimates of the baseline, and its performance is competitive with other methods that require a significant change to, or a complete re-training of, the baseline model.
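
For readers who want a concrete picture of the approach the abstract outlines, below is a minimal, illustrative PyTorch-style sketch, not the authors' implementation. It assumes a pre-trained two-class classifier that returns logits, a hypothetical pce_generate(x, lam) that returns counterfactuals traversing the decision boundary, and a hypothetical disc_score(x) exposing the PCE discriminator's density estimate; all of these names are placeholders.

# Illustrative sketch only (not the authors' code): fine-tune a pre-trained
# classifier on counterfactual augmentations from a PCE, then use the PCE
# discriminator's density score to abstain on out-of-distribution inputs.
# `pce_generate` and `disc_score` are hypothetical stand-ins for a trained PCE.
import torch
import torch.nn.functional as F


def finetune_with_counterfactuals(classifier, pce_generate, loader,
                                  lambdas=(0.2, 0.4, 0.6, 0.8), lr=1e-4):
    """Fine-tune `classifier` (2-class, returns logits) on PCE samples.

    `pce_generate(x, lam)` is assumed to return counterfactuals of `x` whose
    class posterior is shifted by `lam` towards the opposite class, so `lam`
    doubles as a soft label for the augmented samples.
    """
    opt = torch.optim.Adam(classifier.parameters(), lr=lr)
    classifier.train()
    for x, y in loader:                        # y holds binary labels {0, 1}
        loss = F.cross_entropy(classifier(x), y)           # keep original accuracy
        for lam in lambdas:
            x_cf = pce_generate(x, lam)        # samples that traverse the boundary
            soft = torch.zeros(x.size(0), 2, device=x.device)
            soft[torch.arange(x.size(0)), y] = 1.0 - lam    # mass left on true class
            soft[torch.arange(x.size(0)), 1 - y] = lam      # mass moved across boundary
            logp = F.log_softmax(classifier(x_cf), dim=1)
            loss = loss + F.kl_div(logp, soft, reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()
    return classifier


def predict_or_abstain(classifier, disc_score, x, tau=0.5):
    """Return class probabilities, or None when the PCE discriminator flags OOD."""
    if disc_score(x) < tau:                    # low density score => likely OOD
        return None                            # abstain rather than guess overconfidently
    return classifier(x).softmax(dim=1)

The soft labels make the fine-tuned classifier less confident on boundary-crossing samples, while the abstain path illustrates how the discriminator's density estimate could gate predictions; thresholds and lambda schedules here are arbitrary choices for illustration.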

RSVP: https://pitt.co1.qualtrics.com/jfe/form/SV_9NtGZQO9aNTOvVc

Friday, December 3, 12:30 p.m. to 1:00 p.m.

Virtual Event
