130 North Bellefield Ave., 538/539 Conference Room

The DINS PhD Student Speaker Series, an ongoing series of presentations by DINS PhD students, explores the next generation of scholarship about networks, information, and human behavior.

"Predictive Models for Driver Situational Awareness of Objects in Conditionally Automated Driving"
Lesong Jia
Abstract: Modeling a driver's situational awareness (SA) is critical to enhancing the safety and efficiency of the takeover process in conditionally automated driving. This study developed machine learning models to predict drivers' object-specific SA during the takeover process, using features including traffic density, object properties, driver demographics, and physiological data. The dataset, comprising 168 takeover events from 28 participants, was collected through a driving simulator experiment in which each participant experienced 6 challenging takeover scenarios with varying traffic densities and surrounding vehicle configurations. To construct our model, we first pre-processed the raw physiological data, then extracted and selected 26 key features based on feature importance and inter-correlation. Next, we applied cross-validation to train and evaluate various models, time windows, and hyperparameters using iterative grid search. We primarily used the macro F1 score for evaluation, given the class imbalance in the dataset. The best performance was achieved by a Support Vector Machine model, which obtained a macro F1 score of 0.75 and a macro recall score of 0.77 with a 1-second pre-takeover and 3-second post-takeover time window. Our modeling work could contribute to the development of driver monitoring and takeover support systems in conditionally automated vehicles.
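The evaluation pipeline described above (cross-validated grid search over model hyperparameters, scored by macro F1 to handle class imbalance) can be sketched with scikit-learn. This is a minimal illustration, not the study's actual code: the synthetic data stands in for the 26 selected features and 168 takeover events, and the hyperparameter grid is an assumed example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic, imbalanced stand-in for the 26 selected takeover features
# (168 events across three hypothetical SA classes).
X, y = make_classification(n_samples=168, n_features=26, n_informative=10,
                           n_classes=3, weights=[0.6, 0.25, 0.15],
                           random_state=0)

# Scale features, then grid-search SVM hyperparameters with macro F1,
# which averages per-class F1 so minority SA classes count equally.
pipe = make_pipeline(StandardScaler(), SVC())
grid = GridSearchCV(
    pipe,
    param_grid={"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01]},
    scoring="f1_macro",
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```

In practice one run of this search would be repeated per candidate time window (e.g. the 1-second pre-takeover / 3-second post-takeover window the abstract reports), keeping the window whose best model scores highest.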
 

"Improving Explainable Object-induced Model through Uncertainty for Automated Vehicles"
Shihong Ling
Abstract: Advancements in artificial intelligence have enhanced the capabilities of automated vehicles (AVs). However, challenges remain regarding the transparency of these systems, which can lead to potential user misconceptions. While previous studies have produced explanations for driving, our research emphasizes the integration of uncertainties associated with actions. Using the BDD-OIA dataset, we introduced a reweighting strategy driven by uncertainty, optimized an object-induced model, and improved the interpretability of complex driving situations. Our approach, which incorporates the evidential deep learning (EDL) method, demonstrated significantly enhanced performance compared to conventional methods. We will focus our future efforts on developing adaptable explanations to enhance user trust in AVs.
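The core evidential idea behind the uncertainty-driven reweighting can be sketched in a few lines: an evidential head outputs non-negative evidence per class, which parameterizes a Dirichlet distribution whose total strength yields a per-sample uncertainty. This is a minimal NumPy sketch under assumed conventions (softplus evidence, uncertainty u = K / Σα), not the paper's actual model; the reweighting rule at the end is a hypothetical example of how such uncertainties could drive sample weights.

```python
import numpy as np

def dirichlet_uncertainty(logits):
    """Evidential head: map logits to class probabilities and uncertainty.

    evidence = softplus(logits) >= 0; alpha = evidence + 1 parameterizes
    a Dirichlet. Uncertainty u = K / sum(alpha) equals 1 with no evidence
    and shrinks as evidence accumulates.
    """
    evidence = np.log1p(np.exp(logits))          # softplus, elementwise
    alpha = evidence + 1.0                       # Dirichlet parameters
    strength = alpha.sum(axis=-1, keepdims=True)
    probs = alpha / strength                     # expected class probabilities
    u = logits.shape[-1] / strength.squeeze(-1)  # per-sample uncertainty
    return probs, u

# Two hypothetical samples: one with strong evidence for class 0,
# one with weak, diffuse evidence.
logits = np.array([[4.0, 0.1, 0.1, 0.1],
                   [0.2, 0.1, 0.3, 0.2]])
probs, u = dirichlet_uncertainty(logits)

# Hypothetical reweighting: upweight uncertain samples in the loss.
weights = u / u.mean()
```

Here the diffuse second sample gets a higher uncertainty, and hence a larger weight, than the confident first one; the study's actual reweighting strategy may differ in form.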
 

Event Details

Please let us know if you require an accommodation in order to participate in this event. Accommodations may include live captioning, ASL interpreters, and/or captioned media and accessible documents from recorded events. Requesting accommodations at least 5 days in advance is recommended.
