Friday, September 27, 2024 12:30pm to 1:30pm
About this Event
135 North Bellefield Avenue, Pittsburgh, PA, 15260
Title: Engaging an LLM to Explain Worked Examples for Java Programming: Prompt Engineering and a Feasibility Study
Speaker: Arun Balajiee
Abstract: Worked code examples are among the most popular types of learning content in programming classes. Most approaches and tools for presenting these examples to students are based on line-by-line explanations of the example code. However, instructors rarely have time to provide line-by-line explanations for the large number of examples typically used in a programming class. This paper explores the opportunity to facilitate the development of worked examples for Java programming through a human-AI collaborative authoring approach. The idea of collaborative authoring is to generate a starting version of the code explanations using an LLM and present it to the instructor to edit if necessary. The critical step towards implementing this idea is to ensure that the LLM can produce code explanations that look meaningful and acceptable to instructors and students. To achieve this goal, we performed an extensive prompt engineering study and evaluated the explanations produced by the selected prompt in a user study with students and authors.
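As a rough sketch of the kind of workflow the abstract describes (not the actual prompts or tooling from the study; the LlmClient interface, the instruction text, and the stub response below are hypothetical), a line-numbered Java example could be wrapped in a prompt that asks an LLM for line-by-line explanations, producing a draft for the instructor to review and edit:

import java.util.List;

/** Hypothetical LLM client; any chat-completion API could stand in here. */
interface LlmClient {
    String complete(String prompt);
}

public class WorkedExampleExplainer {

    /** Builds a line-by-line explanation prompt for a Java worked example. */
    static String buildPrompt(List<String> codeLines) {
        StringBuilder sb = new StringBuilder();
        sb.append("You are helping an instructor author a worked example for an ")
          .append("introductory Java course.\n")
          .append("Explain the purpose of each numbered line in plain language, ")
          .append("one explanation per line, formatted as 'line N: explanation'.\n\n");
        for (int i = 0; i < codeLines.size(); i++) {
            sb.append(i + 1).append(": ").append(codeLines.get(i)).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<String> example = List.of(
                "int sum = 0;",
                "for (int i = 1; i <= 10; i++) {",
                "    sum += i;",
                "}",
                "System.out.println(sum);");
        // Stub standing in for a real LLM call
        LlmClient client = prompt -> "(draft line-by-line explanations)";
        String draft = client.complete(buildPrompt(example));
        System.out.println(draft); // shown to the instructor for editing, not published directly
    }
}

In the collaborative authoring idea described above, the generated draft is a starting point that the instructor edits before the explanations reach students.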
Title: One Size Does Not Fit All: Designing and Evaluating Criticality-Adaptive Displays in Highly Automated Vehicles
Speaker: Yaohan Ding
Abstract: To improve drivers’ overall experiences in highly automated vehicles, we designed three criticality-adaptive displays: an IO display highlighting Influential Objects, a CO display highlighting Critical Objects, and an ICO display highlighting Influential and Critical Objects differently. We conducted an online video-based survey study with 295 participants to evaluate them under varying traffic conditions. Results showed that the ICO display was considered the best, as it led to the most positive impacts on overall experience. Specifically, low-trust-propensity participants found the ICO display more useful, while high-trust-propensity participants found the CO display more useful. When interacting with humans in traffic, participants had higher situational awareness (SA) but worse non-driving-related task (NDRT) performance. Older age and the CO display also led to slower NDRT reactions. Nonetheless, older participants found the displays more useful. We recommend providing different criticality-adaptive displays based on drivers’ trust to enhance driving and NDRT performance, and we suggest carefully treating objects of different categories in traffic.
Bio: Yaohan Ding is a PhD student in the Intelligent Systems Program at the University of Pittsburgh. Her research interest lies in human-computer interaction, with a specific focus on enhancing user experiences in human-automated vehicle interaction.
Title: What metrics of participation balance predict outcomes of collaborative learning with a robot?
Speaker: Yuya Asano
Abstract: One of the keys to the success of collaborative learning is balanced participation by all learners, but this does not always happen naturally. Pedagogical robots have the potential to facilitate balance. However, it remains unclear what participation balance robots should aim for; various metrics have been proposed, but it is still an open question whether we should balance human participation in human-human interactions (HHI) or human-robot interactions (HRI) and whether we should consider robots' participation in collaborative learning involving multiple humans and a robot. This paper examines collaborative learning between a pair of students and a teachable robot that acts as a peer tutee to answer these questions. Through an exploratory study, we hypothesize which balance metrics in the literature and which portions of dialogues (including vs. excluding robots' participation, and human participation in HHI vs. HRI) will better predict learning as a group.
We test these hypotheses in another study and replicate the results with automatically obtained units of participation to simulate the information available to robots when they adaptively correct imbalances in real time. Finally, we discuss recommendations on which metrics learning science researchers should choose when trying to understand how to facilitate collaboration.
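The abstract leaves the specific balance metrics to the talk; as an illustration only, one widely used measure of participation balance is the Gini coefficient over each participant's contribution (word counts in this made-up example), which can be computed either including or excluding the robot's turns:

import java.util.Arrays;

/**
 * Illustrative sketch (not from the talk): the Gini coefficient over each
 * speaker's contribution (e.g., word counts or talk time) is one common way
 * to quantify participation balance; 0 means perfectly balanced.
 */
public class ParticipationBalance {

    static double gini(double[] contributions) {
        double[] x = contributions.clone();
        Arrays.sort(x);
        double cumWeighted = 0.0, total = 0.0;
        int n = x.length;
        for (int i = 0; i < n; i++) {
            cumWeighted += (i + 1) * x[i];  // rank-weighted sum over sorted values
            total += x[i];
        }
        // Standard formula: G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n
        return (2.0 * cumWeighted) / (n * total) - (n + 1.0) / n;
    }

    public static void main(String[] args) {
        // Word counts for student A, student B, and the robot (made-up numbers)
        double[] withRobot = {320, 180, 250};
        double[] humansOnly = {320, 180};
        System.out.printf("Gini incl. robot: %.3f%n", gini(withRobot));
        System.out.printf("Gini humans only: %.3f%n", gini(humansOnly));
    }
}

Values closer to 0 indicate more equal participation, while values closer to 1 indicate that one participant dominates; comparing the two calls mirrors the abstract's question of whether to include the robot's participation when measuring balance.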
Bio: Yuya is a PhD student in the Intelligent Systems Program at the School of Computing and Information, advised by Dr. Diane Litman. He is also affiliated with the Learning Research & Development Center at the University of Pittsburgh. His research interests lie at the intersection of natural language processing (NLP), human-computer interaction (HCI), and educational technology. He works on applications of NLP in education to offer high-quality education at scale without requiring extensive effort from human teachers. He received an HBSc in Computer Science from the University of Toronto, Canada.
Please let us know if you require an accommodation in order to participate in this event. Accommodations may include live captioning, ASL interpreters, and/or captioned media and accessible documents from recorded events. Requesting accommodations at least five days in advance is recommended.