
Undergraduate Students, Staff, Alumni, Prospective Students, Faculty, Graduate Students, Postdocs
With James Foulds, Assistant Professor, UMBC
This event is part of the Forbes Corridor Colloquia, sponsored by Pitt Cyber.
With the rising influence of artificial intelligence (AI) and machine learning (ML) systems on many important aspects of our daily lives, there are growing concerns that these systems may have harmful consequences, including the erosion of privacy, the potential for abuse, and unfair or discriminatory behavior. In this talk, I will discuss the need for responsible AI methods and practices, and the technical and non-technical interventions which can help to ensure that the potential harms and pitfalls of AI technologies are mitigated. I will then focus on my research group's proposed methods for ensuring that machine learning algorithms behave in a fair and equitable manner. I will present methods which help to avoid harmful discrimination against protected groups along lines of gender, race, sexual orientation, class, and disability, and show how to extend AI fairness protections to the marginalized populations at the intersections of these groups.
Dial-In Information
For log-in information, please complete the registration form.
Thursday, October 14, 4:00 p.m. to 5:15 p.m.
Virtual Event