Talk

Exploring fairlearn and practical strategies for assessing and mitigating harm in AI systems

Thursday, May 29

11:05 - 11:35
Room: Spaghetti
Language: English
Audience level: Intermediate
Elevator pitch

As AI becomes integral to our lives, ensuring fairness is crucial. Join us as we explore the concept of fairness, discuss potential harms, and introduce fairlearn: a community-driven, open-source toolkit for assessing and mitigating harm in your ML projects.

Abstract

As AI becomes a more significant part of our everyday lives, ensuring these systems are fair is more important than ever. In this session, we’ll discuss how to define fairness and the potential harms our algorithms can cause to people and society. We’ll introduce fairlearn, a community-driven, open-source project that offers practical tools for assessing and mitigating harm in AI systems. We’ll also cover how to talk about bias, the different types of harm, and the idea of group fairness, and show how these concepts map onto fairlearn’s toolkit. To make it all concrete, we’ll walk through a real-world example of a fairness assessment and share hands-on strategies you can use to mitigate harm in your own ML projects. The tools fairlearn provides integrate into your existing scikit-learn pipelines.
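For a taste of what the session covers, here is a minimal sketch of fairlearn's scikit-learn-style workflow: a disaggregated assessment with MetricFrame, followed by a mitigation with the ExponentiatedGradient reduction. The synthetic data, group labels, and choice of metrics below are illustrative assumptions, not material from the talk itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

# Hypothetical synthetic data: 1000 samples with one binary sensitive feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
sensitive = rng.integers(0, 2, size=1000)  # e.g. two demographic groups

clf = LogisticRegression().fit(X, y)
y_pred = clf.predict(X)

# Assessment: disaggregate metrics per sensitive group to surface disparities.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y,
    y_pred=y_pred,
    sensitive_features=sensitive,
)
print(mf.by_group)      # metric values per group
print(mf.difference())  # largest between-group gap per metric

# Mitigation: retrain the same estimator under a group-fairness constraint.
mitigator = ExponentiatedGradient(
    LogisticRegression(), constraints=DemographicParity()
)
mitigator.fit(X, y, sensitive_features=sensitive)
y_pred_mitigated = mitigator.predict(X)
```

Both pieces follow scikit-learn conventions (fit/predict estimators, metric callables), which is what lets them slot into existing pipelines.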

Tags: Machine-Learning
Participant

Tamara Atanasoska

Tamara is an open-source maintainer, a software engineer at :probabl, and a CompLing/NLP researcher.