How Do Algorithmic Fairness Metrics Align with Human Judgement? A Mixed-Initiative System for Contextualized Fairness Assessment

Rares Constantin, Moritz Dück, Anton Alexandrov, Patrik Matosevic, Daphna Keidar, Mennatallah El-Assady

View presentation: 2022-10-16T14:24:00Z
Exemplar figure: The simplified pipeline of FairAlign, a visual analytics platform for contextualized fairness assessment. An example workflow runs as follows: laypeople sign up, choose an annotation dashboard, and start making decisions about algorithmic fairness based on the provided visualizations of the data and the model's predictions. Once the annotation process is complete, data scientists and machine learning experts can log in and analyze the values of the predefined fairness metrics alongside the aggregated results of the human evaluation.
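The pipeline described above ultimately puts two quantities side by side: predefined fairness metrics computed from the model's predictions, and aggregated judgements collected from lay annotators. The Python sketch below illustrates one such comparison using demographic parity difference, a common group-fairness metric; the function names, the toy data, and the vote-fraction aggregation are illustrative assumptions, not part of the FairAlign system.

```python
from typing import Sequence

def demographic_parity_difference(
    y_pred: Sequence[int], group: Sequence[str], a: str, b: str
) -> float:
    """Absolute gap in positive-prediction rates between groups a and b."""
    def positive_rate(g: str) -> float:
        preds = [p for p, grp in zip(y_pred, group) if grp == g]
        return sum(preds) / len(preds)
    return abs(positive_rate(a) - positive_rate(b))

def aggregate_human_judgements(votes: Sequence[bool]) -> float:
    """Fraction of annotators who judged the predictions fair
    (an illustrative aggregation rule, not FairAlign's)."""
    return sum(votes) / len(votes)

# Toy example: groups "a" and "b" receive positive predictions at
# rates 0.75 vs 0.25, while 3 of 4 annotators judged the outcome fair.
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, group, "a", "b"))  # 0.5
print(aggregate_human_judgements([True, True, False, True]))   # 0.75
```

A mismatch between the two numbers, such as a large metric gap alongside a high human fairness score, is precisely the kind of metric-versus-judgement misalignment the paper's mixed-initiative assessment is designed to surface.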

The live footage of the talk, including the Q&A, can be viewed on the session page, TREX: Session 1.

Abstract