
FairRankVis: A Visual Analytics Framework for Exploring Algorithmic Fairness in Graph Mining Models

Tiankai Xie, Yuxin Ma, Jian Kang, Hanghang Tong, Ross Maciejewski

View presentation: 2021-10-29T15:45:00Z
Exemplar figure:
FairRankVis is a visual analytics framework designed to enable the exploration of multi-class bias in graph mining algorithms. The proposed framework is model agnostic, supports both group and individual fairness levels of comparison, and consists of a suite of interactive visualizations for investigating node attributes and topological features of graph elements to explore algorithmic fairness. The image demonstrates the fairness diagnosis of InFoRM (a debiased ranking model) on Weibo social network data.
Fast forward

Direct link to video on YouTube: https://youtu.be/LAxI6_i3CHo

Abstract

Graph mining is an essential component of recommender systems and search engines. Outputs of graph mining models typically provide a ranked list sorted by each item's relevance or utility. However, recent research has identified issues of algorithmic bias in such models, and new graph mining algorithms have been proposed to correct for bias. As such, algorithm developers need tools that can help them uncover potential biases in their models while also exploring the impacts of correcting for biases when employing fairness-aware algorithms. In this paper, we present FairRankVis, a visual analytics framework designed to enable the exploration of multi-class bias in graph mining algorithms. We support comparison at both the group and individual fairness levels. Our framework is designed to enable model developers to compare multi-class fairness between algorithms (for example, comparing PageRank with a debiased PageRank algorithm) to assess the impacts of algorithmic debiasing with respect to group and individual fairness. We demonstrate our framework through two usage scenarios that inspect algorithmic fairness.
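To make the kind of comparison the abstract describes concrete, here is a minimal, hedged sketch in the spirit of a group-level fairness check on ranking output. It computes PageRank by power iteration on a toy graph and averages the resulting scores per protected group; the graph, the group labels, and the helper names (`pagerank`, `mean_score_by_group`) are illustrative assumptions, not FairRankVis internals or the InFoRM debiasing method.

```python
# Hedged sketch: group-level summary of a ranking algorithm's output,
# in the spirit of the PageRank vs. debiased-PageRank comparison
# described in the abstract. Toy data; not FairRankVis code.

def pagerank(adj, d=0.85, iters=100):
    """Power-iteration PageRank on a dense 0/1 adjacency matrix."""
    n = len(adj)
    scores = [1.0 / n] * n
    out_deg = [sum(row) for row in adj]
    for _ in range(iters):
        new = [(1 - d) / n] * n
        for u in range(n):
            if out_deg[u] == 0:
                continue  # dangling node: contributes nothing here
            share = d * scores[u] / out_deg[u]
            for v in range(n):
                if adj[u][v]:
                    new[v] += share
        scores = new
    return scores

def mean_score_by_group(scores, groups):
    """Average ranking score per protected group (a group-fairness view)."""
    totals, counts = {}, {}
    for s, g in zip(scores, groups):
        totals[g] = totals.get(g, 0.0) + s
        counts[g] = counts.get(g, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

# Toy 4-node directed graph: nodes 0-1 in group "A", nodes 2-3 in "B".
adj = [[0, 1, 1, 0],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [1, 0, 0, 0]]
groups = ["A", "A", "B", "B"]

scores = pagerank(adj)
print(mean_score_by_group(scores, groups))
```

Running the same summary on the output of a debiased ranker and comparing the per-group averages gives one simple quantitative lens on the debiasing impact; a visual analytics tool such as the one described above would surface this alongside individual-level and topological views.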