How Do We Measure Trust in Visual Data Communication?

Hamza Elhamdadi, Aimen Gaba, Yea-Seul Kim, Cindy Xiong

Presentation: 2022-10-17T16:15:00Z

The live footage of the talk, including the Q&A, can be viewed on the session page, BELIV: Paper Session 1.

Abstract

Trust is fundamental to effective visual data communication between the visualization designer and the reader. Although personal experience and preference influence readers' trust in visualizations, visualization designers can leverage design techniques to create visualizations that evoke a "calibrated trust," at which readers arrive after critically evaluating the information presented. To systematically understand what drives readers to engage in "calibrated trust," we must first equip ourselves with reliable and valid methods for measuring trust. Computer science and data visualization researchers have not yet reached a consensus on a trust definition or metric, both of which are essential to building a comprehensive trust model in human-data interaction. On the other hand, social scientists and behavioral economists have developed and refined metrics that measure generalized and interpersonal trust, which the visualization community can reference, modify, and adapt to its needs. In this paper, we gather existing methods for evaluating trust from other disciplines and discuss how we might use them to measure, define, and model trust in data visualization research. Specifically, we discuss quantitative surveys from the social sciences, trust games from behavioral economics, measuring trust through belief updating, and measuring trust through perceptual methods. We assess the potential issues with these methods and consider how we can systematically apply them to visualization research.