Semantic Explanation of Interactive Dimensionality Reduction
Yali Bian, Chris North, Eric Krokos, Sarah Joseph
Presentation: 2021-10-27, 15:50 UTC
Keywords
Dimensionality Reduction; Machine Learning Techniques; Data Analysis, Reasoning, Problem Solving, and Decision Making; Visual Representation Design; High-dimensional Data; Text/Document Data
Abstract
Interactive dimensionality reduction helps analysts explore high-dimensional data based on their personal needs and domain-specific problems. Recently, expressive nonlinear models have been employed to support these tasks. However, the interpretation of these human-steered nonlinear models during human-in-the-loop analysis has not been explored. To address this problem, we present a new visual explanation design called semantic explanation. Semantic explanation visualizes model behaviors in a manner similar to users' direct projection manipulations. This design conforms to the spatial analytic process and enables analysts to better understand how the model updates in response to their interactions. We propose a pipeline that empowers interactive dimensionality reduction with semantic explanation using counterfactuals. Based on this pipeline, we implement a visual text analytics system with nonlinear dimensionality reduction powered by deep learning via the BERT model. We demonstrate the efficacy of semantic explanation through two case studies: academic article exploration and intelligence analysis.
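The sketch below illustrates, very loosely, the kind of pipeline the abstract describes: embed documents with a BERT-style encoder, project them with a nonlinear dimensionality reduction, and use simple counterfactuals (here, word deletions) to explain where a document lands in the 2-D layout. It is not the authors' implementation; the encoder (sentence-transformers), the projection method (umap-learn), and the word-deletion probe are all stand-in assumptions for illustration.

```python
# Minimal sketch of a counterfactual-style explanation for a nonlinear
# projection of text (NOT the paper's actual semantic-explanation pipeline).
import numpy as np
from sentence_transformers import SentenceTransformer
import umap

docs = [
    "Deep learning for document classification",
    "Interactive visual analytics of text corpora",
    "Dimensionality reduction with neural networks",
]

# Stand-in for the paper's BERT encoder (assumed model name).
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(docs)  # shape: (n_docs, dim)

# Nonlinear projection to the 2-D layout shown to the analyst.
reducer = umap.UMAP(n_components=2, random_state=42).fit(embeddings)
projection = reducer.embedding_

def counterfactual_word_shifts(doc: str):
    """For each word, delete it, re-embed, re-project, and measure how far
    the document moves in the 2-D layout. Words whose removal causes large
    shifts 'explain' the document's position (a toy counterfactual probe)."""
    base = reducer.transform(encoder.encode([doc]))[0]
    words = doc.split()
    shifts = {}
    for i, w in enumerate(words):
        variant = " ".join(words[:i] + words[i + 1:])
        moved = reducer.transform(encoder.encode([variant]))[0]
        shifts[w] = float(np.linalg.norm(moved - base))
    return sorted(shifts.items(), key=lambda kv: -kv[1])

print(counterfactual_word_shifts(docs[0]))
```

Reading the ranked shifts tells the analyst which terms most anchor a document's position in the projection, which is the intuition behind explaining model behavior in the same spatial terms as the user's direct manipulations.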