Enabling Multimodal User Interactions for Genomics Visualization Creation

Qianwen Wang, Xiao Liu, Man Qing Liang, Sehi L'Yi, Nils Gehlenborg

Exemplar figure caption:
AutoGosling facilitates the creation of genomics visualizations by enabling multimodal interactions. Instead of directly constructing grammar-based visualization specifications, users express design intentions through sketches, example images, a template GUI, and natural language commands, which AutoGosling interprets and converts into interactive genomics visualizations. Interactions are introduced progressively to minimize information overload and reduce mode switching.
Keywords

Human-centered computing—Visualization—Visualization systems and tools; Human-centered computing—Interaction design

Abstract

Visualization plays an important role in extracting insights from complex, large-scale genomics data. Traditional graphical user interfaces (GUIs) offer limited flexibility for creating custom visualizations. Our prior work, Gosling, enables expressive visualization creation through a grammar-based approach, but beginners can struggle to construct complex visualizations with it. To address this, we explore multimodal interactions, including sketches, example images, and natural language inputs, to streamline visualization creation. Specifically, we customize two deep learning models (YOLOv7 and GPT-3.5) to interpret user interactions and convert them into Gosling specifications. We also propose a workflow that progressively introduces and integrates these multimodal interactions. We then present use cases demonstrating their effectiveness and identify challenges and opportunities for future research.
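
For illustration, the sketch below shows how a natural language command might be translated into a Gosling specification using the OpenAI chat API, in the spirit of the GPT-3.5 component described in the abstract. The prompt wording, the nl_to_gosling helper, and the example specification values are assumptions made for this sketch, not the authors' implementation; only the general shape of a Gosling spec (tracks with data, mark, and genomic/quantitative encodings) follows the published Gosling grammar.

    # Minimal sketch: natural language -> Gosling spec via GPT-3.5.
    # Hypothetical prompt and helper; not the AutoGosling pipeline.
    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    SYSTEM_PROMPT = (
        "You translate visualization requests into Gosling JSON "
        "specifications. Respond with JSON only."
    )

    def nl_to_gosling(command: str) -> dict:
        """Hypothetical helper: ask GPT-3.5 for a Gosling specification."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": command},
            ],
        )
        return json.loads(response.choices[0].message.content)

    # The kind of output expected: a single bar-chart track drawn
    # along a genomic axis (illustrative URL and field names).
    example_spec = {
        "tracks": [{
            "data": {"url": "https://example.com/peaks.csv", "type": "csv"},
            "mark": "bar",
            "x": {"field": "position", "type": "genomic"},
            "y": {"field": "peak", "type": "quantitative"},
        }]
    }

    if __name__ == "__main__":
        spec = nl_to_gosling("Show ChIP-seq peaks as a bar chart across chr1")
        print(json.dumps(spec, indent=2))

In practice such a helper would need to validate the model's output against the Gosling schema before rendering, since the language model may return malformed or incomplete specifications.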