Enabling Multimodal User Interactions for Genomics Visualization Creation
Qianwen Wang, Xiao Liu, Man Qing Liang, Sehi L'Yi, Nils Gehlenborg
Room: 104
2023-10-25T05:03:00Z
Keywords
Human-centered computing—Visualization—Visualization systems and tools; Human-centered computing—Interaction Design;
Abstract
Visualization plays an important role in extracting insights from complex and large-scale genomics data. Traditional graphical user interfaces (GUIs) offer limited flexibility for custom visualizations. Our prior work, Gosling, enables expressive visualization creation using a grammar-based approach, but beginners may face challenges in constructing complex visualizations. To address this, we explore multimodal interactions, including sketches, example images, and natural language inputs, to streamline visualization creation. Specifically, we customize two deep learning models (YOLOv7 and GPT-3.5) to interpret user interactions and convert them into Gosling specifications. We also propose a workflow that progressively introduces and integrates these multimodal interactions. We then present use cases demonstrating their effectiveness and identify challenges and opportunities for future research.
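To illustrate the natural-language path described in the abstract, the sketch below shows how a free-text request might be turned into a Gosling JSON specification by prompting GPT-3.5 and parsing the reply. This is not the authors' pipeline: the prompt wording, the function name `nl_to_gosling_spec`, and the use of the pre-1.0 `openai` Python client are assumptions made for the example, and a real system would also validate the returned specification against the Gosling grammar.

```python
import json
import openai  # assumes the (pre-1.0) openai package is installed and an API key is configured

# Hypothetical system prompt: constrain the model to answer with a Gosling JSON spec only.
SYSTEM_PROMPT = (
    "You translate natural-language requests for genomics visualizations "
    "into Gosling JSON specifications. Respond with JSON only."
)

def nl_to_gosling_spec(request: str) -> dict:
    """Send a natural-language request to GPT-3.5 and parse the returned Gosling spec."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": request},
        ],
        temperature=0,  # favor deterministic, parseable output
    )
    return json.loads(response["choices"][0]["message"]["content"])

if __name__ == "__main__":
    spec = nl_to_gosling_spec("Show gene annotations on chromosome 3 as a bar chart.")
    print(json.dumps(spec, indent=2))
```

The sketch- and image-based paths would analogously map detections from the customized YOLOv7 model (e.g., hand-drawn marks and their layout) onto fields of the same Gosling specification, but that step depends on the customized model and is omitted here.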