Combining Voice and Gesture for Presenting Data to Remote Audiences

Arjun Srinivasan, Matthew Brehmer

Room: 105

2023-10-21T22:00:00Z
Exemplar figure: Six moments in a remote presentation about American post-secondary institutions and their admission statistics, in which the presenter appears behind a semi-transparent unit chart composited in the foreground. The presenter ephemerally selects and highlights categories and items in the chart via pointing. Meanwhile, utterances forming part of the presenter's spoken monologue trigger the filtering, sorting, and aggregating of the data. See the supplemental video to watch the 3-minute presentation.
Abstract

We consider the combination of voice commands with touchless bimanual gestures performed during presentations about data delivered via teleconference applications. Our demonstration extends recent work on touchless gestural interaction in a presentation environment where charts can be composited over live webcam video and respond dynamically to the presenter’s operational (i.e., functional and deictic) hand gestures. Complementing these gestures with voice commands unlocks new functionality: the ability to precisely filter, sort, and highlight subsets of the data. While these capabilities give presenters more flexibility with respect to presentation linearity and responding to audience questions, imperative voice commands can come across to audiences as stilted or unnatural, and may be distracting.
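To illustrate the kind of voice-triggered chart operations described above, the following is a minimal sketch of matching command phrases within a transcribed utterance. All names here (`CHART_COMMANDS`, `parse_utterance`) and the keyword patterns are hypothetical illustrations, not the demo's actual implementation, which is not described at this level of detail.

```python
import re

# Hypothetical phrase patterns for the three operations named in the
# abstract; each captures an argument of up to two words.
CHART_COMMANDS = [
    (re.compile(r"\bfilter (?:to|by) (?P<value>\w+(?: \w+)?)", re.I), "filter"),
    (re.compile(r"\bsort by (?P<value>\w+(?: \w+)?)", re.I), "sort"),
    (re.compile(r"\bhighlight (?P<value>\w+(?: \w+)?)", re.I), "highlight"),
]

def parse_utterance(utterance: str):
    """Return (operation, argument) pairs detected in a transcribed utterance."""
    ops = []
    for pattern, op in CHART_COMMANDS:
        for match in pattern.finditer(utterance):
            ops.append((op, match.group("value").strip()))
    return ops

# Example: a phrase embedded in the presenter's monologue triggers one operation.
print(parse_utterance("Now sort by admission rate"))  # → [('sort', 'admission rate')]
```

In a real system the utterance would come from a speech recognizer, and the resulting (operation, argument) pairs would be dispatched to the chart; embedding such phrases in the monologue is what can make the delivery feel stilted, as the abstract notes.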