Laboratory of Informatics of Grenoble, Équipe Ingénierie de l'Interaction Humain-Machine

OctoPocus in VR: Using a Dynamic Guide for 3D Mid-Air Gestures in Virtual Reality

In IEEE Transactions on Visualization and Computer Graphics, 2021.

Katherine Fennedy, Jeremy Hartmann, Quentin Roy, Simon Perrault, Daniel Vogel

(work done at WaterlooHCI)

Abstract

Bau and Mackay's OctoPocus dynamic guide helps novices learn, execute, and remember 2D surface gestures. We adapt OctoPocus to 3D mid-air gestures in Virtual Reality (VR) by using an optimization-based recognizer and by introducing an optional exploration mode that helps visualize the spatial complexity of guides in a 3D gesture set. A replication of the original experiment protocol is used to compare OctoPocus in VR with a VR implementation of a crib-sheet. Results show that despite requiring 0.9s more reaction time than the crib-sheet, OctoPocus enables participants to execute gestures 1.8s faster and with 13.8% higher accuracy during training, while remembering a comparable number of gestures. Subjective ratings support these results: 75% of participants found OctoPocus easier to learn, and 83% found it more accurate. We contribute an implementation and empirical evidence demonstrating that adapting the OctoPocus guide to VR is feasible and beneficial.
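
For readers unfamiliar with OctoPocus-style dynamic guides, the sketch below illustrates the general mechanism the abstract alludes to: as the user draws, each candidate gesture's remaining path is displayed with an opacity that reflects how well the drawn prefix matches that gesture's template. This is a minimal illustration under stated assumptions; the function names, the uniform resampling, and the mean point-distance score are placeholders for this sketch, not the paper's optimization-based recognizer.

    # Minimal sketch (assumed names and metric, not the paper's recognizer):
    # score the drawn 3D prefix against each gesture template's prefix, and map
    # the scores to per-guide opacities so the best-matching guide is most visible.
    import numpy as np

    def resample(points: np.ndarray, n: int) -> np.ndarray:
        """Resample a 3D polyline (m x 3) to n evenly spaced points by interpolation."""
        t_old = np.linspace(0.0, 1.0, len(points))
        t_new = np.linspace(0.0, 1.0, n)
        return np.stack([np.interp(t_new, t_old, points[:, d]) for d in range(3)], axis=1)

    def prefix_score(drawn: np.ndarray, template: np.ndarray) -> float:
        """Mean point distance between the drawn prefix and the matching-length
        prefix of a template (templates assumed pre-resampled to a common length)."""
        k = max(2, int(len(template) * min(1.0, len(drawn) / len(template))))
        a = resample(drawn, k)
        b = template[:k]
        return float(np.mean(np.linalg.norm(a - b, axis=1)))

    def guide_opacities(drawn: np.ndarray, templates: list, sharpness: float = 5.0) -> np.ndarray:
        """Map per-template prefix errors to [0, 1] opacities for rendering the guides."""
        errors = np.array([prefix_score(drawn, t) for t in templates])
        weights = np.exp(-sharpness * (errors - errors.min()))
        return weights / weights.max()

In a VR rendering loop, guide_opacities would be re-evaluated each frame on the controller trajectory captured so far, and each guide's remaining path drawn with its returned opacity.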