Hand Tracking
For the specifics of designing with hand tracking, please also look at the Interaction Design section. For how to align input with interfaces, follow up with Interface Elements and Behaviors, as well as the Button States and Object Manipulation sections.
Let's talk for a second...
Basics of Hand Tracking
There are two main ways hand tracking is used. I cover them in more depth in Interaction Design, but here are a couple sentences and some pictures:
Direct manipulation is an input model that involves touching content directly with your hands. The idea behind this concept is that objects behave just as they would in the real world. Buttons can be activated simply by pressing them, objects can be picked up by grabbing them, and 2D content behaves like a virtual touchscreen. Direct manipulation is affordance-based, which is what makes it user-friendly: there are no symbolic gestures to teach users. All interactions are built around a visual element that you can touch or grab. It's considered a "near" input model in that it's used for interacting with content within arm's reach.
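To make that concrete, here's a rough sketch of fingertip button pressing using the WebXR Hand Input API. The PushButton shape and its numbers are made-up illustration values, not anyone's real UI system:

```typescript
// A sketch of direct-manipulation button pressing, assuming the WebXR
// Hand Input API. The PushButton fields are made-up values that a real
// UI layer would supply.
interface PushButton {
  center: DOMPointReadOnly; // world-space center of the button face
  radius: number;           // how far off-center a touch still counts
  pressDepth: number;       // how far the fingertip must sink to activate
}

function isButtonPressed(
  frame: XRFrame,
  hand: XRHand,
  refSpace: XRReferenceSpace,
  button: PushButton
): boolean {
  const tipSpace = hand.get("index-finger-tip");
  if (!tipSpace) return false;

  const tip = frame.getJointPose(tipSpace, refSpace);
  if (!tip) return false; // hand not tracked this frame

  // For simplicity, treat the button face as pointing up (+Y); a real
  // panel would project the fingertip onto its own normal instead.
  const dx = tip.transform.position.x - button.center.x;
  const dz = tip.transform.position.z - button.center.z;
  const lateral = Math.hypot(dx, dz);
  const depth = button.center.y - tip.transform.position.y;

  // Pressed when the fingertip is over the button and pushed past the
  // activation depth (the reported joint radius pads the finger surface).
  return lateral < button.radius && depth + tip.radius > button.pressDepth;
}
```

Notice that activation falls out of plain geometry, with no gesture classifier in sight. That's exactly what makes direct manipulation so learnable.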
Point and commit with hands is an input model that lets users hover, select, and manipulate out-of-reach content. This "far" interaction technique is unique to VR/AR because humans don't naturally interact with the real world that way.
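Here's what a bare-bones point-and-commit loop might look like with the same API. The knuckle-to-tip ray and the 2 cm pinch threshold are my own rough assumptions; shipping systems typically use a filtered shoulder-to-hand ray and tuned pinch detection:

```typescript
// A sketch of point and commit: cast a ray from the hand, and use a
// thumb-to-index pinch as the commit gesture.
const PINCH_THRESHOLD_M = 0.02; // assumed ~2 cm between thumb and index tips

function getPointerRay(frame: XRFrame, hand: XRHand, refSpace: XRReferenceSpace) {
  const knuckle = hand.get("index-finger-phalanx-proximal");
  const tip = hand.get("index-finger-tip");
  if (!knuckle || !tip) return null;

  const a = frame.getJointPose(knuckle, refSpace);
  const b = frame.getJointPose(tip, refSpace);
  if (!a || !b) return null; // hand not tracked this frame

  // Rough approximation: aim from the index knuckle through the tip.
  const dir = {
    x: b.transform.position.x - a.transform.position.x,
    y: b.transform.position.y - a.transform.position.y,
    z: b.transform.position.z - a.transform.position.z,
  };
  const len = Math.hypot(dir.x, dir.y, dir.z) || 1;
  return {
    origin: a.transform.position,
    direction: { x: dir.x / len, y: dir.y / len, z: dir.z / len },
  };
}

function isPinching(frame: XRFrame, hand: XRHand, refSpace: XRReferenceSpace): boolean {
  const thumbSpace = hand.get("thumb-tip");
  const indexSpace = hand.get("index-finger-tip");
  if (!thumbSpace || !indexSpace) return false;

  const thumb = frame.getJointPose(thumbSpace, refSpace);
  const index = frame.getJointPose(indexSpace, refSpace);
  if (!thumb || !index) return false;

  // Commit when the thumb and index fingertips come together.
  const d = Math.hypot(
    thumb.transform.position.x - index.transform.position.x,
    thumb.transform.position.y - index.transform.position.y,
    thumb.transform.position.z - index.transform.position.z
  );
  return d < PINCH_THRESHOLD_M;
}
```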
Audio and Visual Cues
Hand tracking gives you no haptics, and haptics provide an incredible amount of data that's integral to an experience, so you have to compensate with an over-inclusion of audio and visual cues. Make sure every single place that could have a haptic UI interaction has a spatialized sound effect at that location. If I know anything about audio (of which I know close to nothing), it's the importance of using audio cues to replace the positional awareness gained by haptics.
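As a sketch of what that could look like on the web, here's a one-shot spatialized click via the Web Audio API's PannerNode. The clickBuffer is assumed to be a pre-decoded AudioBuffer, and you'd still need to keep the AudioListener's pose synced to the headset every frame:

```typescript
// A sketch of replacing haptic feedback with a spatialized click: play a
// one-shot sound at the world position of the UI element the user touched.
const audioCtx = new AudioContext();

function playInteractionSound(
  clickBuffer: AudioBuffer, // assumed: decoded elsewhere via decodeAudioData
  position: { x: number; y: number; z: number }
): void {
  const source = audioCtx.createBufferSource();
  source.buffer = clickBuffer;

  // HRTF panning anchors the click to the button itself, standing in for
  // the positional awareness that haptics would normally provide.
  const panner = new PannerNode(audioCtx, {
    panningModel: "HRTF",
    distanceModel: "inverse",
    positionX: position.x,
    positionY: position.y,
    positionZ: position.z,
  });

  source.connect(panner).connect(audioCtx.destination);
  source.start();
}
```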
Audio is of course important, but it must be paired with visual cues that relay information about the interaction itself. This comes up more often in AR, but very frequently you'll find user frustration pools around the inconsistency of how the device is perceiving their hands. The first key is to showcase some type of visual information that the hands are being tracked. This could include a skeleton, an actual hand model, a point on the fingertip, particle effects dispersing - anything that clues your user into the idea that the device is registering their hands. In this way, you're affirming a form of neutral or passive interaction where players aren't actively performing any action, but they are aware of themselves as a form of input.
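Here's a minimal sketch of that kind of passive feedback with WebXR hand joints. The drawMarker function is a hypothetical stand-in for whatever your renderer provides (an instanced sphere, a sprite, a particle emitter):

```typescript
// A sketch of passive "your hands are being tracked" feedback: each frame,
// draw a small marker at every tracked joint of every tracked hand.
declare function drawMarker(pos: DOMPointReadOnly, radius: number): void; // hypothetical renderer hook

function renderHandFeedback(
  frame: XRFrame,
  session: XRSession,
  refSpace: XRReferenceSpace
): void {
  for (const inputSource of session.inputSources) {
    const hand = inputSource.hand;
    if (!hand) continue; // a controller, or hand tracking unavailable

    for (const jointSpace of hand.values()) {
      const pose = frame.getJointPose(jointSpace, refSpace);
      if (!pose) continue; // this joint isn't tracked this frame

      // Using the reported joint radius keeps markers proportional to the
      // user's actual hand, which reads as "the device really sees me."
      drawMarker(pose.transform.position, pose.radius ?? 0.008);
    }
  }
}
```

Even this crude dot cloud answers the user's first question ("does it see my hands?") before they ever try an interaction.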
The biggest of props to the designers at Ultraleap (Leap Motion) for absolutely everything they've ever put out, but especially these blog posts! They were doing this in 2016 & 2017! My god.
What Makes a Spoon a Spoon? Form and Function in VR Industrial Design
Building Blocks: A Deep Dive Into Leap Motion Interactive Design
Design Sprints at Leap Motion: A Playground of 3D User Interfaces
Interaction Engine 1.0: Object Interactions, UI Toolkit, Handheld Controller Support, and More
Beyond Flatland: User Interface Design for VR
Obviously I just called out Keiichi Matsuda, Barrett Fox, and Martin Schubert for their amazing work above (please let me know if I missed more Leap Motion designers because I WILL name them all!). I also want to give credit to Luca Mefisto and Jonathan Ravasz for their work as well!