Interaction Design

Interaction Design from Input Methods

Objects, Planes, and Buttons

In VR/AR there is no restriction to interacting only with objects close to you; objects beyond arm’s reach can still be observed and acted upon. This distinction is described as near-field interfaces versus far-field interfaces. For succinctness I will refer to them as “buttons,” but they could be any object, not just the stereotypical flat button.

Near-Field Buttons

Near-field interfaces are those you want your user to interact with directly, so they should sit within arm’s reach. As discussed previously, these interfaces shouldn’t be closer than 0.5 meters (~1.6 ft) or further than 2 meters (~6.5 ft). The most important concept to remember for near-field buttons is that they should always remain within the user’s reach; in VR/AR environments, however, users can simply walk away, putting them out of reach.
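As a rough illustration, a per-frame check along these lines (a minimal TypeScript sketch with an illustrative Vec3 type and distances in meters) can flag when a near-field button has drifted outside that 0.5–2 meter band, so the experience can re-anchor it or hint at it:

```typescript
// Minimal sketch: verify a near-field button stays within comfortable reach.
// Vec3 and the position values are illustrative; units are meters.
type Vec3 = { x: number; y: number; z: number };

const NEAR_FIELD_MIN = 0.5; // ~1.6 ft
const NEAR_FIELD_MAX = 2.0; // ~6.5 ft

function distance(a: Vec3, b: Vec3): number {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// Returns true while the button sits in the near-field band; callers might
// re-anchor the button or fade in a hint when this turns false.
function isWithinNearField(userPosition: Vec3, buttonPosition: Vec3): boolean {
  const d = distance(userPosition, buttonPosition);
  return d >= NEAR_FIELD_MIN && d <= NEAR_FIELD_MAX;
}
```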

Far-Field Buttons

Far-field interfaces are at a significant distance from your user. They should be far enough away that your user can’t interact with them directly. As discussed previously, these interfaces shouldn’t exceed 20 meters (~65.6 ft). The most important point to remember for far-field buttons is that they work best when kept at a distance from the user, because they can accommodate more content within the field of view. However, in VR/AR environments it is possible for users to simply walk closer, bringing them within reach.
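A similar sketch, reusing the Vec3 type above, shows one way to keep a far-field panel along the user’s forward direction while respecting the 20 meter guideline; the forward vector is assumed to be normalized, and the names are illustrative:

```typescript
// Minimal sketch: place a far-field panel along the user's forward direction,
// clamped so it never exceeds the ~20 m guideline.
const FAR_FIELD_MAX = 20.0; // meters (~65.6 ft)

function placeFarFieldPanel(
  userPosition: Vec3,
  forward: Vec3, // assumed to be a normalized gaze/forward vector
  desiredDistance: number
): Vec3 {
  const d = Math.min(desiredDistance, FAR_FIELD_MAX);
  return {
    x: userPosition.x + forward.x * d,
    y: userPosition.y + forward.y * d,
    z: userPosition.z + forward.z * d,
  };
}
```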

Far-Field Interactions: Raycasts and Reticles

In VR/AR experiences, especially those that rely heavily on far-field objects for interaction, the ability to select and interact with objects is important and depends on the input chosen. For hand, gaze, or controller input, raycasts and reticles are often used to select distant objects.

Reticles and raycasts serve the same function: indicating to the user the precise position of an input interaction. A reticle shows the player the specific point where they are gazing, using a circle locked to head position. A raycast shows where a hand or controller is pointing, drawn relative to that input’s position. Both can be easily analogized to digital cursors: just as with a cursor, users move their chosen input through space and select interactable objects like buttons to achieve the desired state.
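To make this concrete, here is a minimal, engine-agnostic sketch of far-field selection: a ray cast from the active input (head, hand, or controller) against sphere-approximated interactables, with the reticle placed at the nearest hit, or at a default distance when nothing is hit. The Interactable shape, the default distance, and the assumption that the direction vector is normalized are all illustrative.

```typescript
// Minimal sketch: ray-based selection with reticle placement.
// Reuses the Vec3 type from the earlier sketch; direction must be normalized.
interface Interactable { center: Vec3; radius: number; onSelect: () => void }

const DEFAULT_RETICLE_DISTANCE = 10.0; // meters, used when nothing is hit

function castRay(origin: Vec3, direction: Vec3, targets: Interactable[]) {
  let closest: { target: Interactable; t: number } | null = null;
  for (const target of targets) {
    // Ray-sphere intersection: solve |origin + t*direction - center| = radius.
    const oc = {
      x: origin.x - target.center.x,
      y: origin.y - target.center.y,
      z: origin.z - target.center.z,
    };
    const b = oc.x * direction.x + oc.y * direction.y + oc.z * direction.z;
    const c = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - target.radius * target.radius;
    const disc = b * b - c;
    if (disc < 0) continue;          // ray misses this sphere
    const t = -b - Math.sqrt(disc);  // nearest intersection along the ray
    if (t > 0 && (!closest || t < closest.t)) closest = { target, t };
  }
  const t = closest ? closest.t : DEFAULT_RETICLE_DISTANCE;
  const reticlePosition: Vec3 = {
    x: origin.x + direction.x * t,
    y: origin.y + direction.y * t,
    z: origin.z + direction.z * t,
  };
  return { hit: closest?.target ?? null, reticlePosition };
}
```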

For reticle systems, object selection is still heavily impacted by angular distance: the farther away an object is, the smaller its angular size, and the more head precision is required to keep the reticle over it.
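One way to express that constraint, reusing the Vec3 and Interactable types from the sketch above, is to compare the angular distance between the gaze direction and the target against the target’s angular radius; the gaze direction is assumed to be normalized:

```typescript
// Minimal sketch: gaze selection by angular distance. A target counts as
// "under the reticle" when the angle between the gaze direction and the
// direction to the target is smaller than the target's angular radius.
function angularDistance(gazeDirection: Vec3, headPosition: Vec3, targetCenter: Vec3): number {
  const to = {
    x: targetCenter.x - headPosition.x,
    y: targetCenter.y - headPosition.y,
    z: targetCenter.z - headPosition.z,
  };
  const len = Math.sqrt(to.x * to.x + to.y * to.y + to.z * to.z);
  const dot = (gazeDirection.x * to.x + gazeDirection.y * to.y + gazeDirection.z * to.z) / len;
  return Math.acos(Math.min(1, Math.max(-1, dot))); // radians
}

function isUnderReticle(gazeDirection: Vec3, headPosition: Vec3, target: Interactable): boolean {
  const dx = target.center.x - headPosition.x;
  const dy = target.center.y - headPosition.y;
  const dz = target.center.z - headPosition.z;
  const dist = Math.sqrt(dx * dx + dy * dy + dz * dz);
  // Angular radius shrinks as the target gets farther away.
  const angularRadius = Math.asin(Math.min(1, target.radius / dist));
  return angularDistance(gazeDirection, headPosition, target.center) <= angularRadius;
}
```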

Near-Field Interaction: Digital Hands

In VR/AR experiences, especially those that rely heavily on close objects for interaction, perceived object physics is important and surprisingly dependent upon the input type chosen. When directly manipulating a digital object with one’s own hand, the brain applies a model built from its prior experience with physical objects. When a controller is present, however, that expectation is less strict, because the brain grants flexibility to the controller as an intermediary.

For example, a common controller interaction when a person picks up an object is to snap that object to their hand at a specific alignment. If the object affords gripping in a particular way, like a cup or a gun, it is attached to the hand in the orientation expected for that object’s function.
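A hedged sketch of that snapping behavior, again engine-agnostic, might look like the following; the GripPose and Grabbable types and the hand-rolled quaternion helpers are illustrative assumptions, not a specific engine’s API:

```typescript
// Minimal sketch: controller-style grab that snaps an object to a predefined
// grip pose (e.g., a cup handle or a gun grip). Reuses the Vec3 type above.
type Quaternion = { x: number; y: number; z: number; w: number };

interface GripPose {
  positionOffset: Vec3;       // hand-space offset to the object's origin
  rotationOffset: Quaternion; // object orientation relative to the hand
}

interface Grabbable {
  position: Vec3;
  rotation: Quaternion;
  gripPose?: GripPose; // present when the object affords a specific grip
}

const cross = (a: Vec3, b: Vec3): Vec3 => ({
  x: a.y * b.z - a.z * b.y,
  y: a.z * b.x - a.x * b.z,
  z: a.x * b.y - a.y * b.x,
});

// Rotate a vector by a unit quaternion: v' = v + w*t + u x t, with t = 2*(u x v).
function rotateVec(v: Vec3, q: Quaternion): Vec3 {
  const u: Vec3 = { x: q.x, y: q.y, z: q.z };
  const t = cross(u, v);
  const t2: Vec3 = { x: 2 * t.x, y: 2 * t.y, z: 2 * t.z };
  const ut = cross(u, t2);
  return { x: v.x + q.w * t2.x + ut.x, y: v.y + q.w * t2.y + ut.y, z: v.z + q.w * t2.z + ut.z };
}

// Hamilton product a * b.
function quatMultiply(a: Quaternion, b: Quaternion): Quaternion {
  return {
    w: a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
    x: a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
    y: a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
    z: a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w,
  };
}

function snapToHand(object: Grabbable, handPosition: Vec3, handRotation: Quaternion): void {
  if (object.gripPose) {
    // Align the object so its grip lands in the hand at the expected orientation.
    object.rotation = quatMultiply(handRotation, object.gripPose.rotationOffset);
    const worldOffset = rotateVec(object.gripPose.positionOffset, handRotation);
    object.position = {
      x: handPosition.x + worldOffset.x,
      y: handPosition.y + worldOffset.y,
      z: handPosition.z + worldOffset.z,
    };
  } else {
    // No defined grip: simply attach the object at the hand's current pose.
    object.position = { ...handPosition };
    object.rotation = { ...handRotation };
  }
}
```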

However, expectations are very different for object interactions performed with a user’s bare hands. Because the brain is extremely good at proprioception, as well as at registering tactile feedback, snapping an object to the hand by approximating its location breaks immersion almost immediately. Users have an incredibly close relational fidelity to their hands, so testing early and often with a wide audience is important for understanding which object interactions feel right and which do not.

Avoiding Shallow Object Interactions

When considering user input for an object interaction, we want to avoid objects that act as “unilateral taskers” with shallow object interactions. For example, most games have a key-and-keyhole interaction. In a fully immersive world, however, players will want to use objects for an immense variety of actions: they might discover a screwdriver and put that into the keyhole, or use a gun to shoot the lock open. Because of this, users will attempt to interact with any object presented to them.

Because users look to their environment for acknowledgement, especially after providing input, it is important that the system responds to every interaction, whether the response is positive, negative, or neutral. If a user attempts the wrong interaction and the environment doesn’t respond in some way, they will assume the experience is malfunctioning because they aren’t receiving any feedback.
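As a small illustration of that principle, an interaction handler can be written so that it always returns some feedback, even when no specific behavior is defined; the object IDs, valid-target table, and feedback categories here are purely illustrative:

```typescript
// Minimal sketch: every attempted interaction produces some response,
// even when it isn't the "correct" one.
type Feedback = "positive" | "negative" | "neutral";

function respondToInteraction(
  objectId: string,                       // e.g., "key", "screwdriver"
  usedOn: string,                         // e.g., "keyhole"
  validTargets: Map<string, Set<string>>  // object -> targets it meaningfully affects
): Feedback {
  const targets = validTargets.get(objectId);
  if (targets?.has(usedOn)) {
    return "positive"; // e.g., the key turns: play the unlock sound, open the door
  }
  if (targets) {
    return "negative"; // wrong target: rattle the lock, play a dull clunk
  }
  return "neutral";    // no defined behavior: still nudge, highlight, or make a sound
}
```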

Audio & Haptic Cues

Audio and haptics are among the most overlooked and underutilized tools, yet also among the most valuable, as they help maintain immersion while giving substance to digital objects. Spatialized sound and haptic cues help users understand a digital object’s position without needing to keep it in their line of sight. Use audio cues to remind users when content is outside the field of view, to notify them when content is rendered behind them, and to help them track characters that may be constantly moving. Use haptic cues to notify users when they’re hovering over a digital object, when a digital object makes an impact, or when an important interaction is about to take place.
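As one possible shape for this, the sketch below reuses angularDistance and Interactable from the earlier sketches and takes the audio and haptic hooks as injected functions, since the actual APIs vary by platform; the ~90° field of view and the pulse intensity are illustrative assumptions:

```typescript
// Minimal sketch: a spatialized audio cue when an object leaves the field of
// view, plus a light haptic pulse while the input hovers over it.
const HALF_FOV_RADIANS = Math.PI / 4; // assumes a ~90° horizontal field of view

function isOutsideFieldOfView(gazeDirection: Vec3, headPosition: Vec3, objectCenter: Vec3): boolean {
  return angularDistance(gazeDirection, headPosition, objectCenter) > HALF_FOV_RADIANS;
}

function updateCues(
  gazeDirection: Vec3,
  headPosition: Vec3,
  object: Interactable & { hovered: boolean },
  playSpatialCue: (at: Vec3) => void,   // platform-specific spatial audio hook
  pulseHaptics: (intensity: number) => void // platform-specific haptics hook
): void {
  if (isOutsideFieldOfView(gazeDirection, headPosition, object.center)) {
    playSpatialCue(object.center); // spatialized sound tells the user where to look
  }
  if (object.hovered) {
    pulseHaptics(0.2); // subtle confirmation that the input is over the object
  }
}
```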
