Challenges of VR Technology
The core concept of Virtual Reality is that the technology surrounds the user with an entirely new environment, complete with a foreground, midground, and background. Because this environment is unfamiliar, it is important to constrain your users at first and expose them to it gradually. The VR space is also alive, however, and users will naturally want to explore it as they become more comfortable; using that curiosity to your advantage is essential to guiding users through the environment and through the tasks they're trying to complete. The biggest challenge in designing for VR is providing your users with enough grounding in the underlying principles of physics (action, reaction, and consequences) while expanding upon that environment in new and experimental ways and minimizing user discomfort. Breaking this down, some patterns appear in the biggest challenges for VR designers today.
There are instances where an action by your user and the reaction from your environment are misaligned. Because immersion depends so heavily on consistent sensory input, this misalignment can cause serious discomfort for users, both immediately and over time.
Because the user cannot see the physical world around them, it's important that we allow users to define their play area and restrict their movement to inside that predefined space. Environmental construction should scale from smaller to larger play areas (typical sizes range from 1 x 1 meters to 3 x 3 meters) so the experience remains engaging regardless of the space available.
Users should also be warned when they get dangerously close to the bounds of the designated safe space. This can be done in several ways, including environmental changes such as displaying grid lines or a dense fog, popup notifications and arrows pointing back toward the safe space, and audio cues such as dimming or notification bells.
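The warning described above can be sketched as a simple proximity check. The following is a minimal illustration, not any particular engine's API: the function name, rectangular play-area model, and warning margin are all assumptions for this example.

```python
def boundary_warning_intensity(pos, half_width, half_depth, warn_margin=0.4):
    """Return a 0.0-1.0 intensity for a grid/fog warning overlay.

    `pos` is the user's (x, z) position relative to the center of a
    rectangular play area of size (2*half_width, 2*half_depth) meters.
    Intensity stays 0 while the user is comfortably inside the bounds
    and ramps to 1 as they reach (or pass) the edge.
    """
    # Distance to the nearest boundary along each axis.
    dist_x = half_width - abs(pos[0])
    dist_z = half_depth - abs(pos[1])
    nearest = min(dist_x, dist_z)
    if nearest >= warn_margin:
        return 0.0
    # Linear ramp from 0 at the margin to 1 at the edge.
    return min(1.0, 1.0 - nearest / warn_margin)
```

In practice the returned intensity would drive the opacity of the grid lines or fog, and could also scale the audio dimming mentioned above.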
Taking user needs into account, a player may prefer a standing or a seated experience, as oftentimes they consume content from a chair or couch. Because of this constraint, the player's accessible Field of View (FOV) may be limited, making engagement with content that is not directly in front of them difficult or impossible.
Provide users with controls to move distances well beyond the bounds of their physical play area. Virtual motion, otherwise known as locomotion, is incredibly complex, and there are many implementations and considerations we as designers need to weigh before committing users to virtual motion.
One of the biggest VR-specific discomforts to tackle is the nausea and dizziness caused by unnatural virtual movement that doesn't align with the user's senses. This can happen at any time, even without deliberate user movement, since virtual environments can defy the physics of reality and move on their own. Mitigation therefore has to happen both in app-level environment design and in the design of how a user moves around that virtual environment.
Vection is the main cause of motion sickness: a misalignment between visual and vestibular information that leads to discomfort. It occurs when the user's vision tells them they are moving through space while their body perceives that they are stationary. This discomfort is particular to VR, and vection can occur even without locomotion: when an interface or object takes up the majority of the user's field of view and starts to move, the user may feel as if they, rather than the object, are moving through space.
Horizon Level
One way to ease vection is to give the user a horizon line, grounding them in an understanding of their position relative to other objects and the space around them. If a horizon line exists, it is paramount to keep it at a fixed angle that matches the real-world horizon of the user's physical floor.
Playing with player perspective is fine, as is removing the visible horizon altogether, because the user will set expectations for gravity and motion by observing objects. If the horizon does not exist, make sure the user can align their sense of position to digital objects that are not themselves constantly moving and rotating. Otherwise, the effect is similar to a continually moving horizon line, and your user will get disoriented.
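Keeping the horizon fixed can be sketched as stripping pitch and roll from whatever moving frame the user rides in (a vehicle, a ship). The Euler-angle representation and function name below are assumptions chosen for clarity, not a specific engine's API.

```python
def horizon_orientation(vehicle_yaw_pitch_roll):
    """Orientation (yaw, pitch, roll) in degrees for a world-locked horizon.

    The horizon may follow the vehicle's heading (yaw) so it stays in
    view, but its pitch and roll are pinned to 0 so it always matches
    the real-world horizon of the user's physical floor, even while
    the vehicle banks or climbs.
    """
    yaw, _pitch, _roll = vehicle_yaw_pitch_roll
    return (yaw, 0.0, 0.0)
```

The point of the sketch is the separation of concerns: the vehicle can rotate freely, but the reference the user grounds themselves against never tilts.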
In traditional 2D games, the player’s perspective is referred to as “the camera”. However, in VR the camera is the user's head and eyes. When the users themselves are not moving, or they have not given consent via interaction to be moved, moving the digital environment breaks immersion and leads to nausea.
If you do have to move your user, teleport them to the desired location instead of moving the camera; that way they see the new environment without forced movement. A narrative device such as an elevator or a blackout can explain to the user why this occurred.
The best way to mitigate vection in user-driven locomotion is to provide several options and let the user select the method of movement they prefer. As previously stated, each user has individual needs, and those more or less prone to simulator sickness will require different options.
An easy way to avoid vection is to avoid acceleration altogether. One of the more interesting properties of the vestibular system is that it detects only acceleration, not constant speed (velocity). This is why, in a plane, you can't tell how fast you are going in the air, but you can feel the takeoff and landing. In any scenario where users move manually, maintain a consistent speed, or at least discourage repeated bursts of positive (speeding up) or negative (slowing down) acceleration in short periods of time.
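One way to honor this in code is to ramp the user quickly to a constant cruise speed and cap how fast speed may change, so acceleration happens in one short, deliberate burst. This is a minimal per-frame sketch under assumed names and units (meters per second, seconds), not any engine's built-in API.

```python
def step_speed(current, target, max_accel, dt):
    """Move `current` speed toward `target` while capping acceleration.

    `max_accel` bounds the change in speed per second, so the user
    reaches a constant cruise speed in one brief ramp and then holds
    it; the vestibular system only senses the short acceleration phase.
    """
    max_delta = max_accel * dt          # largest allowed change this frame
    delta = target - current
    if delta > max_delta:
        delta = max_delta
    elif delta < -max_delta:
        delta = -max_delta
    return current + delta
```

Called once per frame, this converges on the target speed and then returns it unchanged, which keeps the velocity perfectly flat during cruise.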
Tunnelling reduces motion cues around the periphery of a user's eyesight by concentrating content in the central vision (otherwise known as the fovea). The human fovea perceives more detail, but peripheral vision is much better at detecting subtle movement. A common approach to tunnelling is placing a vignette or soft blur around the user's peripheral vision during movement and fading it out once movement has stopped. Another technique is to obscure the periphery entirely with an in-game relevant object, such as the edges of a cockpit. This works because users are prevented from perceiving motion where their eyesight is most sensitive to it.
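A speed-driven vignette of this kind can be sketched as a simple mapping from locomotion speed to overlay strength. The function name, speed units, and the 0.8 maximum strength are assumptions for illustration; real implementations typically also smooth this value over time.

```python
def vignette_strength(speed, max_speed, floor=0.0, ceiling=0.8):
    """Map locomotion speed to a peripheral vignette strength (0.0-1.0).

    Returns `floor` when stationary (full field of view) and ramps
    linearly toward `ceiling` at `max_speed`, masking the periphery
    exactly when motion cues would be strongest.
    """
    if max_speed <= 0:
        return floor
    t = min(1.0, max(0.0, speed / max_speed))
    return floor + t * (ceiling - floor)
```

As the text notes, the same value fading back to zero once movement stops restores the full field of view.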
Blinks and Snap Turns
Acceleration does not just happen in a forward or side-to-side motion, but also rotationally. One way to mitigate user discomfort is with "blinks" or "snap turns". Because humans blink so frequently, the brain easily adjusts to minor, quick interruptions of visual content. This approach essentially mimics that behavior by using a quick cut to black between forward and backward movements or rotations (averaging about 10 degrees).
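The rotational half of this technique reduces to replacing smooth turning with discrete heading jumps, hidden by the brief cut to black. This sketch uses the 10-degree figure from the text as its default; the function name and degree-based heading are assumptions for the example.

```python
def snap_turn(heading_deg, direction, increment_deg=10.0):
    """Rotate a heading by one fixed snap increment.

    `direction` is +1 (turn right) or -1 (turn left). Returning a new
    discrete heading, rather than animating toward it, means the user
    never experiences rotational acceleration; the accompanying quick
    fade to black mimics a blink and masks the jump.
    """
    return (heading_deg + direction * increment_deg) % 360.0
```

The modulo keeps the heading in [0, 360), so repeated snaps across the wrap-around behave correctly.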
Another quick interruption of visual content that mimics blinking behavior is teleportation. Teleportation can be used to move the user within the same environment or between different ones. This is often done on completion of a task or at the user's designation, where all content fades to black and the user quickly (often "magically") appears in a different location.
Teleportation, however, can cause a significant drop in immersion and presence, as users have to acclimate to their new surroundings. Even within the same environment, it can be disorienting if users are not given enough context and time to adjust. Teleportation should therefore be used sparingly, as it increases the cognitive load on the user to adapt to their new surroundings. One technique to prevent this disorientation is a warping effect, which preserves a familiar sense of the virtual environment during the transition. This does not work in every case, such as environments lacking unique landmarks or teleports between two completely different locations.
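The fade-based teleport described above can be summarized as a short keyframe timeline: fade out, reposition while the screen is black, fade back in. The function name, tuple-based keyframes, and 0.3-second fade are assumptions for this sketch, not a standard API.

```python
def teleport_sequence(start, destination, fade_time=0.3):
    """Build (time, position, screen_alpha) keyframes for a fade teleport.

    screen_alpha 0.0 is fully visible, 1.0 is fully black. The user is
    repositioned only while the screen is black, so they never see
    forced camera motion between the two locations.
    """
    return [
        (0.0,           start,       0.0),  # fully visible at the origin
        (fade_time,     start,       1.0),  # faded to black, still in place
        (fade_time,     destination, 1.0),  # instant move while black
        (2 * fade_time, destination, 0.0),  # fade back in at the target
    ]
```

Stretching the final fade-in, or holding it briefly, is one way to give users the extra context and adjustment time the text calls for.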