Input Method

With current human-computer interaction on digital screens, there is a standard set of inputs (mouse and keyboard) that applies across a broad spectrum of applications, as well as conventions like the heads-up display (HUD) in gaming. Input and interaction become far more complex, however, when designing for embodied experiences. Because VR/AR is still an emerging technology, principles and standards for input and interaction design are still being formed. Being aware of the different input modalities being explored in this medium, as well as the interaction design paradigms that persist, is key to designing well-thought-out games.

Types of Input

There are many types of input available for a VR/AR experience, and the options vary depending on the device chosen. The most popular include, but are not limited to: tracked controller input, hand input, gaze/head input, voice and dictation, mobile devices, and non-tracked peripherals such as keyboards, handheld game controllers, and even other physical objects.
The input you select will have an immense effect on the interaction design of the objects within your experience. It is very important, therefore, to solidify your input choice very early in the development process.
It is also important to note that even after a specific input has been chosen, different types of interaction design and input schemas can still be created, because different applications prioritize their content differently.
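One common way to keep an experience flexible across input choices is to separate raw device signals from app-level actions behind a binding layer. The sketch below is a minimal, hypothetical illustration of that idea; the modality names, signal names, and the `resolve` function are all illustrative assumptions, not any real VR SDK's API.

```typescript
// Hypothetical input-abstraction sketch: map raw signals from different
// modalities onto the same app-level actions, so interaction logic works
// whether the user has tracked controllers, hand tracking, or gaze input.

type Action = "select" | "grab" | "menu";

// A raw event as a (modality, signal) pair, e.g. a controller trigger pull.
interface RawInput {
  modality: "controller" | "hand" | "gaze";
  signal: string; // e.g. "trigger", "pinch", "dwell"
}

// Each modality gets its own binding table from signals to actions.
const bindings: Record<RawInput["modality"], Record<string, Action>> = {
  controller: { trigger: "select", grip: "grab", menuButton: "menu" },
  hand: { pinch: "select", fist: "grab", palmUp: "menu" },
  gaze: { dwell: "select" }, // gaze has no natural grab gesture
};

// Resolve a raw event to an app action, or null if unbound.
function resolve(input: RawInput): Action | null {
  return bindings[input.modality][input.signal] ?? null;
}

console.log(resolve({ modality: "hand", signal: "pinch" })); // "select"
console.log(resolve({ modality: "gaze", signal: "pinch" })); // null
```

Swapping the binding tables lets the same interaction code support a different input schema, which is one reason two experiences with identical hardware can still feel very different.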
Because of the vast diversity in interaction design systems, many users will be unfamiliar and even uncomfortable with a given input modality. It is important, then, to understand the ergonomics of human movement and take them into consideration when designing spatial applications.

Teaching the Input Schema

Every VR/AR experience has a different input and interaction paradigm. To help users understand the input schema, use labels that map directly to the actual controller buttons or to different parts of the hand. Consider also using hand or controller model representations to convey the input to your user literally.
Representations don't have to be exact replicas of the controller or hand; however, the more the design deviates from known standards, the more we will have to onboard users to the interactions taking place in our virtual environment.
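Button labels that map to the controller model can be generated directly from the input schema, so the onboarding UI never drifts out of sync with the actual bindings. The sketch below assumes a hypothetical `ButtonLabel` shape and `buildLabels` helper; neither comes from a real VR SDK.

```typescript
// Hypothetical sketch: derive floating labels for a controller model
// from the input schema itself, so each label points at the physical
// button that actually triggers the action.

interface ButtonLabel {
  buttonId: string; // which part of the controller model to attach to
  action: string;   // what the button does in this experience
  text: string;     // label text shown floating next to the button
}

// schema maps controller button ids to action names, e.g. trigger -> Select.
function buildLabels(schema: Record<string, string>): ButtonLabel[] {
  return Object.entries(schema).map(([buttonId, action]) => ({
    buttonId,
    action,
    text: `${action} (${buttonId})`,
  }));
}

const labels = buildLabels({ trigger: "Select", grip: "Grab", thumbstick: "Teleport" });
console.log(labels[0].text); // "Select (trigger)"
```

Because the labels are derived from the same table that drives the bindings, changing the schema automatically updates what the user is taught.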
When showcasing interaction schemas to our users, we want to deprioritize negative or permanent interactions and make them harder to achieve. When laying out content that does serve a purpose, place similar objects close to one another and put them in parts of the environment that afford the intended type of interaction.
Eating a burrito to exit Job Simulator is a great example of why making an action deliberately difficult to perform isn't always a bad thing.
Because our users look to the environment for acknowledgement of a given input, it is important that the system responds to every interaction, be it positive, negative, or neutral. If a user performs the wrong interaction and the environment doesn't respond in some way, they will assume the system is malfunctioning because they aren't receiving any feedback.
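One way to enforce the rule that every interaction gets a response is to route all outcomes, including failures and no-ops, through a single feedback function with no silent branch. The sketch below is a minimal illustration under assumed names; the `Outcome` type, `respond` function, and cue strings are all hypothetical.

```typescript
// Hypothetical sketch: every interaction outcome, positive, negative,
// or neutral, produces some cue, so the user never wonders whether the
// system registered their input.

type Outcome = "positive" | "negative" | "neutral";

interface Feedback {
  outcome: Outcome;
  cue: string; // e.g. a sound, haptic pulse, or visual highlight
}

// actionSucceeded: true = correct interaction, false = failed attempt,
// null = input that maps to nothing in this context.
function respond(actionSucceeded: boolean | null): Feedback {
  if (actionSucceeded === true) {
    return { outcome: "positive", cue: "chime + object highlight" };
  }
  if (actionSucceeded === false) {
    return { outcome: "negative", cue: "buzz + shake animation" };
  }
  // Even a no-op interaction is acknowledged rather than ignored.
  return { outcome: "neutral", cue: "soft click" };
}
```

The key design choice is that there is no path through `respond` that returns nothing: silence is never a valid response to user input.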