Interface Layout and Location

Context-Dependent Interfaces

Depending on the variability of objects within your experience, your interfaces can take on a range of contexts for your user. For interfaces in VR/AR, we can observe a pattern for location that operates based on the coordinate space the interface is locked to (Anandapadmanaban 2019).

Content-Locking to World Space: world-space interfaces are placed in the virtual environment and remain fixed in place while users move freely around them.

Content-Locking to Screen Space: screen-space interfaces are locked to the display in an arc around the user. Because the display is locked to the user's head, screen-space interfaces are also locked to the user's field of view. This includes content directly in front of the user, but also within their periphery.

Content-Locking to Avatar Space: avatar-space interfaces are locked to the user themselves, often to their arms or control systems. They are always available to the user and can be pulled up with ease.

Content-Locking to Object Space: object-space interfaces are locked to specific objects, providing targeted information and interactivity relevant to that object. The object can be digital or physical, and its goal is to support the user's experience.
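Under the hood, each locking mode is just a choice of which reference frame the interface follows every frame. A minimal sketch, assuming a toy scene graph where each node has only a world-space `position` (all names here are illustrative, not from any real engine):

```python
from dataclasses import dataclass

@dataclass
class Node:
    """A toy scene-graph node holding a world-space position."""
    position: tuple = (0.0, 0.0, 0.0)

def add(a, b):
    """Component-wise vector addition for position tuples."""
    return tuple(x + y for x, y in zip(a, b))

def update_ui(ui, mode, head, hand, obj, offset=(0.0, 0.0, 1.0)):
    """Re-anchor a UI node each frame according to its locking mode."""
    if mode == "world":
        pass                                        # fixed in the environment; never moves
    elif mode == "screen":
        ui.position = add(head.position, offset)    # follows the head / field of view
    elif mode == "avatar":
        ui.position = add(hand.position, offset)    # rides on the user's body or controller
    elif mode == "object":
        ui.position = add(obj.position, offset)     # pinned near the object it describes
```

In a real engine the screen-space case would also inherit head rotation, and the usual implementation is simply parenting the UI to the head, hand, or object transform rather than copying positions per frame.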

Thinking Beyond the Surface

Spatialized 3D content can now exist anywhere in the 360 degrees around the user; however, there are practical limits to the space it should occupy. As with traditional interface design standards, we can define patterns for where interfaces and content can live by looking at the limits of users' physical and cognitive abilities.

Zones of Spatial Design:

Headspace “no-no” Zone

The zone in which content is too close to users for them to be comfortable. This is often mitigated on headset devices by a "clipping plane," a plane in front of the user's head that prevents content from rendering closer than a set distance.

Workspace Zone

The zone in which users find it comfortable to interact with content.

Content Zone

The zone in which people can see content comfortably without straining.

Periphery Zone

The zone in which users can see content at the periphery of their comfortable line of sight.

Curiosity Zone

The zone in which content must be discovered by moving around (for example, content behind the user).
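The zones above can be thought of as a simple classification over a content item's distance and its angle off the user's forward gaze. A sketch of that idea follows; the numeric thresholds are illustrative assumptions for this example, not published standards:

```python
def classify_zone(distance_m, azimuth_deg):
    """Classify a content position into a spatial-design zone.

    distance_m: meters from the user's head (assumed threshold values).
    azimuth_deg: degrees off the forward gaze direction.
    """
    if distance_m < 0.5:
        return "no-no"        # headspace: too close for comfort
    if abs(azimuth_deg) > 90:
        return "curiosity"    # behind the user; must be discovered
    if abs(azimuth_deg) > 60:
        return "periphery"    # edge of the comfortable line of sight
    if distance_m <= 1.0:
        return "workspace"    # within comfortable reach for interaction
    return "content"          # viewable comfortably without straining
```

A placement tool could run this check over candidate UI anchor points and warn when important content lands in the no-no or curiosity zones.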

Depth and Rotation of Spatial Design

Echoing an environment's foreground, midground, and background, spatial design involves both depth and rotation. Human perceptions of individual, interpersonal, social, and public space can also inform the placement of objects that sit outside the user's direct interaction space.

Depth Cues

Human acuity for judging absolute depth can be poor; however, we are distinctly aware of objects sitting at differing depths. To convince the eye of differing depths, digital reconstructions must mimic the cues of real-world objects that exist in an atmosphere.

The depth of nearby objects can be difficult to judge because it relies on the eyes' own musculature. The eyes rotate inward or outward so their lines of sight intersect at a particular depth plane; this flexing is called vergence. The lenses inside the eyes then adjust their shape to bring that depth plane into focus; this focusing process is called accommodation.

Vergence and accommodation are coupled with one another to produce our sense of depth and proximity. In VR/AR, the display forces the eyes to accommodate to a fixed focal plane even as they converge at varying depths; this mismatch, called the vergence-accommodation conflict, often causes fatigue and discomfort over long periods of use (Hoffman, Girshick, Akeley, and Banks 2008). Because of this, we have to design content smartly. Objects in the distance should lose contrast and appear fuzzy; adding visual noise, gradients, and shadows to distant objects is important to convey materiality. Objects that are close should appear sharper, at full color and contrast, with full and apparent shadows to ground them believably in space.
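The "lose contrast with distance" guideline above is essentially aerial perspective, and it can be sketched as a simple falloff. This is a minimal illustration, assuming an exponential falloff; the `falloff` constant and haze color are illustrative choices, not engine defaults:

```python
import math

def depth_contrast(distance_m, falloff=0.05):
    """Fraction of full contrast/saturation to keep at a given distance (1.0 = near, sharp)."""
    return math.exp(-falloff * distance_m)

def apply_haze(rgb, distance_m, haze=(0.7, 0.75, 0.8)):
    """Blend an object's color toward a haze color as its distance grows."""
    k = depth_contrast(distance_m)
    return tuple(k * c + (1 - k) * h for c, h in zip(rgb, haze))
```

With these values, a nearby object keeps essentially full color, while at 20 meters (the far edge of the readable range discussed below) roughly two-thirds of its contrast has faded toward the haze.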

Angular Size for Layout & Elements

We are no longer designing just for screens, and a concept unique to VR/AR interface design is angular size. Most interfaces are confined to the fixed distance and dimensions of a screen; in VR/AR, however, your UI can occupy any location or distance around the user. This can be incredibly problematic for both reading and interacting with that content. The construct of angular size boils down to this: objects appear smaller as they get farther away and larger as they come close. A key factor in understanding what "appropriate dimensions" means for VR/AR is that we, as designers, are no longer working in just height and width but also in distance from our users. Interfaces that are too close can cause eye strain, but interfaces that are too far can easily become unreadable.

Angular size measures geometry at a distance in degrees of the user's field of view, a slightly more standardized way of understanding user interfaces over distance. Any content in the environment that we want users to gather information from should be between 0.5 meters (~1.5 ft) and 20 meters (~65.5 ft) away. The eyes typically focus on content at a distance of 2 meters (6.5 ft), so try to keep near-field interactable content within that distance. However, the recommended angular size still depends on the pixel density of the VR/AR device's display. The general recommendation for headsets at ~13 pixels per degree is text that subtends ~1.5 degrees (about 20 px tall on most displays).
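The angular-size relationship is plain trigonometry: an object of height h at distance d subtends an angle of 2·atan(h / 2d). A minimal sketch of the two directions of that conversion (function names are my own, not from any library):

```python
import math

def angular_size_deg(height_m, distance_m):
    """Visual angle (degrees) subtended by an object of a given height at a distance."""
    return math.degrees(2 * math.atan(height_m / (2 * distance_m)))

def required_height_m(target_deg, distance_m):
    """World-space height needed to subtend a target visual angle at a distance."""
    return 2 * distance_m * math.tan(math.radians(target_deg) / 2)

# Text meant to subtend ~1.5 degrees at the 2 m comfort distance:
h = required_height_m(1.5, 2.0)
print(f"{h * 100:.1f} cm")  # roughly 5.2 cm tall in world space
```

The same functions let you check whether existing content stays readable when the user backs away: doubling the distance roughly halves the angular size, so a panel sized for 2 meters will fall below the 1.5-degree text recommendation at 4 meters unless it is scaled up.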

I want to specifically call out the work of both Mike Alger as well as Eswar Anandapadmanaban - both of whom have really informed the frameworks I use here. Please go check out their work!
