The Design of Virtual and Augmented Reality

Interface Layout and Location


Context-Dependent Interfaces

Depending upon the variability of objects within your experience, your interfaces can take on a range of contexts for your user. For specific types of interfaces in VR/AR, we can observe patterns of location that operate within a specific coordinate space (Anandapadmanaban 2019).

Content-Locking to World Space: world-space interfaces are located in the virtual environment and users can move freely while these interfaces are locked in space.

Content-Locking to Screen Space: screen-space interfaces are locked to the display, typically arranged in an arc around the user. Because the display is locked to the user’s head, these interfaces are also locked to the user’s field of view. This includes content directly in front of the user, but also content within their periphery.

Content-Locking to Avatar Space: avatar-space interfaces are locked to the user themselves, often to their arms or control systems. These are always available to the user, and can be pulled up with ease.

Content-Locking to Object Space: object-space interfaces are locked to specific objects, providing targeted information and interactivity relevant to that object. The object can be digital or physical in nature, and the interface’s goal is to benefit the user’s experience.
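
As a concrete (if simplified) illustration of these four coordinate spaces, the sketch below resolves an interface’s position each frame from its locking mode. The types, the wrist anchor, and the 1.5 m screen-space distance are all hypothetical placeholders; real engines expose their own transform hierarchies.

```typescript
// A minimal sketch of the four content-locking modes described above.
// Vec3, Pose, and the anchors are hypothetical, simplified types; a real
// engine would use its own transform hierarchy and rotate offsets into
// the anchor's frame rather than applying them along world axes.

type Vec3 = { x: number; y: number; z: number };

interface Pose {
  position: Vec3;
  forward: Vec3; // unit vector pointing where the head/object faces
}

const add = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
const scale = (v: Vec3, s: number): Vec3 => ({ x: v.x * s, y: v.y * s, z: v.z * s });

type LockMode = "world" | "screen" | "avatar" | "object";

interface SpatialInterface {
  mode: LockMode;
  offset: Vec3; // world position (world mode) or offset from the anchor (other modes)
}

// Resolve where an interface should sit this frame, given the relevant anchors.
function resolvePosition(
  ui: SpatialInterface,
  head: Pose,          // headset pose
  wrist: Pose,         // avatar-space anchor, e.g. the user's wrist
  target: Pose | null  // the object this interface annotates, if any
): Vec3 {
  switch (ui.mode) {
    case "world":
      // Locked in the environment: stays put while the user moves freely.
      return ui.offset;
    case "screen":
      // Locked to the display: re-derived from the head pose every frame,
      // so it always stays in the field of view (here ~1.5 m along the gaze).
      return add(head.position, add(scale(head.forward, 1.5), ui.offset));
    case "avatar":
      // Locked to the user's body, e.g. a wrist or forearm menu.
      return add(wrist.position, ui.offset);
    case "object":
      // Locked to the specific (digital or physical) object it describes.
      return target ? add(target.position, ui.offset) : ui.offset;
  }
}
```

Whichever mode you choose, the resolution happens every frame, which is what keeps screen-space and avatar-space content attached to a moving user while world-space content stays put.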

Thinking Beyond the Surface

Spatialized 3D content can now exist anywhere in the full 360 degrees around the user; however, there are limits to how much of that space it can usefully occupy. Similar to traditional interface design standards, we can define specific patterns for where interfaces and content can live by looking at the limits of our physical and cognitive abilities.

Zones of Spatial Design

Headspace “no-no” Zone

The zone in which content is too close to users for them to be comfortable. This is often mitigated on headset devices by a “clipping plane,” a plane in front of the user’s head that prevents content from rendering closer than a specific distance.

Workspace Zone

The zone in which users find it comfortable to interact with content.

Content Zone

The zone in which people can see content comfortably without straining.

Periphery Zone

The zone in which users can see content at the periphery of their comfortable line of sight.

Curiosity Zone

The zone in which content has to be discovered by moving around (behind you).
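
These zones are defined by comfort rather than hard numbers, but a rough classifier makes the idea concrete. In the sketch below, every distance and angle threshold is an assumed placeholder for illustration, not a value prescribed by this guide.

```typescript
// Rough classifier for the zones above, relative to the user's head.
// Every threshold here is an assumed placeholder for illustration,
// not a value prescribed by this guide. headForward must be a unit vector.

type Vec3 = { x: number; y: number; z: number };

const NO_NO_RADIUS_M = 0.5;          // closer than this: headspace "no-no" zone
const WORKSPACE_RADIUS_M = 1.0;      // roughly arm's reach: comfortable interaction
const CONTENT_HALF_ANGLE_DEG = 30;   // comfortable viewing without straining
const PERIPHERY_HALF_ANGLE_DEG = 55; // edge of the comfortable line of sight

function classifyZone(headPosition: Vec3, headForward: Vec3, content: Vec3): string {
  const dx = content.x - headPosition.x;
  const dy = content.y - headPosition.y;
  const dz = content.z - headPosition.z;
  const distance = Math.hypot(dx, dy, dz);
  if (distance < NO_NO_RADIUS_M) return "headspace no-no zone";

  // Angle between the gaze direction and the direction to the content.
  const dot = (dx * headForward.x + dy * headForward.y + dz * headForward.z) / distance;
  const angleDeg = (Math.acos(Math.min(1, Math.max(-1, dot))) * 180) / Math.PI;

  if (angleDeg > PERIPHERY_HALF_ANGLE_DEG) return "curiosity zone"; // must turn to discover it
  if (angleDeg > CONTENT_HALF_ANGLE_DEG) return "periphery zone";
  if (distance <= WORKSPACE_RADIUS_M) return "workspace zone";
  return "content zone";
}
```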

Depth and Rotation of Spatial Design

Harkening back to the foreground, midground, and background of an environment, spatial design involves both depth and rotation. Human perception of individual, interpersonal, social, and public spaces can also inform the placement of objects that sit outside the user’s direct interaction space.

Depth Cues

The acuity with which humans judge absolute depth can be poor; however, we are distinctly aware of objects being represented at differing depths. In order to convince the human eye of differing depths, digital reconstructions must mimic the cues that real-world objects exhibit in an atmosphere.

The depth of nearby objects can be surprisingly difficult to judge, because the eyes assess nearby space by flexing dynamically. The eyes rotate inwards or outwards so that their lines of sight intersect at a particular depth plane; this rotation is called vergence. The lenses inside the eyes then adjust their shape to bring that depth plane into focus; this process of focusing is called accommodation.

Vergence and accommodation are coupled with one another in judging depth and proximity. Within VR/AR, however, the display sits at a fixed focal distance while vergence changes with the virtual depth of content; this mismatch is called the vergence-accommodation conflict, and it often causes fatigue and discomfort over long periods of time (Hoffman, Girshick, Akeley, and Banks 2008). Because of this, we have to design content smartly. Objects in the distance should lose contrast and appear fuzzy; adding visual noise, gradients, and shadows to distant objects helps convey materiality. Objects that are close should appear sharper, at full color and contrast, and their shadows should be full and apparent to make their place in space believable.
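
To see why the conflict matters most for near content, it helps to look at how quickly vergence demand falls off with distance. The helper below is a small worked example assuming a typical ~63 mm interpupillary distance; it is illustrative and not part of any particular SDK.

```typescript
// Vergence angle: how far the eyes rotate so both lines of sight meet at a
// point `distanceM` metres away. Assumes a typical ~63 mm interpupillary distance.
const IPD_M = 0.063;

function vergenceAngleDeg(distanceM: number): number {
  return (2 * Math.atan(IPD_M / 2 / distanceM) * 180) / Math.PI;
}

// Vergence demand falls off rapidly with distance, which is why the conflict
// is felt most strongly with near-field content:
console.log(vergenceAngleDeg(0.5).toFixed(1)); // ~7.2 degrees at 0.5 m
console.log(vergenceAngleDeg(2.0).toFixed(1)); // ~1.8 degrees at 2 m
console.log(vergenceAngleDeg(20).toFixed(1));  // ~0.2 degrees at 20 m
```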

Angular Size for Layout & Elements

We are no longer designing just for screens, and a concept unique to VR/AR user interface design is angular size. Most interfaces are confined to the fixed distance and dimensions of a screen, but in VR/AR your UI can occupy any location or distance around the user, which can be incredibly problematic when it comes to both reading and interacting with that content. The construct of angular size boils down to this: objects appear smaller as they get farther away and larger as they come close. A key factor in understanding “appropriate dimensions” for VR/AR is that we, as designers, are no longer working with just height and width but also with distance from our users. Interfaces that are too close can cause eye strain, while interfaces that are too far away can easily become unreadable.

Angular size expresses the size of geometry at a given distance as degrees of the user’s field of view, and this degree system is a slightly more standardized way of reasoning about user interfaces over distance. Any content in the environment that we want users to gather information from should sit between 0.5 meters (~1.5 ft) and 20 meters (~65.5 ft) away. The eyes typically focus comfortably on content at around 2 meters (~6.5 ft), so try to keep near-field interactable content within that distance. However, the recommended angular size still depends on the pixel density of the display in the VR/AR device. The general recommendation for headsets with ~13 pixels per degree is text that takes up ~1.5 degrees (about 20 px tall on most such displays).
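
Angular size is simple trigonometry, which makes a layout easy to check. The helpers below are an illustrative sketch (not from any SDK) that converts between physical size, distance, and degrees, and maps the ~1.5 degree text recommendation onto an assumed ~13 pixel-per-degree display.

```typescript
// Angular size of an element of height `sizeM`, viewed from `distanceM` away.
function angularSizeDeg(sizeM: number, distanceM: number): number {
  return (2 * Math.atan(sizeM / 2 / distanceM) * 180) / Math.PI;
}

// Physical height needed to reach a target angular size at a given distance.
function sizeForAngle(targetDeg: number, distanceM: number): number {
  return 2 * distanceM * Math.tan(((targetDeg / 2) * Math.PI) / 180);
}

// Text at the ~1.5 degree recommendation, placed at the comfortable 2 m focus distance:
const textHeightM = sizeForAngle(1.5, 2.0); // ~0.052 m, i.e. roughly 5 cm tall
const textHeightPx = 1.5 * 13;              // ~20 px on a ~13 pixel-per-degree display
console.log(textHeightM.toFixed(3), textHeightPx);
```

Scaling an element proportionally with its distance keeps its angular size, and therefore its readability, constant.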

I want to specifically call out the work of Mike Alger and Eswar Anandapadmanaban, both of whom have really informed the frameworks I use here. Please go check out their work!


Link: https://uxplanet.org/ux-101-for-virtual-and-mixed-reality-part-1-physicality-3fed072f371
Link: https://www.youtube.com/watch?v=id86HeV-Vb8