Multidimensionality: Using Spatial Intelligence x Spatial Computing to Create New Worlds

Spatial intelligence and multimodal AI models are transforming the way we design user experiences in Augmented Reality (AR), Virtual Reality (VR), Mixed Reality (MR), and eXtended Reality (XR) / spatial computing environments. In the past, design paradigms were incomplete, data manipulation capabilities were limited, and embodied interaction was constrained. Recent developments have created smarter, more interactive UX for users. Many emerging technologies leverage AI to improve usability, though privacy concerns remain a challenge.

Further advancements in multimodal inputs like text, speech, and gestures are changing the way we visualize data and offering new, more direct ways to manipulate and interact with information. With the advent of AI foundation models, we are beginning to see new forms of telepresence, world generation, and collaboration.

As the cost of compute continues to drop, we can expect more startups and platforms to push the boundaries of how we create and interact with immersive experiences. These advances lower the barrier to entry for software engineers, creators, and end users, enabling more innovative and collaborative applications that redefine our understanding of computing.

This talk will explore how the multidimensionality of these technologies creates rich, layered experiences and applications: specifically, how we create XR with AI and how these technologies intersect. The integration of multiple sensory inputs (visual, auditory, tactile, etc.), data types (text, images, video, 3D models), and interactive dimensions (2D interfaces, 3D environments, gestures, voice) is shaping how we create the next generation of user experiences and applications in spatial computing / AR, VR, MR, and XR.
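To make the idea of combining modalities concrete, here is a minimal, hypothetical sketch in Python. The event types, field names, and fuse function are illustrative assumptions (not from the talk) of how speech, gesture, and gaze inputs might be fused into a single command on a 3D scene object.

# Minimal sketch (hypothetical names): fusing multimodal inputs --
# speech, gesture, and gaze -- into one scene command for a spatial app.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Modality(Enum):
    SPEECH = auto()
    GESTURE = auto()
    GAZE = auto()


@dataclass
class InputEvent:
    modality: Modality
    payload: str                     # e.g. transcribed text or gesture label
    target_id: Optional[str] = None  # scene object resolved by gaze/raycast


@dataclass
class SceneCommand:
    verb: str       # e.g. "move", "scale", "describe"
    target_id: str  # object the command applies to


def fuse(events: list[InputEvent]) -> Optional[SceneCommand]:
    """Pair a spoken intent with the most recent gaze/gesture target."""
    verb = next((e.payload for e in events
                 if e.modality is Modality.SPEECH), None)
    target = next((e.target_id for e in reversed(events)
                   if e.modality in (Modality.GESTURE, Modality.GAZE)
                   and e.target_id), None)
    if verb and target:
        return SceneCommand(verb=verb, target_id=target)
    return None  # incomplete input: wait for more events


if __name__ == "__main__":
    events = [
        InputEvent(Modality.GAZE, "dwell", target_id="chair_01"),
        InputEvent(Modality.SPEECH, "scale"),
    ]
    print(fuse(events))  # SceneCommand(verb='scale', target_id='chair_01')

In a real system the speech intent would come from a speech-to-text or foundation model and the target from the XR runtime's gaze or hand-tracking APIs; the sketch only shows the fusion step.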

Interview:

What is the focus of your work?

Spatial Computing/Augmented Reality (AR), Virtual Reality (VR), Mixed Reality (MR), eXtended Reality (XR), and Artificial Intelligence (AI)

What’s the motivation for your talk?

The combination of spatial intelligence and generative AI in recent years is changing the game for the future of spatial computing, and we need to understand why that matters for the future of human-computer interaction as a society.

Who is your talk for?

Technical professionals (software engineers, creators, etc.) interested in the intersection of these two disciplines.

What do you want someone to walk away with from your presentation?

That AR/VR/MR/XR is not dead, and that AI is crucial to consider given its exponential growth and its enablement of AR, from its foundations in computer vision to its recent evolution with the boom in generative AI.

What do you think is the next big disruption in software?

I believe the future of programming and back-end software engineering is automation in combination with humans, and this is where AI plays its most crucial role. The other disruption is spatial computing, or 3D (three-dimensional) data, moving beyond the flat UI we have been stuck with for ages due to limitations in hardware, optics, and so on. The combination of the two will enable us to be more creative, connected, and productive.


Speaker

Erin Pañgilinan

Spatial Computing x AI Leader, Author of "Creating Augmented and Virtual Realities: Theory and Practice for Next-Generation Spatial Computing", fast.ai Diversity Fellow, Deep Learning Program

Erin Jerri Malonzo Pañgilinan is an internationally acclaimed author of the O’Reilly Media book Creating Augmented and Virtual Realities: Theory and Practice for Next-Generation Spatial Computing, ranked Book Authority’s #2 must-read book on Virtual Reality in 2019, which has been translated into Chinese and Korean and distributed in over two dozen countries.

She was also previously a fellow in the University of San Francisco (USF) Data Institute’s Deep Learning Program (2017-2018) and Data Ethics Inaugural Class (2020) through fast.ai.

She is currently working on her next books, applications, and films.

Erin earned her BA from the University of California, Berkeley.

From the same track

Session Augmented Reality

Making Augmented Reality Accessible: A Case Study of Lens in Maps

Wednesday Nov 20 / 10:35AM PST

Augmented reality (AR) has the potential to revolutionize how we interact with the world, but its visual-centric nature often excludes users with visual impairments.

Ohan Oda

Senior Software Engineer @Google, Expert in AR with Maps Starting from MARS @ColumbiaUniversity (2005), then CityLens @Nokia (2012), and Currently Live View @Google

Session XR

Accessible Innovation in XR: Maximizing the Curb Cut Effect

Wednesday Nov 20 / 11:45AM PST

Accessibility is often seen as the last step in many software projects - a checklist to be crossed off to satisfy regulations. But in reality, accessible design thinking can lead to a fountain of features that benefit disabled and abled users alike.

Dylan Fox

Director of Operations @XR Access, Previously UC Berkeley Researcher & UX Designer, Expert on Accessibility for Emerging Technologies

Session

Building Inclusive Mini Golf: A Practical Guide to Accessible XR Development

Wednesday Nov 20 / 01:35PM PST

Creating accessible tools and experiences in VR is an ongoing challenge, especially in visually intensive environments like gaming.

Colby Morgan

Technical Director @Mighty Coconut, Walkabout Mini Golf, XR Accessibility Advocate

Session

Panel: Next Generation Inclusive UIs

Wednesday Nov 20 / 03:55PM PST

Augmented, Virtual, Extended, and Mixed Reality unlock the ability to integrate the power of computers more seamlessly into our physical three-dimensional world. However, designing the user experience of these next-generation UIs to be as inclusive as possible comes with a lot of challenges.

Erin Pañgilinan

Spatial Computing x AI Leader, Author of "Creating Augmented and Virtual Realities: Theory and Practice for Next-Generation Spatial Computing", fast.ai Diversity Fellow, Deep Learning Program

Colby Morgan

Technical Director @Mighty Coconut, Walkabout Mini Golf, XR Accessibility Advocate

Dylan Fox

Director of Operations @XR Access, Previously UC Berkeley Researcher & UX Designer, Expert on Accessibility for Emerging Technologies