Spatial intelligence and multimodal AI models are transforming the way we design user experiences in Augmented Reality (AR), Virtual Reality (VR), Mixed Reality (MR), and eXtended Reality (XR) / spatial computing environments. In the past, design paradigms were incomplete, data manipulation capabilities were limited, and embodied experiences were constrained. Recent developments have enabled smarter, more interactive UX. Many emerging technologies leverage AI to improve usability, though privacy concerns remain a challenge.
Further advancements in multimodal inputs — text, speech, and gestures — are changing the way we visualize data and offering new, more direct ways to manipulate and interact with information. With the advent of AI foundation models, we are beginning to see new forms of telepresence, world generation, and collaboration.
As the cost of compute continues to drop, we can expect more startups and platforms to push the boundaries of how we create and interact with immersive experiences. These advances lower the barrier to entry for software engineers, creators, and end users, enabling more innovative and collaborative applications that redefine our understanding of computing.
This talk will explore how the multidimensionality of these technologies creates rich, layered experiences and applications. The integration of multiple sensory inputs (visual, auditory, tactile, etc.) and data types (text, images, video, 3D models) across interactive dimensions (2D interfaces, 3D environments, gestures, voice) is shaping how we build the next generation of user experiences and applications in spatial computing / AR VR MR XR.
Interview:
What is the focus of your work?
Spatial Computing/Augmented Reality (AR), Virtual Reality (VR), Mixed Reality (MR), eXtended Reality (XR), and Artificial Intelligence (AI)
What’s the motivation for your talk?
The combination of spatial intelligence and generative AI in recent years is changing the game for the future of spatial computing, and we need to understand why that matters for the future of human-computer interaction as a society.
Who is your talk for?
Technical professionals (software engineers, creators, etc.) interested in the intersection of these two disciplines.
What do you want someone to walk away with from your presentation?
That AR VR MR XR is not dead, and that AI is crucial to consider given its exponential growth and its enablement of AR — from its foundations in computer vision to its evolution in recent years, booming with generative AI.
What do you think is the next big disruption in software?
I believe the future of programming and back-end software engineering is automation in combination with humans, and to me AI is playing its most crucial role here. The other disruption is spatial computing, or 3D (three-dimensional) data — not only the flat UI we have been stuck in for ages due to limitations in hardware, optics, etc. The combination of the two will enable us to be more creative, connected, and productive.
Speaker
Erin Pañgilinan
Spatial Computing x AI Leader, Author of "Creating Augmented and Virtual Realities: Theory and Practice for Next-Generation Spatial Computing", fast.ai Diversity Fellow, Deep Learning Program
Erin Jerri Malonzo Pañgilinan is an internationally acclaimed author of the O’Reilly Media book Creating Augmented and Virtual Realities: Theory and Practice for Next-Generation Spatial Computing, ranked Book Authority’s #2 must-read book on Virtual Reality in 2019, which has been translated into Chinese and Korean and distributed in over two dozen countries.
She was also previously a fellow in the University of San Francisco (USF) Data Institute’s Deep Learning Program (2017-2018) and Data Ethics Inaugural Class (2020) through fast.ai.
She is currently working on her next books, applications, and films.
Erin earned her BA from the University of California, Berkeley.