Making Augmented Reality Accessible: A Case Study of Lens in Maps

Augmented reality (AR) has the potential to revolutionize how we interact with the world, but its visual-centric nature often excludes users with visual impairments. This presentation will explore how Google Maps' Lens in Maps feature was adapted to provide a meaningful AR experience for visually impaired users. By leveraging audio cues, haptic feedback, and intuitive screen reader interactions, we demonstrate how AR can be made accessible without compromising its core functionality. Join us as we discuss the design decisions, challenges, and successes of this project, and explore the potential for broader applications of accessible AR in various domains.
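
As a rough illustration of the kind of multimodal feedback the talk covers, here is a minimal sketch of how an Android app might combine haptic pulses and screen reader announcements to guide a low-vision user toward a point of interest. The class name, thresholds, and guidance logic are illustrative assumptions, not the actual Lens in Maps implementation.

```kotlin
import android.content.Context
import android.os.Build
import android.os.VibrationEffect
import android.os.Vibrator
import android.view.View
import kotlin.math.abs

// Hypothetical sketch: map the angular offset between the device heading
// and a POI's bearing to haptic intensity, and announce the POI through
// the screen reader once the camera is roughly centered on it.
// Requires the VIBRATE permission. Not the Lens in Maps implementation.
class PoiGuidance(context: Context, private val rootView: View) {

    private val vibrator =
        context.getSystemService(Context.VIBRATOR_SERVICE) as Vibrator
    private var lastAnnounced: String? = null

    /**
     * @param headingDeg    current device compass heading, in degrees
     * @param poiBearingDeg bearing from the user to the POI, in degrees
     * @param poiName       label to announce via the screen reader
     */
    fun onFrame(headingDeg: Float, poiBearingDeg: Float, poiName: String) {
        // Signed angular offset, normalized to [-180, 180).
        val offset = ((poiBearingDeg - headingDeg + 540f) % 360f) - 180f

        when {
            abs(offset) < 10f -> {
                // Centered: announce once per POI so TalkBack is not flooded.
                if (lastAnnounced != poiName) {
                    rootView.announceForAccessibility("$poiName ahead")
                    lastAnnounced = poiName
                }
            }
            abs(offset) < 60f -> {
                // Nearby: short pulse that grows stronger as alignment
                // improves. A real app would also rate-limit these pulses.
                val amplitude = (255 * (1f - abs(offset) / 60f))
                    .toInt().coerceIn(1, 255)
                if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
                    vibrator.vibrate(
                        VibrationEffect.createOneShot(30L, amplitude)
                    )
                }
                lastAnnounced = null
            }
            else -> lastAnnounced = null // Far off: stay silent.
        }
    }
}
```

Mapping the offset to pulse amplitude gives a continuous, eyes-free signal of direction, while the one-time announcement keeps the screen reader's speech queue from being overwhelmed.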

What is the focus of your work these days?

I am focused on pioneering next-generation interactions for the Lens in Maps feature, aiming to revolutionize the Google Maps experience.

What’s the motivation for your talk?

I hope to advocate for the inclusion of low-vision accessibility in augmented reality applications, where applicable and feasible. 

How would you describe the persona and level of the target audience?

Our ideal audience includes individuals with a foundational understanding of augmented reality principles and a keen interest in exploring its applications, regardless of their current role or experience level.

What do you want this persona to walk away with from your presentation?

I hope to inspire attendees to envision a future where augmented reality technology serves as a powerful tool for empowerment, enabling individuals with low vision to fully participate in and benefit from the digital world.

What do you think is the next big disruption in software?

Neural-interface technology, enabling direct brain-to-text conversion for head-worn devices, will become increasingly indispensable as natural language interfaces proliferate. This will alleviate the social and practical constraints of traditional input methods in public settings.


Speaker

Ohan Oda

Senior Software Engineer @Google, Expert in AR with Maps, from MARS @ColumbiaUniversity (2005) to CityLens @Nokia (2012) to Live View @Google

I was born in China and grew up in Japan. I came to the US after high school and graduated from the University of Wisconsin–Madison with a double major in Computer Engineering and Computer Science. I earned my Ph.D. in Computer Science, with a focus on augmented reality, from Columbia University under Prof. Steven Feiner's supervision. I currently work at Google as a software engineer on the Live View features in Google Maps.


From the same track

Session

Improving Accessibility Through Leveraging Large Language Models (LLMs)

Wednesday Nov 20 / 01:35PM PST

Leveraging Large Language Models (LLMs) to automate accessibility tasks represents a transformative advancement in digital inclusion efforts.

Session

Accessible Innovation in XR: Maximizing the Curb Cut Effect

Wednesday Nov 20 / 11:45AM PST

Accessibility is often seen as the last step in many software projects: a checklist to be crossed off to satisfy regulations. But in reality, accessible design thinking can lead to a fountain of features that benefit disabled and non-disabled users alike.


Dylan Fox

Director of Operations @XR Access, Previously UC Berkeley Researcher & UX Designer, Expert on Accessibility for Emerging Technologies

Session

Inclusive UI Principles in Practice

Wednesday Nov 20 / 03:55PM PST

Details coming soon.

Session

Building Inclusive Mini Golf: A Practical Guide to Accessible XR Development

Wednesday Nov 20 / 02:45PM PST

Creating accessible tools and experiences in VR is an ongoing challenge, especially in visually intensive environments like gaming.


Colby Morgan

Technical Director @Mighty Coconut (Walkabout Mini Golf), XR Accessibility Advocate