Next Generation Inclusive UIs

As our world moves increasingly online, making the web an accessible and inclusive experience for all is paramount. However, the web is no longer the only interface people interact with. With the rise of VR, AR, voice-controlled UIs, conversational agents, natural UIs, and more, we must make all of these interfaces inclusive and leverage them to enhance the experience for everyone.


From this track

Session

Making Augmented Reality Applications Accessible

Details coming soon.

Ohan Oda

Senior Software Engineer @Google, Expert in AR with Maps, Starting with MARS @ColumbiaUniversity (2005), then CityLens @Nokia (2012), and Currently Live View @Google

Session

Improving Accessibility Through Leveraging Large Language Models (LLMs)

Leveraging Large Language Models (LLMs) to automate accessibility tasks represents a transformative advancement in digital inclusion efforts.

Sheri Byrne-Haber

Advisor @IAAP, Co-Chair Maturity Model Task Force @W3C, Previously Senior Staff Architect, Accessibility @VMWare

Track Host

Erin Doyle

Staff Platform Engineer @Lob, Previously 20+ Years as a Full Stack Engineer, and Instructor @Egghead

Erin Doyle is a Staff Platform Engineer @Lob. For the 20+ years prior, she worked as a Full Stack Engineer with a focus on the front end. She is also an instructor for Egghead.io and has given talks and workshops focused on building the best and most accessible experiences for users and developers.
