Improving Accessibility by Leveraging Large Language Models (LLMs)

Leveraging Large Language Models (LLMs) to automate accessibility tasks represents a transformative advancement in digital inclusion efforts. These models can take on much of today's manual accessibility work: generating image descriptions, providing real-time closed captions, and translating content into multiple languages. LLMs can also convert complicated documents and interfaces into plain language and improve consistency across products. All of this reduces the burden on content creators, designers, developers, and accessibility and quality-assurance teams. As LLM use evolves, it holds the promise of significantly enhancing accessibility and making digital content more inclusive for individuals with diverse assistive technology needs.
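As a concrete illustration of one of these tasks, the sketch below shows how generating image descriptions (alt text) might be automated around an LLM. The function names (`build_alt_text_prompt`, `generate_alt_text`) and the `complete` callable are assumptions for illustration, not a specific vendor SDK; in practice `complete` would wrap whatever hosted model API is in use.

```python
from typing import Callable

def build_alt_text_prompt(context: str, ocr_text: str = "") -> str:
    """Assemble a prompt asking the model for concise, screen-reader-friendly alt text."""
    prompt = (
        "Write a concise alt-text description (under 125 characters) "
        "for an image, suitable for screen readers.\n"
        f"Page context: {context}\n"
    )
    if ocr_text:
        # Text already visible in the image helps the model avoid redundancy.
        prompt += f"Text visible in the image: {ocr_text}\n"
    return prompt

def generate_alt_text(complete: Callable[[str], str],
                      context: str, ocr_text: str = "") -> str:
    """Send the prompt to the supplied LLM and trim the reply to one line."""
    reply = complete(build_alt_text_prompt(context, ocr_text))
    return reply.strip().splitlines()[0]
```

Because the model is injected as a plain callable, the same pipeline can be unit-tested with a stub and later wired to a production model; human review of the generated descriptions remains advisable.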

From the same track


Make Augmented Reality Application Accessible



Ohan Oda

Senior Software Engineer @Google; expert in AR with maps, starting with MARS @ColumbiaUniversity (2005), then CityLens @Nokia (2012), and currently Live View @Google