Abstract
Delivering AI-powered features in mobile apps is not just about calling an LLM API. It is about crafting fast, reliable, and engaging user experiences. In this talk, I’ll share practical lessons from designing and scaling LLM-driven mobile experiences, where frontend architecture and UX design played a pivotal role.
We’ll explore how to architect for speed and interactivity, when to use on-device LLMs versus backend inference, and how to design interfaces that gracefully handle latency. We’ll also examine the trade-offs that shape responsive, cost-effective, and scalable AI features.
Whether you’re experimenting with LLMs in your product or already running them in production, you’ll leave with insights to make your AI-powered experiences feel fast, native, and user-first.
Speaker
Balakrishnan (Bala) Ramdoss
Senior Android Engineer @Amazon - Building Camera-Based AI Features, Specializing in Scalable Solutions for Complex Challenges
Bala Ramdoss is a Senior Android Engineer at Amazon, where he builds camera-based AI features like Amazon Lens to enhance the visual shopping experience. With over 10 years of Android development experience, Bala specializes in scalable solutions for complex challenges, including AR-powered experiences and high-performance Android UI. When not building apps, he enjoys exploring new tech and having engaging conversations over coffee.