Abstract
As AI applications demand faster and more intelligent data access, traditional caching strategies are hitting performance and reliability limits.
This talk presents architecture patterns that shift caching latency from milliseconds to microseconds using Valkey Cluster, an open-source, Redis-compatible, in-memory datastore. Learn when to use proxy-based versus direct-access caching, how to avoid hidden reliability pitfalls in sharded systems, and how to optimize for price-performance at scale. Backed by the Linux Foundation, Valkey offers rich data structures and community-driven innovation.
Whether you’re building GenAI services or scaling existing platforms, this session delivers actionable patterns to improve speed, resilience, and efficiency.
Speaker

Dumanshu Goyal
Uber Technical Lead @Airbnb, Powering $11B in Transactions; Formerly @Google and @AWS
Dumanshu Goyal leads Online Data Priorities at Airbnb, powering its $11B transaction platform and building the next generation of the company’s data systems. Previously, he led in-memory caching for Google Cloud Databases, delivering 10x improvements in scale and price-performance for Google Cloud Memorystore. Before that, he spent 10 years at AWS, where he was the founding engineer of AWS Timestream, a serverless time-series database, and architected durability and availability features for AWS DynamoDB, one of the internet’s foundational services.
An expert in building and operationalizing large-scale distributed systems, Dumanshu holds 20 patents and brings deep experience in architecting the resilient infrastructure that underpins today’s digital world.