I remember, a decade ago, running and managing popular websites and internet communities from a single machine. Over time, we've had to build complex infrastructure and platforms to support the rich experiences and high-scale demands of online services and software. Much of the work of running large-scale systems is now concentrated in a few key providers in the market.
In a standard web request today, it's highly likely that hundreds or even thousands of computers get involved, crunching data, generating events and running complex algorithms, yet still manage to serve you that request in hundreds of milliseconds.
Let's go on a journey through the platforms and software patterns that made both mainframes and microservices (and everything in between) so popular, and explore how foundations in programming languages, software architecture, virtual machines, containers and even stateful systems have influenced how we build and run software at scale today. We'll talk about the massive reduction in the cost and complexity of getting large-scale software running on the web, and why that trend might not continue forever, especially in the era of specialized offerings like custom datastores you can't host yourself, edge computing and even purpose-built silicon chips.
Staff Engineer @Monzo Focused on Designing and Operating Distributed Systems, Previously @Citymapper
Suhail is a Staff Engineer at Monzo focused on building the Core Platform. His role involves building and maintaining Monzo's infrastructure, which spans nearly two thousand microservices and leverages key infrastructure components such as Kubernetes, Cassandra, etcd and more. He focuses specifically on investigating deviant behaviour and ensuring services continue to work reliably in the face of a constantly shifting cloud environment.