
Presentation: Yes, I Test In Production (And So Do You)

Track: Production Readiness: Building Resilient Systems

Location: Ballroom A

Duration: 4:10pm - 5:00pm

Day of week: Wednesday

Slides: Download Slides

Level: Intermediate

Persona: Architect, CTO/CIO/Leadership, Developer, General Software, Technical Engineering Manager

This presentation is now available to view on InfoQ.com

Watch video with transcript

What You’ll Learn

  • Learn strategies for approaching observability for production services.

  • Hear about the importance of context when it comes to service owners.

  • Understand approaches to improve the reliability of your systems.

Abstract

Testing in production has gotten a bad rap.  People seem to assume that you can only test before production *or* in production.  But while you can rule some things out before shipping, those are typically the easy things, the known unknowns.  For any real unknown-unknown, you're gonna be Testing in Production.
 
And this is a good thing!  Staging areas are a black hole for engineering cycles.  Most interesting problems will only manifest under real workloads, on real data, with real users and real concurrency.  So you should spend far fewer of your scarce engineering cycles poring over staging, and far more of them building guard rails for prod -- ways to ship code safely, absorb failures without impacting users, test various kinds of failures in prod, detect them, and roll back.
 
You can never trust a green diff until it's baked in prod.  So let's talk about the tools and principles of canarying software and gaining confidence in a build.  What types of problems are worth setting up a capture/replay apparatus to boost your confidence?  How many versions should you be able to run simultaneously in flight?  And finally we'll talk about the missing link that makes all these things possible: instrumentation and observability for complex systems.
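As a concrete illustration of the canarying idea (a sketch of my own, not a prescribed implementation), one common building block is deterministic percentage-based routing: hash the request ID, send a small slice of traffic to the new build, ramp the percentage as confidence grows, and set it back to zero to roll back. The names below (buildForRequest, canaryPercent) are hypothetical.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// buildForRequest deterministically picks "canary" or "stable" for a request
// ID, so the same request/user keeps landing on the same build while it bakes.
func buildForRequest(requestID string, canaryPercent uint32) string {
	h := fnv.New32a()
	h.Write([]byte(requestID))
	if h.Sum32()%100 < canaryPercent {
		return "canary"
	}
	return "stable"
}

func main() {
	// Start at 1%, ramp up as confidence grows, set back to 0 to roll back.
	for _, id := range []string{"req-001", "req-002", "req-003"} {
		fmt.Println(id, "->", buildForRequest(id, 1))
	}
}
```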
Question: 

 

What's the motivation for this talk?

Answer: 

The motivation for this talk is to help people understand that deploying software carries an irreducible element of uncertainty and risk.  Trying too hard to prevent failures will actually make your systems and your teams *more* vulnerable to failure and prolonged downtime. So what can you do about it?

First, you can shift the way you think about shipping code.  Don’t think of a deploy as a binary switch, where you can stop thinking about it as soon as it has successfully started up in production.  Think of it more like turning on the heat in your oven.  Deploys simply kick off the process of gaining confidence in your code over time.  You should never trust new code until it’s been baked in production, with real users, real services, real data, real networking.  Your confidence should rise in proportion to the range of conditions and user behaviors that your code has handled in the wild over time.

Second, you can empower your software engineers and turn them into software owners.  Ownership over the full software life cycle means the same person can write their code, deploy it, and debug it live in prod.  Why does this matter?  It’s extremely inefficient to have different sets of people running, deploying, and debugging, because the person with the most context from developing the code is the best suited to spot problems or unintended consequences quickly and fix them before users are affected.  Software ownership leads to better services, happier users, and better lives for practitioners.

A key part of this ownership is the process of shepherding your code into that fully baked state. I call the talk “testing in production” (somewhat tongue in cheek) to embrace the side of it that means leaning into resilience rather than fixating on preventing failures.  Every single time you run a deploy, you are kicking off a test -- of that unique combination of point-in-time infra, deploy tools, environment and artifact.  So I want to talk about some of the techniques and practices that are worth spending some of your precious engineering cycles on, instead of the black sucking hole for engineering effort that is every staging environment I’ve ever seen.

Question: 

Martin Fowler famously had a blog post about how “you must be this tall for microservices,” that is, the level of organizational maturity you need before you’re really ready for microservices.  Is there a “you must be this tall” equivalent for being able to test in production?

Answer: 

Absolutely.  Starting with observability. Until you can drill down and explain any anomalous raw event, you should not attempt any advanced maneuvers - you’ll just be irresponsibly screwing with production and starting fires right and left that you don’t even know about. And I mean observability in the technical sense -- traditional monitoring is not good enough.  You need instrumentation, an event-oriented perspective, ideally some tracing, etc. You need the tooling that lets you hunt down outliers and aggregate at read time by high-cardinality dimensions like request ID. No aggregation at write time. Etc.
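To make the event-oriented, aggregate-at-read-time point concrete, here is a minimal sketch (my own illustration, not Honeycomb’s API): emit one wide, structured event per request, keep high-cardinality fields like request ID and build ID intact, and leave all slicing and aggregation to query time. The field names are assumptions for illustration.

```go
package main

import (
	"encoding/json"
	"os"
	"time"
)

// RequestEvent is one wide event per request, with high-cardinality fields preserved.
type RequestEvent struct {
	Timestamp  time.Time `json:"timestamp"`
	RequestID  string    `json:"request_id"` // high-cardinality: unique per request
	UserID     string    `json:"user_id"`    // high-cardinality: unique per user
	BuildID    string    `json:"build_id"`   // lets you compare old vs. new code in prod
	Endpoint   string    `json:"endpoint"`
	StatusCode int       `json:"status_code"`
	DurationMS float64   `json:"duration_ms"`
}

// emit writes the raw event; no pre-aggregation, no dropping of dimensions.
func emit(ev RequestEvent) {
	_ = json.NewEncoder(os.Stdout).Encode(ev)
}

func main() {
	emit(RequestEvent{
		Timestamp:  time.Now(),
		RequestID:  "req-8f3a",
		UserID:     "user-42",
		BuildID:    "build-118",
		Endpoint:   "/api/checkout",
		StatusCode: 500,
		DurationMS: 183.2,
	})
}
```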

You also need a team of software engineers that cares deeply about production quality and what happens when their code meets reality, that doesn’t consider their job finished and walk away any time they merge a branch, and that knows how to instrument their code for understandability and debuggability.  If you’re lucky, you have some SREs or other operational experts to help everyone level up at best practices.

Question: 

A lot of times when I talk to people about CI/CD and things like getting code out fast in production, they push back or they have hesitancy. That's probably an indication that something else, like observability, is the problem, right?

Answer: 

If you’re terrified about shipping bad code to prod, well, you need to work on that first.  Because shipping new code shouldn’t be a terrifying experience. It should be pedestrian. But deploy software (and deploy processes) is systematically under-invested in everywhere.

Instead of building the guard rails and processes that build confidence and help to find problems early, people just write code, deploy it, and wait to get paged.  That’s INSANE. First of all, most anomalies don’t generate an alert, and they shouldn’t -- nobody would ever get any sleep because our systems exist in an eternal state of imperfection and partial degradation.  And that’s ok! That’s what SLOs are for!
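To make the SLO point concrete, here is a back-of-the-envelope sketch: a 99.9% success target leaves an explicit error budget, so individual anomalies can burn budget without paging anyone. All numbers below are made up for illustration.

```go
package main

import "fmt"

func main() {
	const slo = 0.999                  // 99.9% of requests should succeed
	const totalRequests = 10_000_000.0 // requests this month (illustrative)
	const failedRequests = 4_200.0     // observed failures so far (illustrative)

	errorBudget := (1 - slo) * totalRequests // failures you can "afford": 10,000
	remaining := errorBudget - failedRequests

	fmt.Printf("error budget: %.0f failures, used: %.0f, remaining: %.0f\n",
		errorBudget, failedRequests, remaining)
	if remaining > 0 {
		fmt.Println("within budget: no need to page anyone for this alone")
	} else {
		fmt.Println("budget exhausted: slow down and invest in reliability")
	}
}
```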

Once we know about a possible failure, we can monitor for that failure and write tests for it.  That means most of the failures we do ship to prod are going to be unknown-unknowns.  We can’t predict them, and we shouldn’t try.  But we can often spot them.

That’s why the best, most efficient, most practical, least damaging way to identify breaking changes is for every software developer to merge their code, deploy it, and immediately go look at it, comparing it to the last version.  You just wrote it, so it's fresh in your mind and you have all the context.  Did you ship what you think you just shipped?  Is it working the way you expected it to work?  Do you see anything else that looks unexpected or suspicious?  This simple checklist will catch a high percentage of the logic problems and other subtle bugs you ship.
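One way to make that post-deploy checklist mechanical is to compare the new build against the previous one on a couple of signals right away, for example error rate and tail latency per build ID. The sketch below is an illustration under assumed names and thresholds, not a specific tool.

```go
package main

import "fmt"

// buildStats summarizes what one build has done in prod so far.
type buildStats struct {
	BuildID   string
	Requests  int
	Errors    int
	P99Millis float64
}

func errorRate(s buildStats) float64 {
	if s.Requests == 0 {
		return 0
	}
	return float64(s.Errors) / float64(s.Requests)
}

// looksHealthy flags the new build if errors or tail latency regress noticeably.
func looksHealthy(prev, cur buildStats) bool {
	return errorRate(cur) <= errorRate(prev)*1.5 && cur.P99Millis <= prev.P99Millis*1.5
}

func main() {
	prev := buildStats{BuildID: "build-117", Requests: 5000, Errors: 12, P99Millis: 220}
	cur := buildStats{BuildID: "build-118", Requests: 480, Errors: 9, P99Millis: 390}

	if looksHealthy(prev, cur) {
		fmt.Println("new build looks like the old one so far; keep baking")
	} else {
		fmt.Println("new build regressed; investigate now while the context is fresh")
	}
}
```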

Question: 

What do you want someone who comes to this talk to be able to walk away with?

Answer: 

There's a lot of fear around production. There's a lot of scariness, a lot of association with the masochism and lack of sleep that operations has historically been known for (that’s on us: sorry!). But it doesn't have to be so scary.  You should want to do this.

Owning your own code in production should be something that you want to do, because it leads to better code, higher quality of services, happier users, stronger teams … even better quality of life for the engineers who own it.  There is plenty of science showing that autonomy and empowerment make people love their work. When ownership goes up, bugs go down and frustration goes down, and free time for product development goes up too. It’s a virtuous feedback loop: everyone wins.  So I want to show you the steps to get there.

Speaker: Charity Majors

Co-Founder @Honeycombio, formerly DevOps @ParseIT/@Facebook

Charity is a cofounder and engineer at Honeycomb.io, a startup that blends the speed of time series with the raw power of rich events to give you interactive, iterative debugging of complex systems. She has worked at companies like Facebook, Parse, and Linden Lab, as a systems engineer and engineering manager, but always seems to end up responsible for the databases too. She loves free speech, free software and a nice peaty single malt.

