You are viewing content from a past/completed QCon

Presentation: Yes, I Test In Production (And So Do You)

Track: Production Readiness: Building Resilient Systems

Location: Ballroom A

Duration: 4:10pm - 5:00pm

Day of week: Wednesday

Level: Intermediate

Persona: Architect, CTO/CIO/Leadership, Developer, General Software, Technical Engineering Manager


This presentation is now available to view on InfoQ.com

Watch video with transcript

What You’ll Learn

  • Learn strategies for thinking about observability for production services.

  • Hear why context matters for service owners.

  • Understand approaches to improve the reliability of your systems.

Abstract

Testing in production has gotten a bad rap.  People seem to assume that you can only test before production *or* in production.  But while you can rule some things out before shipping, those are typically the easy things, the known unknowns.  For any real unknown-unknown, you're gonna be Testing in Production.
And this is a good thing!  Staging areas are a black hole for engineering cycles.  Most interesting problems will only manifest under real workloads, on real data, with real users and real concurrency.  So you should spend far fewer of your scarce engineering cycles poring over staging, and far more of them building guard rails for prod -- ways to ship code safely, absorb failures without impacting users, test various kinds of failures in prod, detect them, and roll back.
You can never trust a green diff until it's baked in prod.  So let's talk about the tools and principles of canarying software and gaining confidence in a build.  What types of problems are worth setting up a capture/replay apparatus to boost your confidence?  How many versions should you be able to run simultaneously in flight?  And finally we'll talk about the missing link that makes all these things possible: instrumentation and observability for complex systems.
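
To make the canarying idea a little more concrete, here is a minimal sketch (not from the talk itself) of routing a small slice of traffic to a new build and watching its error rate before trusting it. The function names, the 5% split, and the 2% error threshold are all illustrative assumptions.

    # Illustrative sketch only: send a small slice of traffic to the new build,
    # track its error rate, and refuse to promote it if it looks worse.
    # route_request, canary_is_healthy, and the thresholds are hypothetical.
    import random
    from collections import defaultdict

    CANARY_FRACTION = 0.05          # 5% of traffic goes to the new build
    MAX_CANARY_ERROR_RATE = 0.02    # don't promote if canary errors exceed 2%

    stats = defaultdict(lambda: {"requests": 0, "errors": 0})

    def route_request(handle_stable, handle_canary, request):
        """Pick a build for this request and record whether it succeeded."""
        build = "canary" if random.random() < CANARY_FRACTION else "stable"
        handler = handle_canary if build == "canary" else handle_stable
        stats[build]["requests"] += 1
        try:
            return handler(request)
        except Exception:
            stats[build]["errors"] += 1
            raise

    def canary_is_healthy(min_requests=100):
        """Only judge the canary once it has seen enough real traffic."""
        seen = stats["canary"]
        if seen["requests"] < min_requests:
            return True  # not enough signal yet; keep baking
        return seen["errors"] / seen["requests"] <= MAX_CANARY_ERROR_RATE
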
Question: 

What's the motivation for this talk?

Answer: 

The motivation for this talk is to help people understand that deploying software carries an irreducible element of uncertainty and risk.  Trying too hard to prevent failures will actually make your systems and your teams *more* vulnerable to failure and prolonged downtime. So what can you do about it?

First, you can shift the way you think about shipping code.  Don’t think of a deploy as a binary switch, where you can stop thinking about it as soon as it’s successfully started up in production.  Think of it more like turning on the heat in your oven. Deploys simply kick off the process of gaining confidence in your code over time. You should never trust new code until it’s been baked in production, with real users, real services, real data, real networking.  Your confidence should rise in proportion to the range of conditions and user behaviors that your code has handled in the wild over time.

Second, you can empower your software engineers and turn them into software owners.  Ownership over the full software life cycle means the same person can write code, deploy their code, and debug the code live in prod.  Why does this matter? It’s extremely inefficient to have different sets of people running, deploying and debugging, because the person with the most context from developing it is the best suited to spot problems or unintended consequences quickly and fix them before users are affected.  Software ownership leads to better services, happier users, and better lives for practitioners.

A key part of this ownership is the process of shepherding your code into that fully baked state. I call the talk “testing in production” (somewhat tongue in cheek) to embrace the side of it that means leaning into resilience rather than fixating on preventing failures.  Every single time you run a deploy, you are kicking off a test -- of that unique combination of point-in-time infra, deploy tools, environment and artifact.  So I want to talk about some of the techniques and practices that are worth spending some of your precious engineering cycles on, instead of the black sucking hole for engineering effort that is every staging environment I’ve ever seen.

Question: 

Martin Fowler famously had a blog post about how “you must be this tall for microservices” or organizational maturity before you’re really ready for microservices.  Is there a “you must be this tall” equivalent for being able to test in production?

Answer: 

Absolutely.  Starting with observability. Until you can drill down and explain any anomalous raw event, you should not attempt any advanced maneuvers - you’ll just be irresponsibly screwing with production and starting fires right and left that you don’t even know about. And I mean observability in the technical sense -- traditional monitoring is not good enough.  You need instrumentation, an event-oriented perspective, ideally some tracing, etc. You need the tooling that lets you hunt down outliers and aggregate at read time by high-cardinality dimensions like request ID. No aggregation at write time. Etc.
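
As a rough illustration of what that instrumentation can look like (not a specific vendor API), the sketch below emits one wide, structured event per request, carrying high-cardinality fields like request ID and user ID so all the slicing and aggregation can happen at read time. The emit() target and field names are assumptions for the example.

    # Illustrative sketch: one wide structured event per request, no
    # pre-aggregation at write time. emit() and the field names are hypothetical.
    import json
    import sys
    import time
    import uuid

    def emit(event):
        # In practice this goes to your event store / observability backend;
        # here it is just structured JSON on stdout.
        sys.stdout.write(json.dumps(event) + "\n")

    def handle_request(user_id, endpoint, build_id, do_work):
        event = {
            "request_id": str(uuid.uuid4()),  # high-cardinality: unique per request
            "user_id": user_id,               # high-cardinality: one per user
            "endpoint": endpoint,
            "build_id": build_id,             # lets you compare builds after a deploy
            "timestamp": time.time(),
        }
        start = time.monotonic()
        try:
            result = do_work()
            event["status"] = "ok"
            return result
        except Exception as exc:
            event["status"] = "error"
            event["error"] = type(exc).__name__
            raise
        finally:
            event["duration_ms"] = (time.monotonic() - start) * 1000
            emit(event)  # ship the raw event; aggregate later, at read time
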

You also need a team of software engineers who care deeply about production quality and what happens when their code meets reality, who don’t consider their job finished the moment they merge a branch, and who know how to instrument their code for understandability and debuggability.  If you’re lucky, you have some SREs or other operational experts to help everyone level up at best practices.

Question: 

A lot of times when I talk to people about CI/CD and things like getting code out fast in production, they push back or they have hesitancy. That's probably an indication that something else, like observability, is the problem, right?

Answer: 

If you’re terrified about shipping bad code to prod, well, you need to work on that first.  Because shipping new code shouldn’t be a terrifying experience. It should be pedestrian. But deploy tooling and deploy processes are systematically under-invested in everywhere.

Instead of building the guard rails and processes that build confidence and help to find problems early, people just write code, deploy it, and wait to get paged.  That’s INSANE. First of all, most anomalies don’t generate an alert, and they shouldn’t -- nobody would ever get any sleep because our systems exist in an eternal state of imperfection and partial degradation.  And that’s ok! That’s what SLOs are for!
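
As a hedged sketch of what “that’s what SLOs are for” can mean in practice: track an error budget against a target and only page when a meaningful chunk of the budget is gone. The 99.9% target and the paging threshold below are made-up numbers for illustration.

    # Illustrative sketch: alert on error-budget burn, not on every anomaly.
    SLO_TARGET = 0.999  # e.g. 99.9% of requests should succeed over the window

    def error_budget_remaining(total_requests, failed_requests):
        """Fraction of the window's error budget still unspent (can go negative)."""
        allowed_failures = total_requests * (1 - SLO_TARGET)
        if allowed_failures == 0:
            return 1.0
        return 1 - (failed_requests / allowed_failures)

    def should_page(total_requests, failed_requests, threshold=0.25):
        """Wake someone up only when most of the budget is burned, not on every blip."""
        return error_budget_remaining(total_requests, failed_requests) < threshold

    # Example: 1,000,000 requests, 800 failures -> 80% of the budget burned; page.
    print(should_page(1_000_000, 800))  # True
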

Once we know about a possible failure, we can monitor for that failure and write tests for that failure.  That means most of the failures we do ship to prod are going to be unknown-unknowns. We can’t predict them, we shouldn’t try.  But we can often spot them.

That’s why the best, most efficient, most practical, least damaging way to identify breaking changes is for every software developer to merge their code, deploy it, and immediately go look at it in production, comparing it to the last version.  You just wrote it so it's fresh in your mind, you have all the context. Did you ship what you think you just shipped, is it working the way you expected it to work? Do you see anything else that looks unexpected or suspicious? This simple checklist will catch a high percentage of the logic problems and other subtle bugs you ship.
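
Here is a small sketch of what that post-deploy look can amount to (names and thresholds are assumptions, not from the talk): take recent raw events tagged with a build ID and flag the deploy if the new build errors more or runs noticeably slower than the previous one.

    # Illustrative sketch: compare the freshly deployed build against the
    # previous one using raw per-request events (as in the earlier sketch).
    from collections import defaultdict

    def summarize_by_build(events):
        """Aggregate raw events at read time, grouped by build_id."""
        builds = defaultdict(lambda: {"count": 0, "errors": 0, "durations": []})
        for e in events:
            b = builds[e["build_id"]]
            b["count"] += 1
            b["errors"] += 1 if e["status"] == "error" else 0
            b["durations"].append(e["duration_ms"])
        return {
            build_id: {
                "error_rate": b["errors"] / b["count"],
                "p50_ms": sorted(b["durations"])[len(b["durations"]) // 2],
            }
            for build_id, b in builds.items()
        }

    def looks_suspicious(events, old_build, new_build, slack=1.5):
        """Flag the deploy if the new build errors more or is clearly slower."""
        summary = summarize_by_build(events)
        old, new = summary[old_build], summary[new_build]
        return (new["error_rate"] > old["error_rate"] * slack
                or new["p50_ms"] > old["p50_ms"] * slack)
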

Question: 

What do you want someone who comes to this talk to be able to walk away with?

Answer: 

There's a lot of fear around production. There's a lot of scariness, a lot of association with the masochism and lack of sleep that operations has historically been known for (that’s on us: sorry!). But it doesn't have to be so scary.  You should want to do this.

Owning your own code in production should be something that you want to do, because it leads to better code, higher quality of services, happier users, stronger teams … even better quality of life for the engineers who own it.  There is plenty of science showing that autonomy and empowerment make people love their work. When ownership goes up, bugs go down and frustration goes down, and free time for product development goes up too. It’s a virtuous feedback loop: everyone wins.  So I want to show you the steps to get there.

Speaker: Charity Majors

Co-Founder @Honeycombio, formerly DevOps @ParseIT/@Facebook

Charity is a cofounder and engineer at Honeycomb.io, a startup that blends the speed of time series with the raw power of rich events to give you interactive, iterative debugging of complex systems. She has worked at companies like Facebook, Parse, and Linden Lab, as a systems engineer and engineering manager, but always seems to end up responsible for the databases too. She loves free speech, free software and a nice peaty single malt.


Tracks

  • Architectures You've Always Wondered About

    Next-gen architectures from the most admired companies in software, such as Netflix, Google, Facebook, Twitter, & more

  • Machine Learning for Developers

    AI/ML is more approachable than ever. Discover how deep learning and ML are being used in practice. Topics include: TensorFlow, TPUs, Keras, PyTorch & more. No PhD required.

  • Production Readiness: Building Resilient Systems

    Making systems resilient involves people and tech. Learn about strategies being used from chaos testing to distributed systems clustering.

  • Regulation, Risk and Compliance

    With so much uncertainty, how do you bulkhead your organization and technology choices? Learn strategies for dealing with uncertainty.

  • Languages of Infrastructure

    This track explores languages being used to code the infrastructure. Expect practices on toolkits and languages like Cloudformation, Terraform, Python, Go, Rust, Erlang.

  • Building & Scaling High-Performing Teams

    To have a high-performing team, everybody on it has to feel and act like an owner. Organizational health and psychological safety are foundational underpinnings to support ownership.

  • Evolving the JVM

    The JVM continues to evolve. We’ll look at how things are evolving. Covering Kotlin, Clojure, Java, OpenJDK, and Graal. Expect polyglot, multi-VM, performance, and more.

  • Trust, Safety & Security

    Privacy, confidentiality, safety and security: learning from the frontlines.

  • JavaScript & Transpiler/WebAssembly Track

    JavaScript is the language of the web. Latest practices for JavaScript development and how transpilers are affecting the way we work. We’ll also look at the work being done with WebAssembly.

  • Living on the Edge: The World of Edge Compute From Device to Application Edge

    Applied, practical & real-world deep-dive into industry adoption of OS, containers and virtualization, including Linux on.

  • Software Supply Chain

    Securing the container image supply chain (containers + orchestration + security + DevOps).

  • Modern CS in the Real World

    Thoughts pushing software forward, including consensus, CRDTs, formal methods & probabilistic programming.

  • Tech Ethics: The Intersection of Human Welfare & STEM

    What does it mean to be ethical in software? Hear how the discussion is evolving and what is being said in ethics.

  • Optimizing Yourself: Human Skills for Individuals

    Better teams start with a better self. Learn practical skills for ICs.

  • Modern Data Architectures

    Today’s systems move huge volumes of data. Hear how places like LinkedIn, Facebook, Uber and more built their systems and learn from their mistakes.

  • Practices of DevOps & Lean Thinking

    Practical approaches using DevOps and a lean approach to delivering software.

  • Microservices Patterns & Practices

    What's the last mile for deploying your service? Learn techniques from the world's most innovative shops on managing and operating Microservices at scale.

  • Bare Knuckle Performance

    Killing latency and getting the most out of your hardware