Presentation: Stop Talking & Listen; Practices for Creating Effective Customer SLOs

Track: Production Readiness: Building Resilient Systems

Location: Seacliff ABC

Duration: 4:10pm - 5:00pm

Day of week: Wednesday

What You’ll Learn

  1. Find out which pitfalls to avoid when over-measuring or under-measuring.
  2. Learn ways to reduce noise in measurement systems so you can better evaluate their status.

Abstract

In this data-driven age we are constantly collecting and analyzing monumental quantities of data. We want to know everything about our product: how our customers use it, how long they use it, and, more importantly, whether the product is even working. With all this data, we should be able to answer all of these questions. But it turns out that's not always the case. In this talk, we'll discuss some of the common pitfalls that arise from collecting and analyzing service data, such as only using 'out-of-the-box' metrics and not having feedback loops. Then we'll cover some practical tips for reducing noise and increasing effective customer signals with SLOs, and for analyzing customer pain points.

Question: 

What is the work you're doing today?

Answer: 

My job title is Site Reliability Engineer. I work to help scale services and systems to billions of users. But currently I'm in a different but similar role: I've taken the knowledge I gained on teams at Google, and now we try to help Google Cloud customers adopt those same practices and principles.

Question: 

What are the goals of the talk?

Answer: 

The goals of the talk are to highlight some of our key findings about reliability and the way we think about user interactions with our services. I've seen folks assume that if their metrics are doing well then their users aren't having issues, but those assumptions are incorrect. My talk is about customer-focused metrics that tell us how our system is doing from the user's perspective, rather than the metrics you get out of the box.

Question: 

Can you give me an example of a more customer focused metric?

Answer: 

A non-customer-focused one would be measuring utilization, such as CPU and RAM. But if the customer isn't having any issues, CPU and RAM usage isn't actually telling you anything useful. Instead of measuring things like the uptime of your services or machines, what we recommend is looking at your load balancer to see how many requests were made and how many of those requests were successful.
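The request-success measurement described above can be sketched in a few lines. This is a minimal, hypothetical example (the status codes and the 99% target are made up for illustration); in practice the counts would come from your load balancer's metrics, not a hard-coded list.

```python
# Sketch: computing an availability SLI from load balancer request outcomes.
# Hypothetical data: one HTTP status code per request served.
statuses = [200, 200, 503, 200, 500, 200, 200, 200, 200, 200]

total = len(statuses)
# Treat anything below 500 as a success; 5xx responses count as failures.
successful = sum(1 for s in statuses if s < 500)

sli = successful / total          # fraction of requests that succeeded
slo_target = 0.99                 # example SLO: 99% of requests succeed

print(f"SLI: {sli:.2%} (SLO met: {sli >= slo_target})")
```

The point is that the SLI is computed from what users actually experienced (did their requests succeed?) rather than from host-level signals like CPU or RAM.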

Question: 

Are there common pitfalls that you see?

Answer: 

Yes, that's one, along with over-measuring: trying to cover everything. You tend to overlap a lot of things, and you get a lot of false positives that way. Over-alerting and alerting on issues that aren't actionable are common ones as well.

Question: 

What do you want people to leave the talk with?

Answer: 

I would like to see a shift toward more customer-centric reliability. We work with a lot of cloud customers, and we see the same common pitfalls: folks assume that the out-of-the-box stuff is good enough, or they over-measure or under-measure. I want to show people the difference between what that looks like and what customer-focused monitoring looks like. It's surprising how many companies just use the same thing and then tell us that they're burnt out, or that they have all these metrics but don't know where to start troubleshooting. I'd like people to see that this is the change we're trying to implement, and it does work.

Speaker: Cindy Quach

Site Reliability Engineer @Google

Cindy is a Site Reliability Engineer at Google. She's worked on various teams and projects, such as Google's internal Linux distribution, mobile platforms, and virtualization. She currently works on the Customer Reliability Engineering team, helping large-scale GCP customers learn about and implement SRE principles.
