Metrics for MLOps Platforms

Many companies are investing heavily in their ML platforms, either building something in-house or working with vendors. How do we know whether an ML platform is any good? How do we compare different platforms? This talk analyzes metrics for evaluating ML platforms and discusses best practices for building a good ML platform according to these metrics.

Speaker

Chip Huyen

Co-founder @Claypot AI

Chip Huyen is a co-founder of Claypot AI, a platform for real-time machine learning. Previously, she was with Snorkel AI and NVIDIA. She teaches CS 329S: Machine Learning Systems Design at Stanford. She’s the author of the book Designing Machine Learning Systems (O’Reilly, 2022).

Date

Monday Oct 24 / 02:55PM PDT (50 minutes)

Track

MLOps

From the same track

Session

Ray: The Next Generation Compute Runtime for ML Applications

Monday Oct 24 / 10:35AM PDT

Ray is an open source project that makes it simple to scale any compute-intensive Python workload. Industry leaders like Uber, Shopify, and Spotify are building their next generation ML platforms on top of Ray.

Zhe Zhang

Head of Open Source Engineering @anyscalecompute

Session

Fabricator: End-to-End Declarative Feature Engineering Platform

Monday Oct 24 / 11:50AM PDT

At DoorDash, the last year has seen a surge in applications of machine learning across various product verticals in our growing business. With this growth, however, our data scientists have faced increasing bottlenecks in their development cycle because of our existing feature engineering process.

Kunal Shah

ML Platform Engineering Manager @DoorDash

Session

An Open Source Infrastructure for PyTorch

Monday Oct 24 / 01:40PM PDT

In this talk we’ll go over tools and techniques for deploying PyTorch in production. The PyTorch organization maintains and supports open source tools for efficient inference (pytorch/serve), job management (pytorch/torchx), and streaming datasets (pytorch/data).

Mark Saroufim

Applied AI Engineer @Meta

Session

Empower Your ML Models with Customers’ Voice

Monday Oct 24 / 04:10PM PDT

ML engineers use A/B testing to iterate on ML models. However, A/B testing has limitations that might not give us all the answers, and it might limit innovation if not used correctly. I’ll share examples from my previous experience and lessons I learned from interviewing 10+ ML engineers.

Daliana Liu

Senior Data Scientist @Predibase and “The Data Scientist Show” Podcast Host

Session

Unconference: MLOps

Monday Oct 24 / 05:25PM PDT

What is an unconference? An unconference is a participant-driven session where attendees propose the topics and lead the discussion. At QCon San Francisco, we’ll have unconferences in most of our tracks.

Shane Hastie

Director of Agile Learning Programs @ICAgile