
Presentation: Human-Centric Machine Learning Infrastructure @Netflix

Track: Applied AI & Machine Learning

Location: Ballroom A

Duration: 10:35am - 11:25am

Day of week: Wednesday

Level: Intermediate

Persona: Data Engineering, Data Scientist, ML Engineer

What You’ll Learn

  • Learn the right questions to ask in order to get heterogeneous models into production in a consistent, reliable way.

  • Hear about the choices the Netflix Machine Learning Infrastructure team made in developing tooling for its data scientists, and the reasoning behind them.

  • Understand some of the challenges in creating a paved road that takes machine learning models to production, and how they were addressed.

Abstract

Netflix has over 100 data scientists applying machine learning to a wide range of business problems from title popularity predictions to quality of streaming optimizations. Our unique culture gives data scientists plenty of freedom to choose the modeling approach, libraries, and even the programming language that will make them productive at solving problems. However, we want to balance this freedom by providing a solid infrastructure for machine learning, ensuring models can be promoted quickly and reliably from prototype to production, and enabling reproducible and easily shareable results.

We started building this infrastructure a little over a year ago with a human-centric mindset. Many existing open-source machine learning frameworks are great at making advanced modeling possible. The job of our ML infrastructure is to make it remarkably easy to apply these frameworks to real business problems at Netflix. We have found that this requires an infrastructure that covers the day-to-day challenges of data scientists holistically, from understanding input data to building trust with consumers of models, not just the parts that are directly related to fitting and scoring models.

Come learn the techniques and underlying principles driving our approach, which you'll be able to adapt and apply to your own use cases.

Question: 

What is the focus of the work you do at Netflix?

Answer: 

I'm with the machine learning infrastructure team at Netflix. We work with about a hundred data scientists who solve all kinds of business problems. It’s not only video recommendations; we also help answer many other questions to make Netflix an even better experience. We help all these data scientists be more productive and make it easier for them to go from prototyping their models to producing business value.

Question: 

Netflix has a 'paved path' approach when it comes to software and microservices. Is it the same thing when it comes to machine learning?

Answer: 

It is very much the same thing. We want to provide a 'paved path' so there's always a very clear, recommended way to do things. This is especially important for data science since there are many people from academia who are very adept at creating really strong theoretical models but when it comes to actually taking something to production and making it operationally solid, typically they require a lot of help. Integrating with a platform like Netflix can be non-trivial. At the same time, we want to balance that with the idea of freedom and responsibility, so people still have the freedom to choose the exact modeling approach they want to take. The platform has to be flexible.

Question: 

From a high level, when you talk about a platform for machine learning are you talking about models that you cluster or are you talking about things that you stick into a container and then run across a cluster? What are we talking about for offering a platform for data scientists?

Answer: 

One thing we have noticed is that the question of putting machine learning in production is far broader than how you train or score a model. It starts all the way from how you find the data you need, how you do the feature engineering, and how you scale it. Then obviously there is the training and scoring question, like how you run it in a container management system.

After that, there are questions like how you integrate the results of your machine learning models with downstream clients, the consumers. Finally, how do you operate the whole pipeline in a way that the people who consume the results can trust that they are always correct?

If you need to iterate on things, how can you go back to the drawing board and quickly release the next version? There are so many questions outside the narrow question of just training and scoring the model.
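To make that broader scope concrete, here is a minimal sketch in Python of the stages described above: fetching data, feature engineering, training, scoring, a basic validation check so consumers can trust the results, and publishing to downstream consumers. This is not Netflix's actual tooling; the file, column, and function names are hypothetical placeholders, and the pipeline is deliberately simplified.

```python
# Minimal sketch of an end-to-end model pipeline: the training and scoring
# steps are only a small part of the whole. All names here are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression


def fetch_data() -> pd.DataFrame:
    # In practice this might query a data warehouse; here we read a CSV extract.
    return pd.read_csv("titles.csv")


def engineer_features(df: pd.DataFrame) -> pd.DataFrame:
    # Derive model inputs from raw columns (hypothetical column names).
    df = df.copy()
    df["views_per_day"] = df["total_views"] / df["days_since_launch"].clip(lower=1)
    return df


def train(df: pd.DataFrame) -> LogisticRegression:
    # The "science" part: fit a simple popularity classifier.
    return LogisticRegression().fit(df[["views_per_day"]], df["is_popular"])


def score(model: LogisticRegression, df: pd.DataFrame) -> pd.DataFrame:
    out = df[["title_id"]].copy()
    out["popularity_score"] = model.predict_proba(df[["views_per_day"]])[:, 1]
    return out


def validate(results: pd.DataFrame) -> None:
    # Cheap sanity checks so downstream consumers can trust the output.
    assert results["popularity_score"].between(0, 1).all()
    assert not results["title_id"].duplicated().any()


def publish(results: pd.DataFrame) -> None:
    # Hand results off to downstream consumers, e.g. by writing a shared table.
    results.to_csv("popularity_scores.csv", index=False)


if __name__ == "__main__":
    raw = fetch_data()
    features = engineer_features(raw)
    model = train(features)
    results = score(model, features)
    validate(results)
    publish(results)
```

In a real setup each of these steps would also need to be scheduled, scaled, monitored, and easy to re-run when the next model version is ready, which is exactly the part that falls outside training and scoring.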

Question: 

What do you use to manage the lifecycle, the CI/CD pipeline of machine learning? Is it custom software or is it open source tooling?

Answer: 

It’s both. Obviously, Netflix has been investing in CI and CD for a long time, and many of the tools they use, like Spinnaker and Titus, are open source. They are also closely related to other open-source tools like Kubernetes and whatever you want to use for CI.

Question: 

Can you give me an example of some of the questions you get from data scientists when you are trying to deploy models?

Answer: 

When it comes to common questions, as boring as it may sound, my experience is that machine learning infrastructure is much more about data than science. Most questions we get are related to data: how do I find the data I need, how do I set up the data pipeline, how do I handle somewhat non-trivial amounts of data in Python and R, can I use pandas, can I use R, how do I structure my feature engineering so I can quickly iterate on new ideas? We get many questions in that space.

We get questions about modeling as well, but usually, once you have the data in a beautiful data frame, the data scientists are totally happy to use the tools they know best, like scikit-learn or TensorFlow. So the questions we are seeing are mostly related to the data pipelines.
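As an illustration of that point, the sketch below shows the kind of data-side work those questions are about: reading a large extract in chunks with pandas and aggregating it into a small feature table. The file and column names are hypothetical, and this is only one of many possible approaches.

```python
# Sketch of the data-pipeline side of the work: get a large raw extract into a
# tidy data frame. File and column names are hypothetical; assumes the file has
# at least one row.
import pandas as pd

# Stream a large CSV in manageable chunks instead of loading it all at once.
chunks = pd.read_csv("viewing_events.csv", chunksize=1_000_000)

# Aggregate per-title view counts chunk by chunk.
counts = None
for chunk in chunks:
    chunk_counts = chunk.groupby("title_id").size()
    counts = chunk_counts if counts is None else counts.add(chunk_counts, fill_value=0)

features = counts.rename("total_views").reset_index()

# From here on, the modeling step is the familiar part: hand the data frame to
# whichever library the data scientist prefers (scikit-learn, TensorFlow, etc.).
print(features.head())
```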

Speaker: Ville Tuulos

Machine Learning Infrastructure Engineer @Netflix

Ville Tuulos is a software architect in the Machine Learning Infrastructure team at Netflix. He has been building ML systems at various startups, including one that he founded, and large corporations for over 15 years. He enjoys exploring and building novel human-computer interfaces for complex domains, as well as low-level systems hacking.
