
Presentation: Human-Centric Machine Learning Infrastructure @Netflix

Track: Applied AI & Machine Learning

Location: Ballroom A

Duration: 10:35am - 11:25am

Day of week: Wednesday


Level: Intermediate

Persona: Data Engineering, Data Scientist, ML Engineer


What You’ll Learn


  • Learn the right questions to ask in order to get heterogeneous models into a production environment consistently and reliably.

  • Hear the choices the Netflix Machine Learning Infrastructure team made, and the reasoning behind them, in developing tooling for data scientists.

  • Understand some of the challenges involved, and the solutions adopted, in creating a paved road that takes machine learning models to production.


Netflix has over 100 data scientists applying machine learning to a wide range of business problems from title popularity predictions to quality of streaming optimizations. Our unique culture gives data scientists plenty of freedom to choose the modeling approach, libraries, and even the programming language that will make them productive at solving problems. However, we want to balance this freedom by providing a solid infrastructure for machine learning, ensuring models can be promoted quickly and reliably from prototype to production, and enabling reproducible and easily shareable results.

We started building this infrastructure a little over a year ago with a human-centric mindset. Many existing open-source machine learning frameworks are great at making advanced modeling possible. The job of our ML infrastructure is to make it remarkably easy to apply these frameworks to real business problems at Netflix. We have found that this requires an infrastructure that covers the day-to-day challenges of data scientists holistically, from understanding input data to building trust with consumers of models, not just the parts that are directly related to fitting and scoring models.

Come learn the techniques and underlying principles driving our approach, which you'll be able to adapt and apply to your own use cases.


What is the focus of the work you do at Netflix?


I'm with the machine learning infrastructure team at Netflix. We work with about a hundred data scientists who solve all kinds of business problems. It’s not only video recommendations; we also help answer many other questions to make Netflix an even better experience. We help all these data scientists be more productive and make it easier for them to go from prototyping their models to producing business value.


Netflix has a 'paved path’ approach when it comes to software and microservices. Is it the same thing when it comes to machine learning?


It is very much the same thing. We want to provide a 'paved path' so there's always a very clear, recommended way to do things. This is especially important for data science, since many people come from academia and are very adept at creating really strong theoretical models. But when it comes to actually taking something to production and making it operationally solid, they typically need a lot of help. Integrating with a platform like Netflix can be non-trivial. At the same time, we want to balance that with the idea of freedom and responsibility, so people still have the freedom to choose the exact modeling approach they want to take. The platform has to be flexible.


From a high level, when you talk about a platform for machine learning are you talking about models that you cluster or are you talking about things that you stick into a container and then run across a cluster? What are we talking about for offering a platform for data scientists?


One thing we have noticed is the question of putting machine learning in production is far broader than how you train or score a model. It starts all the way from how you find the data you need, how you do the feature engineering, and how you scale it. Then obviously there is the training and scoring question, like how do you run it in a container management system.

After that, there are questions like how do you integrate the results of your machine learning models with downstream clients, that is, the consumers of those results. Finally, how do you operate the whole pipeline in a way that lets the people who consume the results trust that they are always correct?

If you need to iterate on things, how can you go back to the drawing board and quickly release the next version? There are so many questions outside that narrow question of just training and scoring the model.
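The lifecycle described above — finding data, engineering features, training, scoring, and handing results to consumers — can be sketched as explicit pipeline steps. This is a minimal illustration only; every function and name here is a hypothetical stand-in, not Netflix's actual tooling or APIs.

```python
# Hypothetical sketch of an end-to-end ML pipeline as discrete steps.
# Each step is a stand-in for a far larger real-world component.

def find_data():
    # Stand-in for querying a data warehouse for labeled examples
    # (raw_value, label) pairs.
    return [(1.0, 0), (2.0, 0), (3.0, 1), (4.0, 1)]

def engineer_features(rows):
    # Stand-in for feature engineering: min-max normalize the raw value.
    xs = [x for x, _ in rows]
    lo, hi = min(xs), max(xs)
    return [((x - lo) / (hi - lo), y) for x, y in rows]

def train(examples):
    # Stand-in for model fitting: a threshold "model" at the mean feature.
    mean = sum(x for x, _ in examples) / len(examples)
    return {"threshold": mean}

def score(model, feature):
    # Stand-in for serving: apply the trained model to a new input.
    return 1 if feature > model["threshold"] else 0

def run_pipeline():
    # Each step's output feeds the next. A production system would also
    # version the data, the model artifact, and the results, so that
    # downstream consumers can trust and reproduce what they receive.
    model = train(engineer_features(find_data()))
    return score(model, 0.9)
```

The point of structuring it this way is that training and scoring are just two steps among many; the surrounding steps (data access, feature engineering, publishing, operating) are where most of the infrastructure questions arise.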



What do you use to manage the lifecycle, the CI/CD pipeline of machine learning? Is it custom software or is it open source tooling?


It’s both. Obviously, Netflix has been investing in CI and CD for a long time, and many of the tools they use, like Spinnaker and Titus, are open source. These tools also work well alongside other open-source tools like Kubernetes and whatever you want to use for CI.


Can you give me an example of some of the questions you get from data scientists when you are trying to deploy models?


When it comes to common questions, as boring as it may sound, my experience is that machine learning infrastructure is much more about data than science. Most questions we get are related to data: how do I find the data I need, how do I set up the data pipeline, how do I handle non-trivial amounts of data in Python and R, can I use pandas, can I use R, how do I structure my feature engineering so I can quickly iterate on new ideas? We get many questions in that space.

We get questions about modeling as well, but usually, once you have the data in a beautiful data frame, the data scientists are totally happy to use the tools they know best, like scikit-learn or TensorFlow. So the questions we are seeing are mostly related to the data pipelines.
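To make the "get the data into a beautiful data frame" step concrete, here is a minimal feature-engineering sketch with pandas, which the interview mentions by name. The column names and derived features are hypothetical illustrations, not an actual Netflix schema.

```python
# Hypothetical sketch: turning raw event data into model-ready features
# with pandas. Column names are invented for illustration.
import pandas as pd

def build_features(raw: pd.DataFrame) -> pd.DataFrame:
    # Typical feature-engineering concerns: fill missing values and
    # derive numeric, model-ready columns from raw fields.
    df = raw.copy()
    df["watch_hours"] = df["watch_minutes"].fillna(0) / 60.0
    df["is_weekend"] = df["day_of_week"].isin(["Sat", "Sun"]).astype(int)
    return df[["watch_hours", "is_weekend"]]

raw = pd.DataFrame({
    "watch_minutes": [120.0, None, 30.0],
    "day_of_week": ["Sat", "Mon", "Sun"],
})
features = build_features(raw)
```

Once data is in this shape, it can be handed directly to whichever modeling library the data scientist prefers; the infrastructure's job is everything up to (and around) that point.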

Speaker: Ville Tuulos

Machine Learning Infrastructure Engineer @Netflix

Ville Tuulos is a software architect in the Machine Learning Infrastructure team at Netflix. He has been building ML systems at various startups, including one that he founded, and large corporations for over 15 years. He enjoys exploring and building novel human-computer interfaces for complex domains, as well as low-level systems hacking.

