MLOps is an emerging engineering discipline that combines ML, DevOps, and Data Engineering to provide the automation and infrastructure that speed up the AI/ML development lifecycle and bring models to production faster. It is one of the most widely discussed topics in the ML practitioner community.
In this track, we will explore the best practices and innovations the ML community is developing. Key areas of focus include declarative ML systems, distributed model training, scalable and low-latency model inference, and ML observability to protect against downside risk and preserve ROI.
From this track
Ray: The Next Generation Compute Runtime for ML Applications
Monday Oct 24 / 10:35AM PDT
Ray is an open source project that makes it simple to scale any compute-intensive Python workload. Industry leaders like Uber, Shopify, and Spotify are building their next-generation ML platforms on top of Ray.
Head of Open Source Engineering @anyscalecompute, Previously Hadoop/Spark infra Team Manager @LinkedIn
Fabricator: End-to-End Declarative Feature Engineering Platform
Monday Oct 24 / 11:50AM PDT
At DoorDash, the last year has seen a surge in applications of machine learning across various product verticals in our growing business. With this growth, however, our data scientists have faced increasing bottlenecks in their development cycle caused by our existing feature engineering process.
ML Platform Engineering Manager @DoorDash, Previously ML Platforms & Data Engineering frameworks @Airbnb & @YouTube
An Open Source Infrastructure for PyTorch
Monday Oct 24 / 01:40PM PDT
In this talk we’ll go over tools and techniques to deploy PyTorch in production. The PyTorch organization maintains and supports open source tools for efficient inference (pytorch/serve), job management (pytorch/torchx), and streaming datasets (pytorch/data).
Applied AI Engineer @Meta
Real-Time Machine Learning: Architecture and Challenges
Monday Oct 24 / 02:55PM PDT
Fresh data beats stale data for machine learning applications. This talk discusses the value of fresh data, as well as the architectures and challenges of online prediction.
Co-founder @Claypot AI, previously @Snorkel AI & @NVIDIA
Declarative Machine Learning: A Flexible, Modular and Scalable Approach for Building Production ML Models
Monday Oct 24 / 04:10PM PDT
Building ML solutions from scratch is challenging for a variety of reasons: the long development cycles of writing low-level machine learning code and the fast pace of state-of-the-art ML methods, to name a few.
Founding Engineer @Predibase
Monday Oct 24 / 05:25PM PDT
What is an unconference? At QCon SF, we’ll have unconferences in most of our tracks.
Global Delivery Lead for SoftEd and Lead Editor for Culture & Methods at InfoQ.com