Declarative Machine Learning: A Flexible, Modular and Scalable Approach for Building Production ML Models

Building ML solutions from scratch is challenging for a variety of reasons: the long development cycles of writing low-level machine learning code and the fast pace of state-of-the-art ML research, to name a few. On the other hand, solutions that fully automate the ML model development process are often opaque and hard to iterate on, causing users to churn. In this talk I'll cover declarative ML systems and how they address key issues, helping shorten the time it takes to bring ML models to production.
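Declarative ML systems let practitioners specify *what* a model should do as configuration, leaving the low-level training code to the framework. As an illustrative sketch only (the field names and values below are hypothetical and not taken from the talk), a declarative config for a text classifier might look like:

```yaml
# Hypothetical declarative ML config: a text classifier
# specified as data, not code. Field names are illustrative.
input_features:
  - name: review_text   # column in the training dataset
    type: text
output_features:
  - name: sentiment     # label column to predict
    type: category
trainer:
  epochs: 10
  batch_size: 128
```

A system consuming such a config can swap encoders, scale out training, or retrain on new data without the user rewriting modeling code, which is the flexibility and iteration speed the abstract describes.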


Speaker

Shreya Rajpal

Founder @Guardrails AI, ML Practitioner with a Decade of Experience in ML Research, Applications, and Infrastructure

Shreya Rajpal is the creator and maintainer of Guardrails AI, an open source platform developed to ensure increased safety, reliability, and robustness of large language models in real-world applications. Her expertise spans a decade in the field of machine learning and AI. Most recently, she was the founding engineer at Predibase, where she led the ML infrastructure team. In earlier roles, she was part of the cross-functional ML team within Apple's Special Projects Group and developed computer vision models for autonomous driving perception systems at Drive.ai.


Date

Monday Oct 24 / 04:10PM PDT ( 50 minutes )

Location

Pacific DEKJ

Track

MLOps

Topics

Machine Learning, YAML, Pipeline, Batch Architectures, Architecture


From the same track

Session Machine Learning

Ray: The Next Generation Compute Runtime for ML Applications

Monday Oct 24 / 10:35AM PDT

Ray is an open source project that makes it simple to scale any compute-intensive Python workload. Industry leaders such as Uber, Shopify, and Spotify are building their next-generation ML platforms on top of Ray.

Zhe Zhang

Head of Open Source Engineering @anyscalecompute, Previously Hadoop/Spark infra Team Manager @LinkedIn

Session Machine Learning

Fabricator: End-to-End Declarative Feature Engineering Platform

Monday Oct 24 / 11:50AM PDT

At DoorDash, the last year has seen a surge in applications of machine learning across various product verticals in our growing business. With this growth, however, our data scientists have faced increasing bottlenecks in their development cycle because of our existing feature engineering process.

Kunal Shah

ML Platform Engineering Manager @DoorDash, Previously ML Platforms & Data Engineering frameworks @Airbnb & @YouTube

Session Machine Learning

An Open Source Infrastructure for PyTorch

Monday Oct 24 / 01:40PM PDT

In this talk we’ll go over tools and techniques to deploy PyTorch in production. The PyTorch organization maintains and supports open source tools for efficient inference (pytorch/serve), job management (pytorch/torchx), and streaming datasets (pytorch/data).

Mark Saroufim

Applied AI Engineer @Meta

Session Machine Learning

Real-Time Machine Learning: Architecture and Challenges

Monday Oct 24 / 02:55PM PDT

Fresh data beats stale data for machine learning applications. This talk discusses the value of fresh data as well as the architectures and challenges of online prediction.

Chip Huyen

Co-founder @Claypot AI, previously @Snorkel AI & @NVIDIA

Session

Unconference: MLOps

Monday Oct 24 / 05:25PM PDT

What is an unconference? At QCon SF, we’ll have unconferences in most of our tracks.

Shane Hastie

Global Delivery Lead for SoftEd and Lead Editor for Culture & Methods at InfoQ.com