Presentation: Models in Minutes not Months: AI as Microservices

Track: The Practice & Frontiers of AI

Location: Seacliff ABC

Duration: 2:55pm - 3:45pm

Day of week: Monday

Level: Intermediate

Persona: Architect, Backend Developer, Data Engineering, Data Scientist, Developer, DevOps Engineer, General Software, ML Engineer

What You’ll Learn

  • Learn how Salesforce built an AI platform that scales to thousands of customers.
  • Hear about the fundamental parts of the Einstein Platform and how it handles automated data ingestion, machine learning, monitoring, and alerting.
  • Deepen your understanding of what it takes to bring a first model into production.

Abstract

Companies are redefining their businesses by building models and learning from data. Whether it is using data science to predict the best sales and marketing targets, automating digital customer interactions with bots, or reducing waste in logistics and manufacturing, artificial intelligence can improve your business once it is deployed.

Serving up good predictions at the right time to drive the appropriate action is hard. It requires setting up data streams, transforming data, building models, and delivering predictions. Most teams approach this by building a single model and realizing along the way that data science is only the beginning: the engineering and infrastructure required to maintain even one model and ship its predictions present still more challenges.

Trying to replicate this success for more models or customers is even more difficult. Most approach it by building a handful of additional models, painstakingly addressing challenges with one-off fixes for growing data volumes, differences in data, changes in process, and so on. Scaling this way to thousands of customers becomes impossible.

At Salesforce we built the Einstein Platform to automate and scale artificial intelligence to thousands of customers, each with multiple models. Automated data ingestion, automated machine learning, instrumentation, and intelligent monitoring and alerting make it possible to serve the varied needs of many different businesses. In this talk we will cover the nuts and bolts of the system and share how we learned to solve for scale and variability with a fully operational machine learning platform.
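
To make the per-customer automation described above concrete, here is a minimal sketch in Python with scikit-learn of what one automated training step might look like. The names (train_for_customer, MODEL_REGISTRY) and the quality threshold are illustrative assumptions, not the Einstein Platform's actual design.

    # Hypothetical sketch: train and gate one model per customer.
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    MODEL_REGISTRY = {}  # stand-in for a real model store, keyed by customer

    def train_for_customer(customer_id, X, y):
        """Fit a model on one customer's data and register it for serving."""
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.2, random_state=0
        )
        model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        model.fit(X_train, y_train)
        auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
        if auc >= 0.7:  # promote only if the model clears a quality bar
            MODEL_REGISTRY[customer_id] = model
        return auc

Running the same step for every customer, on that customer's own data, is what turns one model into a fleet of models, which then needs the monitoring and alerting described above.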

Interview

Question: 
I cannot go to any Data Conference and not hear about the Einstein Platform. Why?
Answer: 

Salesforce is democratizing AI with Einstein. Any company and any business user should be able to use AI, regardless of size.

My team is responsible for embedding advanced AI capabilities in the Salesforce Platform (in fields, objects, workflows, components, and more) so everyone will be able to build AI-powered apps. At Salesforce, we're not just building one model; we're building frameworks that can be customized by any Salesforce customer. If QCon were a customer, it would have its own model built from its own data. In fact, Salesforce is a customer of itself: we have our own model built from our own data.

You don't want to need new data scientists or an army of engineers to build a new app every single time; customization should be available through clicks, not code. My team works with our data scientists within Salesforce to build these new applications so that serving every customer with their own AI is automated and easy.

Question: 
What's the motivation for this talk?
Answer: 

There are a lot of talented data scientists and engineers who, given access to some data, use data science to build one model. That's where things usually end. What's challenging is pushing the data back out: a model is useless if its prediction cannot be served to the person who needs it, at the right time to react to it.

I think everyone underestimates the amount of effort involved in this process. This talk is about helping people who are in the process of serving predictions to customers by sharing how Salesforce has done it at scale. It is also about helping anybody who is just getting started to plan ahead. Otherwise, all the effort you put into building a beautiful model and obtaining data will just end up in a PowerPoint deck somewhere.

Question: 
Can you give one example of a pitfall someone can learn to avoid?
Answer: 

One of the easiest things for people to overlook is alerting. You should have systems that can detect when there is a problem: scores aren't being produced, scores look strange, or your data falls outside the expected range and things go haywire. That's what can happen at the very end of the process. Then there are all the obvious ones on the front end as well; essentially, you need to think about how you let a user configure and consume the data and the results in an intuitive way.
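
As a minimal sketch of that kind of check, the snippet below (Python, standard library only) verifies that scores were produced at all, that they stay within an expected range, and that they have not drifted from a baseline. The alert hook, thresholds, and baseline are assumptions for illustration, not how Salesforce implements it.

    # Hypothetical sketch of basic score alerting.
    import statistics

    def alert(message):
        # Stand-in for real paging/notification infrastructure.
        print(f"ALERT: {message}")

    def check_scores(scores, expected_low=0.0, expected_high=1.0,
                     baseline_mean=0.3, max_mean_shift=0.15):
        """Alert when scores are missing, out of range, or drifting."""
        if not scores:
            alert("no scores produced in this run")
            return
        if min(scores) < expected_low or max(scores) > expected_high:
            alert("scores outside the expected range")
        if abs(statistics.mean(scores) - baseline_mean) > max_mean_shift:
            alert("score distribution has shifted from the baseline")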

Question: 
Salesforce has deep insight, through metadata, into its customers' data. How do you do this when you don't have that insight in a generic way?
Answer: 

This talk is not aimed at helping people build something as automated as Salesforce's platform. But at the very least it's important to be able to work with data sets in some reasonable way, and to focus on the alerts that bring your attention to the right place.

Question: 
Sometimes engineers do not fully understand the ML math, and scientists do not understand the engineering that makes this work. Is that what you're talking about with alerting and observation?
Answer: 

Yes, that is part of it. For an engineer entering this arena, assuming upfront that everything will be perfect and that the model will work the first time is not a good assumption. I came from a data science background, and I had to learn the other elements. It's also very clear to me that folks who come from engineering either think they have to abandon everything they've learned in engineering, or feel that if you build the model once, you're all set.

Question: 
Who is the persona you're talking to?
Answer: 

I'm talking to somebody who's designing the system: an architect, a data scientist, or an engineer going into this process. There's the term data scientist and there's the term machine learning engineer, and at some point there needs to be a blended space in between. I don't think that exists yet. But there are people who want to be in that space, and there are people who want to architect an environment for them.

Question: 
What do you want someone to walk away with from your talk?
Answer: 

I want them to walk away with an expanded view of what they need to do to get a model out and serving their customers. I want to make sure people leave having identified at least one thing that is missing from their pipeline today. My hope is that I will share something new.

Speaker: Sarah Aerni

Senior Manager, Data Science @Salesforce

Sarah Aerni is a Senior Manager of Data Science at Salesforce Einstein, where she leads teams building AI-powered applications across the Salesforce platform. Prior to Salesforce she led the healthcare & life science and Federal teams at Pivotal. Sarah obtained her PhD from Stanford University in Biomedical Informatics, performing research at the interface of biomedicine and machine learning. She also co-founded a company offering expert services in informatics to both academia and industry.

Tracks

  • Architectures You've Always Wondered About

    Architectural practices from the world's most well-known properties, featuring startups, massive scale, evolving architectures, and software tools used by nearly all of us.

  • Going Serverless

    Learn about the state of Serverless & how to successfully leverage it! Lessons learned in the track hit on security, scalability, IoT, and offer warnings to watch out for.

  • Microservices: Patterns and Practices

    Stories of success and failure building modern Microservices, including event sourcing, reactive, decomposition, & more.

  • DevOps: You Build It, You Run It

    Pushing DevOps beyond adoption into cultural change. Hear about designing resilience, managing alerting, CI/CD lessons, & security. Features lessons from open source, LinkedIn, Netflix, Financial Times, & more.

  • The Art of Chaos Engineering

    Failure is going to happen - Are you ready? Chaos engineering is an emerging discipline - What is the state of the art?

  • The Whole Engineer

    Success as an engineer is more than writing code. Hear inward looking thoughts on inclusion, attitude, leadership, remote working, and not becoming the brilliant jerk.

  • Evolving Java

    Java continues to evolve & change. Track covers Spring 5, async, Kotlin, serverless, the 6-month cadence plans, & AI/ML use cases.

  • Security: Attacking and Defending

    Offensive and defensive security evolution that application developers should know about, including SGX enclaves, effects of AI, software exploitation techniques, & crowd defense.

  • The Practice & Frontiers of AI

    Learn about machine learning in practice and on the horizon: ML at Quora, Uber's Michelangelo, ML workflows with Netflix Meson, plus topics on bots, conversational interfaces, automation, and deployment practices in the space.

  • 21st Century Languages

    Compile to Native, Microservices, Machine learning... tailor-made languages solving modern challenges, featuring use cases around Go, Rust, C#, and Elm.

  • Modern CS in the Real World

    Applied trends in computer science that are likely to affect software engineers today. Topics include category theory, crypto, CRDTs, logic-based automated reasoning, and more.

  • Stream Processing In The Modern Age

    Compelling applications of stream processing using Flink, Beam, Spark, Strymon & recent advances in the field, including Custom Windowing, Stateful Streaming, SQL over Streams.  
