
Presentation: Future of Data Engineering

Track: Modern Data Architectures

Location: Ballroom BC

Duration: 10:35am - 11:25am


What You’ll Learn

  1. Hear about the current state of the art in data pipelines and data warehousing.
  2. Find out about solutions to current problems in data streaming and warehousing.


The current generation of data engineering has left us with data pipelines, data warehouses, and machine learning platforms that are largely batch-based and centrally managed. They're often manually operated, and integrating new systems can be cumbersome. Over the next few years, a number of trends will require us to rethink how and what we build: data is now real-time, companies are running many database technologies, teams are demanding more control of their data, and regulatory policy has begun dictating how and when we store data. This talk will present a vision of what it will take for data engineers to deliver a next-generation data ecosystem.


What is the work you're doing today?


I am a distinguished software engineer at WePay, a payment processing company that was acquired by JPMorgan Chase a couple of years ago. Most of what I focus on there is data infrastructure and data engineering. More broadly, I also do architectural design and work with other teams, but my primary focus is the data space. That involves playing in two different ecosystems: one is the data warehousing ecosystem and the other is our streaming and stream processing ecosystem. On the batch side of things, the offline world, we have BigQuery, which is our primary data warehouse at WePay, and Apache Airflow, which is our job scheduler. On the streaming side it's essentially Apache Kafka plus a bunch of associated infrastructure surrounding it, such as Kafka Connect, where we run a couple of connectors for MySQL. On the other end is KCBQ, a connector we wrote to stream data from Kafka into BigQuery. We string all of this together to get a low-latency, real-time data warehouse, where we can see much of what our data looks like in production within a matter of milliseconds.
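The talk doesn't show WePay's actual connector configuration, but as a rough sketch of how a pipeline like this is wired up, connectors are registered with Kafka Connect's REST API. Everything below (hostnames, credentials, connector and topic names) is a placeholder, and Debezium is one common choice of MySQL change-data-capture source, not necessarily the one WePay runs:

```shell
# Register a MySQL change-data-capture source with a Kafka Connect cluster.
# All names, hosts, and credentials are hypothetical placeholders.
curl -X POST http://kafka-connect:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
    "name": "mysql-source",
    "config": {
      "connector.class": "io.debezium.connector.mysql.MySqlConnector",
      "database.hostname": "mysql",
      "database.port": "3306",
      "database.user": "replicator",
      "database.password": "secret",
      "topic.prefix": "db"
    }
  }'

# Register a BigQuery sink (KCBQ) to stream those topics into the warehouse.
curl -X POST http://kafka-connect:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
    "name": "bigquery-sink",
    "config": {
      "connector.class": "com.wepay.kafka.connect.bigquery.BigQuerySinkConnector",
      "topics": "db.payments.transactions",
      "project": "my-gcp-project",
      "defaultDataset": "warehouse"
    }
  }'
```

With both connectors running, row changes in MySQL flow through Kafka and land in BigQuery continuously, which is what makes the low-latency warehouse possible.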


What are the goals you have for the talk?


My primary goal for the talk is to survey the current state of the art when it comes to data pipelines, ETL, and data warehousing, and then to take a look forward over the next couple of years and hopefully elicit some thought and discussion around the direction that I think we're going. The two primary areas where I think we're going are towards more real-time data pipelines and towards decentralized and automated data warehouse management.

I think the low-latency work is primarily being driven by a renaissance in streaming and stream processing, and I think the decentralization and automation will be driven in large part by government regulation such as GDPR and CCPA.


So you think regulatory policy will have an impact on how we store and use data?


I do, and because I'm part of the financial industry I get a bit of a preview of how regulatory policy can affect data privacy. More broadly, Silicon Valley and beyond have gotten a flavor of it with GDPR, and CCPA is coming up around the corner, but there's a wide array of directions this can go. In reality, if you have a large-scale data warehouse or data pipeline, you need to be able to track sensitive data. That requires everything from data lineage to data cataloging, not only to keep track of the data but to make sure only the appropriate people can get access to it. At some point you have to automate things, because it becomes too large for any one person, team, or group to manage. And once you have it automated, the next logical step is, "Well, why do you need a centralized team to manage this? Can't you just set up policies that will allow teams to use the automated tools themselves?"
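As a toy illustration of the policy automation described above (all table, column, and role names here are hypothetical, not WePay's actual tooling), a data catalog might tag columns as sensitive and let owning teams set access policies that are enforced automatically:

```python
# Toy sketch of policy-driven access to cataloged data.
# All names below are hypothetical illustrations.

SENSITIVE_TAGS = {"pii", "payment"}

# A minimal data catalog: table -> {column: set of tags}.
catalog = {
    "payments": {
        "card_number": {"pii", "payment"},
        "amount": set(),
    },
}

# Policy set by the owning team rather than a central data-engineering
# group: which sensitive tags each role is allowed to read.
policy = {
    "fraud_analyst": {"pii", "payment"},
    "dashboard_bot": set(),
}

def can_read(role: str, table: str, column: str) -> bool:
    """A role may read a column only if it is granted every sensitive tag on it."""
    tags = catalog[table][column] & SENSITIVE_TAGS
    return tags <= policy.get(role, set())
```

For example, `can_read("dashboard_bot", "payments", "amount")` succeeds while `can_read("dashboard_bot", "payments", "card_number")` does not, with no central team in the loop.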


What do you want people to leave the talk with?


Hopefully they'll leave the talk with a good understanding of where the current state of the art is when it comes to data pipelines and data warehousing, and also some thoughts planted around where we might go and how we might solve those problems.

Speaker: Chris Riccomini

Distinguished Engineer @WePay

Software engineer. WePay, LinkedIn, PayPal. Co-author Apache Samza. Committer/PMC Apache Airflow.

