Stream and Batch Processing Convergence in Apache Flink

The idea of executing streaming and batch jobs on a single engine has been around for a while. People often say that batch is a special case of streaming, and conceptually it is. In practice, however, there are many gaps between streaming and batch processing in resource management, scheduling, failure recovery, aggregation, shuffling, and more. Apache Flink has gone through a long journey to close these gaps and has become a leading convergence engine. This talk introduces the challenges as well as the way Flink tackles them.
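As a minimal sketch of the unification the talk covers (not code from the talk itself): in Flink's DataStream API, the same pipeline can run as either a streaming or a batch job by toggling the runtime execution mode, while the engine adapts scheduling, shuffling, and recovery underneath. The class name and sample data below are illustrative.

import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

// Illustrative example: one word-count pipeline, two execution modes.
public class UnifiedWordCount {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // The pipeline definition stays the same; switching the runtime mode changes
        // how the engine schedules tasks, shuffles data, and recovers from failures.
        env.setRuntimeMode(RuntimeExecutionMode.BATCH); // or RuntimeExecutionMode.STREAMING

        env.fromElements("flink unifies stream and batch", "batch is a bounded stream")
            .flatMap((String line, Collector<Tuple2<String, Integer>> out) -> {
                for (String word : line.split("\\s+")) {
                    out.collect(Tuple2.of(word, 1));
                }
            })
            .returns(Types.TUPLE(Types.STRING, Types.INT)) // declare types lost to lambda erasure
            .keyBy(value -> value.f0)
            .sum(1)
            .print();

        env.execute("unified-word-count");
    }
}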


Speaker

Jiangjie (Becket) Qin

Principal Staff Software Engineer @LinkedIn, Data Infra Engineer, PMC Member of Apache Kafka & Apache Flink, Previously @Alibaba and @IBM

Becket is currently a Principal Staff Software Engineer at LinkedIn. He started working on Apache Kafka at LinkedIn after graduating from Carnegie Mellon University. He then joined Alibaba, where he led the Flink team, focusing on Flink SQL, PyFlink, Flink ML, and connectors, among other areas. He returned to LinkedIn in 2022 to drive the effort of stream and batch unification.

Becket is a PMC member of Apache Kafka and Apache Flink.


From the same track

Session: Platform Engineering

Beyond Durability: Enhancing Database Resilience and Reducing the Entropy Using Write-Ahead Logging at Netflix

Tuesday Nov 19 / 10:35AM PST

In modern database systems, durability guarantees are crucial but often insufficient in scenarios involving extended system outages or data corruption.


Prudhviraj Karumanchi

Staff Software Engineer at Data Platform @Netflix, Building Large-Scale Distributed Storage Systems and Cloud Services, Previously @Oracle, @NetApp, and @EMC/Dell


Vidhya Arvind

Staff Software Engineer @Netflix Data Platform, Founding Member of Data Abstractions at Netflix, Previously @Box and @Verizon

Session: Architecture

OpenSearch Cluster Topologies for Cost-Saving Autoscaling

Tuesday Nov 19 / 11:45AM PST

The indexing rates of many clusters follow a fluctuating pattern, be it day/night, weekday/weekend, or any other cycle in which the cluster shifts between active and less active periods. In these cases, how does one scale the cluster?


Amitai Stern

Engineering Manager @Logz.io, Managing Observability Data Storage of Petabyte Scale, OpenSearch Leadership Committee Member and Contributor

Session: Data Pipelines

Efficient Incremental Processing with Netflix Maestro and Apache Iceberg

Tuesday Nov 19 / 03:55PM PST

Incremental processing, an approach that processes only new or updated data in workflows, substantially reduces compute resource costs and execution time, leading to fewer potential failures and less need for manual intervention.


Jun He

Staff Software Engineer @Netflix, Managing and Automating Large-Scale Data/ML Workflows, Previously @Airbnb and @Hulu

Session

Stream All the Things — Patterns of Effective Data Stream Processing

Tuesday Nov 19 / 01:35PM PST

Data streaming is a genuinely difficult problem. Despite more than a decade of attempts to simplify it, teams building real-time data pipelines can spend up to 80% of their time optimizing pipelines or fixing downstream output by handling bad data at the lake.


Adi Polak

Director, Advocacy and Developer Experience Engineering @Confluent

Session

Unconference: Shift-Left Data Architecture

Tuesday Nov 19 / 05:05PM PST