Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/twalthr/flink-api-examples
Examples for using Apache Flink® with DataStream API, Table API, Flink SQL and connectors such as MySQL, JDBC, CDC, Kafka.
- Host: GitHub
- URL: https://github.com/twalthr/flink-api-examples
- Owner: twalthr
- Created: 2021-10-11T11:04:02.000Z (about 3 years ago)
- Default Branch: main
- Last Pushed: 2023-09-26T13:55:12.000Z (about 1 year ago)
- Last Synced: 2024-08-02T14:06:41.564Z (3 months ago)
- Topics: apache-flink, data-engineering, flink, flink-examples, flink-sql, stream-processing
- Language: Java
- Homepage:
- Size: 31.3 KB
- Stars: 53
- Watchers: 3
- Forks: 18
- Open Issues: 5
Metadata Files:
- Readme: README.md
README
# Flink API Examples for DataStream API and Table API
The Table API is not a new kid on the block. But the community has worked hard on reshaping its future. Today, it is one
of the core abstractions in Flink next to the DataStream API. The Table API can deal with bounded and unbounded streams
in a unified and highly optimized ecosystem inspired by databases and SQL. Various connectors and catalogs integrate
with the outside world.

But this doesn't mean that the DataStream API will become obsolete any time soon. This repository demos what the Table
API is capable of today. We present how the API solves different scenarios:

- as a batch processor,
- a changelog processor,
- a change data capture (CDC) hub,
- or a streaming ETL tool

with many built-in functions and operators for deduplicating, joining, and aggregating data.
It shows hybrid pipelines in which both APIs interact in symbiosis and contribute their unique strengths.
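As a sketch of the deduplication pattern mentioned above, Flink SQL can keep only the latest row per key with a `ROW_NUMBER()` window. The table and column names below are illustrative placeholders, not taken from this repository's jobs:

```sql
-- Keep only the most recent transaction per t_id
-- (hypothetical table/columns for illustration).
SELECT t_id, t_amount, t_time
FROM (
  SELECT *,
         ROW_NUMBER() OVER (PARTITION BY t_id ORDER BY t_time DESC) AS row_num
  FROM Transactions
)
WHERE row_num = 1;
```

Flink's planner recognizes this pattern and executes it as an efficient deduplication operator rather than a generic over-window aggregation.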
# How to Use This Repository
1. Import this repository into your IDE (preferably IntelliJ IDEA). Select the `pom.xml` file during import to treat it
   as a Maven project.
2. The project uses Flink 1.15.2. All examples are runnable from the IDE. You simply need to execute the `main()`
   method of each example class.
3. In order to make the examples run within IntelliJ IDEA, it is necessary to tick
   the `Add dependencies with "provided" scope to classpath` option in the run configuration under `Modify options`.
4. For the Apache Kafka examples, download and unzip [Apache Kafka](https://kafka.apache.org/downloads). Start up Kafka
   and ZooKeeper:

```
./bin/zookeeper-server-start.sh config/zookeeper.properties &
./bin/kafka-server-start.sh config/server.properties &
```

Run `FillKafkaWithCustomers` and `FillKafkaWithTransactions` to create and fill the Kafka topics with Flink.
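A Kafka-backed table in Flink SQL is declared roughly as follows. The topic name, schema, and broker address here are assumptions for illustration and must be adapted to what the filler jobs actually produce:

```sql
-- Hypothetical Kafka source table; adjust topic and schema
-- to match the data written by FillKafkaWithCustomers.
CREATE TABLE Customers (
  c_id   BIGINT,
  c_name STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'customers',
  'properties.bootstrap.servers' = 'localhost:9092',
  'scan.startup.mode' = 'earliest-offset',
  'format' = 'json'
);
```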
5. For the MySQL CDC example, run `StartMySqlContainer` with an available Docker setup to set up a dummy database
instance. `FillMySqlWithValues` provides a Flink job to update the database tables while the CDC example is running.
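For context, a CDC source table in Flink SQL uses the `mysql-cdc` connector from the flink-cdc-connectors project. The connection details and schema below are placeholders and must match the Docker container started by `StartMySqlContainer`:

```sql
-- Hypothetical MySQL CDC source table; hostname, credentials,
-- database, and table names are illustrative placeholders.
CREATE TABLE CustomersCdc (
  c_id   BIGINT,
  c_name STRING,
  PRIMARY KEY (c_id) NOT ENFORCED
) WITH (
  'connector' = 'mysql-cdc',
  'hostname' = 'localhost',
  'port' = '3306',
  'username' = 'root',
  'password' = 'secret',
  'database-name' = 'mydb',
  'table-name' = 'customers'
);
```

Queries against such a table consume the MySQL binlog as a changelog stream, so updates made by `FillMySqlWithValues` appear as retractions and insertions downstream.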