Flink connector
The Flink connector implements the standard Flink interfaces that let you
create source Tables for reading from, and sink Tables for writing to, stream
layers.
As a result, you can use both relational APIs that Flink supports: the Table
API and SQL. In addition, you can convert a Table to a DataStream and use the
Flink DataStream API.
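The combination of the two APIs can be sketched as follows. This is a minimal, hedged example using only generic Flink APIs (`StreamTableEnvironment`, `sqlQuery`, `toDataStream`); the table name `sensor_events` and its columns are hypothetical placeholders for a table registered through the connector, not part of the connector's actual API.

```java
// Sketch: query a registered table with Flink SQL, then convert the result
// to a DataStream for lower-level processing. Assumes Flink 1.13+ and a
// hypothetical source table "sensor_events" created via the connector.
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class TableToDataStreamSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // Relational side: filter rows with a SQL query on the source table.
        Table result = tableEnv.sqlQuery(
            "SELECT id, payload FROM sensor_events WHERE id IS NOT NULL");

        // Switch to the DataStream API for processing the query result.
        DataStream<Row> stream = tableEnv.toDataStream(result);
        stream.print();

        env.execute("table-to-datastream-sketch");
    }
}
```

The same `Table` could instead be written to a sink Table with `executeInsert`, keeping the whole pipeline in the relational API.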
For information on how to build your app and which dependencies to use, see Dependencies for Stream Pipelines.
Supported layer types, data formats and operations
| Layer Type | Protobuf | Avro | Parquet | Raw (octet-stream) | GeoJSON |
|---|---|---|---|---|---|
| Stream layer | Read, Write | not supported | not supported | Read, Write | not applicable |
| Index layer | Read, Write | Read, Write | Read, Write | Read, Write | not applicable |
| Versioned layer | Read | Read | Read | Read | not applicable |
| Volatile layer | Read, Write | Read, Write | Read, Write | Read, Write | not applicable |
| Interactive Map layer | not applicable | not applicable | not applicable | not applicable | Read, Write |
Configuration
For Flink connector configuration options, see the configuration documentation.
Reading data continuously
Flink can also read data continuously from non-stream layers. For details, see How to continuously read data using Flink.
Related Topics
- How to integrate Flink connector with stream layers
- How to integrate Flink connector with versioned layers
- How to integrate Flink connector with index layers
- How to integrate Flink connector with volatile layers
- How to integrate Flink connector with interactive map layers
- How to continuously read data using Flink