How to integrate Flink connector with volatile layers
Create a table sink and table source for a volatile layer
The main entry point of the Flink Connector API is OlpStreamConnectorHelper.
Scala:
import com.here.platform.data.client.flink.scaladsl.OlpStreamConnectorHelper

Java:
import com.here.platform.data.client.flink.scaladsl.OlpStreamConnectorHelper;

An instance of OlpStreamConnectorHelper is used to create a flink.table.api.Schema and build the SQL statement. The following code snippet shows how to create an instance of OlpStreamConnectorHelper, build a flink.table.api.Schema, and create a table with the given schema and options:
Scala:
// define the properties
val sourceProperties =
  Map(
    "olp.layer.query" -> "mt_partition=in=(1,2,3)"
  )

// create the Table Connector Helper Source
val sourceHelper: OlpStreamConnectorHelper =
  OlpStreamConnectorHelper(HRN(inputCatalogHrn),
                           "volatile-layer-protobuf-input",
                           sourceProperties)

val tEnv = StreamTableEnvironment.create(env)

// register the Table Source
tEnv.executeSql(
  s"CREATE TABLE InputTable ${sourceHelper.prebuiltSchema(tEnv).build()} " +
    s"WITH ${sourceHelper.options}")

Java:
OlpStreamConnectorHelper sourceHelper =
    OlpStreamConnectorHelper.create(
        HRN.fromString(inputCatalogHrn), inputLayerId, sourceProperties);
Schema sourceSchema = sourceHelper.prebuiltSchema(tEnv).build();
tEnv.executeSql(
    String.format("CREATE TABLE InputTable %s WITH %s", sourceSchema, sourceHelper.options()));

The source factory supports the following Volatile layer properties:
- olp.layer.query: specifies an RSQL query used to query the volatile layer. If it is not defined, the value "mt_timestamp=ge=0" is used by default, which means that all partitions are read.
- olp.catalog.layer-schema: applicable only to the parquet and avro data formats. It is an Avro schema string in JSON format.
- olp.connector.download-parallelism: the maximum number of blobs that are read in parallel in one Flink task. The number of tasks corresponds to the configured parallelism. As a result, the number of blobs that your pipeline can read in parallel equals the parallelism level times the value of this property. The default value is 10.
- olp.connector.download-timeout: the overall timeout in milliseconds applied to reading a blob from the Blob API. The default value is 300000 milliseconds.
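For example, a source configuration that combines a partition query with explicit download tuning might look like the following sketch; the property values are illustrative, and the inputCatalogHrn variable and layer id are assumed to be the same ones used in the examples above:

// a minimal sketch: source properties combining a query with download tuning
// (values are illustrative; inputCatalogHrn is assumed to be defined as above)
Map<String, String> sourceProperties = new HashMap<>();
sourceProperties.put("olp.layer.query", "mt_partition=in=(1,2,3)");
sourceProperties.put("olp.connector.download-parallelism", "5");   // read up to 5 blobs per task in parallel
sourceProperties.put("olp.connector.download-timeout", "60000");   // 60 s overall timeout per blob read

OlpStreamConnectorHelper sourceHelper =
    OlpStreamConnectorHelper.create(
        HRN.fromString(inputCatalogHrn),
        "volatile-layer-protobuf-input",
        sourceProperties);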
You create a Table Sink in the same way as a Table Source, using OlpStreamConnectorHelper:
Scala:
val sinkHelper: OlpStreamConnectorHelper =
  OlpStreamConnectorHelper(HRN(outputCatalogHrn), "volatile-layer-protobuf-output", Map.empty)

tEnv.executeSql(
  s"CREATE TABLE OutputTable ${sinkHelper.prebuiltSchema(tEnv).build()} " +
    s"WITH ${sinkHelper.options}")

Java:
OlpStreamConnectorHelper sinkHelper =
    OlpStreamConnectorHelper.create(
        HRN.fromString(outputCatalogHrn), outputLayerId, new HashMap<>());
Schema sinkSchema = sinkHelper.prebuiltSchema(tEnv).build();
tEnv.executeSql(
    String.format("CREATE TABLE OutputTable %s WITH %s", sinkSchema, sinkHelper.options()));

The sink factory supports the following properties for Volatile layers:
- olp.catalog.layer-schema: applicable only to the parquet and avro data formats. It is an Avro schema string in JSON format.
- olp.connector.aggregation-window: an interval in milliseconds that defines how often the sink aggregates rows with the same partition id together. The default value is 10000 milliseconds. The property applies only to the avro and parquet formats.
- olp.connector.upload-parallelism: the maximum number of blobs that are written in parallel in one Flink task. The number of tasks corresponds to the configured parallelism. As a result, the number of blobs that your pipeline can write in parallel equals the parallelism level times the value of this property. The default value is 10.
- olp.connector.upload-timeout: the overall timeout in milliseconds applied to writing a blob to the Blob API. The default value is 300000 milliseconds.
- olp.connector.publication-window: defines how often metadata is published to the Publish API. The default value is 1000 milliseconds. If the value is set to -1, metadata is not published.
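Similarly, a sink configuration that overrides the upload and publication defaults could be sketched as follows; the values are illustrative, and outputCatalogHrn and outputLayerId are assumed to be the same variables used in the Java examples above:

// a minimal sketch: sink properties with upload and publication tuning (illustrative values)
Map<String, String> sinkProperties = new HashMap<>();
sinkProperties.put("olp.connector.aggregation-window", "5000");   // aggregate rows every 5 s (avro/parquet only)
sinkProperties.put("olp.connector.upload-parallelism", "20");     // write up to 20 blobs per task in parallel
sinkProperties.put("olp.connector.upload-timeout", "120000");     // 120 s overall timeout per blob write
sinkProperties.put("olp.connector.publication-window", "2000");   // publish metadata every 2 s

OlpStreamConnectorHelper sinkHelper =
    OlpStreamConnectorHelper.create(
        HRN.fromString(outputCatalogHrn), outputLayerId, sinkProperties);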
Data formats
The Flink Connector supports the following data formats for volatile layer payloads:
- Raw. The decoding and encoding logic is not applied and you get your data payload as an array of bytes. Your Table schema appears as follows:
root
|-- data: Array[Byte]
|-- mt_partition: String
|-- mt_timestamp: Long
|-- mt_checksum: String
|-- mt_crc: String
|-- mt_dataSize: Long
|-- mt_compressedDataSize: Long
The column with the payload data is called data. The metadata columns follow
the data column and have the mt_ prefix.
This format is used if your layer content type is configured as
application/octet-stream.
- Protobuf. Flink uses the attached Protobuf schema (that you specify in your layer configuration) to derive a Flink Table schema.
root
|-- protobuf_field_1: String
|-- protobuf_field_2: String
|-- protobuf_field_3.nested_column: Long
|-- ...
|-- mt_partition: String
|-- mt_timestamp: Long
|-- mt_checksum: String
|-- mt_crc: String
|-- mt_dataSize: Long
|-- mt_compressedDataSize: Long
The Flink Connector puts the top level protobuf fields as the top level Row
columns, then the metadata columns follow.
This format is used if your layer content type is configured as
application/x-protobuf and you have a specified schema. If the schema is not
specified, an error will be thrown.
Note: Self-referencing protobuf fields are not supported because there is no way to represent them in the Flink TypeInformation-based schema.
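For illustration only, a query against such a table can address nested protobuf columns with dot notation. The column names below are the hypothetical ones from the schema tree above, not fields of a real layer schema:

// a sketch with hypothetical column names taken from the schema tree above
Table selected =
    tEnv.sqlQuery(
        "SELECT protobuf_field_1, protobuf_field_3.nested_column, mt_partition FROM InputTable");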
- Avro. Flink uses the passed Avro schema (that you specify in the factory Map) to derive a Flink Table schema.
root
|-- avro_field_1: String
|-- avro_field_2: String
|-- ...
|-- mt_partition: String
|-- mt_timestamp: Long
|-- mt_checksum: String
|-- mt_crc: String
|-- mt_dataSize: Long
|-- mt_compressedDataSize: Long
The Flink Connector puts the top level Avro fields as the top level Row
columns, then the metadata columns follow.
This format is used if your layer content type is configured as
application/x-avro-binary and you have a specified schema. If the schema is
not specified, an error will be thrown.
WARNING: The new version of the connector does not support metadata columns for the avro data format.
- Parquet. Flink uses the passed Avro schema (that you specify in the factory Map) to derive a Flink Table schema.
root
|-- parquet_field_1: String
|-- parquet_field_2: String
|-- ...
|-- mt_partition: String
|-- mt_timestamp: Long
|-- mt_checksum: String
|-- mt_crc: String
|-- mt_dataSize: Long
|-- mt_compressedDataSize: Long
The Flink Connector puts the top level parquet fields as the top level Row
columns, then the metadata columns follow.
This format is used if your layer content type is configured as
application/x-parquet and you have a specified schema. If the schema is not
specified, an error will be thrown.
WARNING: The new version of the connector does not support metadata columns for the parquet data format.
The Hadoop client is not provided by the streaming environment at the moment. As a result, if you want to use the parquet format, you have to include the Hadoop client dependency in your fat JAR:

Maven:
<dependencies>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>2.7.3</version>
<scope>compile</scope>
<exclusions>
<exclusion>
<groupId>org.apache.htrace</groupId>
<artifactId>htrace-core</artifactId>
</exclusion>
<exclusion>
<groupId>xerces</groupId>
<artifactId>xercesImpl</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>

sbt:
libraryDependencies ++=
  Seq("org.apache.hadoop" % "hadoop-client" % "2.7.3"
    exclude ("org.apache.htrace", "htrace-core")
    exclude ("xerces", "xercesImpl"))
- Other formats. If your layer uses a format other than the formats described above, an error will be thrown.
Table Source and Sink have the same schema for the same layer.
You can always print your Table schema using the standard Flink API:
Scala:
// imagine that we have already registered InputTable
tEnv.from("InputTable").printSchema()

Java:
// imagine that we have already registered InputTable
tEnv.from("InputTable").printSchema();

Read and write raw data
Using SQL:

Scala:
val tEnv = StreamTableEnvironment.create(env)
val partitions = (1 to 5).mkString(",")
val sourceProperties = Map("olp.layer.query" -> s"mt_partition=in=($partitions)",
"olp.connector.metadata-columns" -> "true")
val sourceHelper: OlpStreamConnectorHelper =
OlpStreamConnectorHelper(HRN(inputCatalogHrn), "volatile-layer-raw-input", sourceProperties)
tEnv.executeSql(
s"CREATE TABLE InputTable ${sourceHelper.prebuiltSchema(tEnv).build()} " +
s"WITH ${sourceHelper.options}")
val sinkHelper: OlpStreamConnectorHelper =
OlpStreamConnectorHelper(HRN(outputCatalogHrn),
"volatile-layer-raw-output",
Map("olp.connector.metadata-columns" -> "true"))
tEnv.executeSql(
s"CREATE TABLE OutputTable ${sinkHelper.prebuiltSchema(tEnv).build()} " +
s"WITH ${sinkHelper.options}")
tEnv.executeSql("""
|INSERT INTO OutputTable
|SELECT
| data,
| mt_partition,
| mt_timestamp,
| mt_checksum,
| mt_crc,
| mt_dataSize,
| mt_compressedDataSize
| FROM InputTable
|""".stripMargin)// define the properties
Map<String, String> sourceProperties = new HashMap<>();
sourceProperties.put("olp.layer.query", "mt_partition=in=(1,2,3)");
sourceProperties.put("olp.connector.metadata-columns", "true");Read and write protobuf data
Using SQL:

Scala:
// define the properties
val sourceProperties =
Map(
"olp.layer.query" -> "mt_partition=in=(1,2,3)"
)
// create the Table Connector Helper Source
val sourceHelper: OlpStreamConnectorHelper =
OlpStreamConnectorHelper(HRN(inputCatalogHrn),
"volatile-layer-protobuf-input",
sourceProperties)
val tEnv = StreamTableEnvironment.create(env)
// register the Table Source
tEnv.executeSql(
s"CREATE TABLE InputTable ${sourceHelper.prebuiltSchema(tEnv).build()} " +
s"WITH ${sourceHelper.options}")
val sinkHelper: OlpStreamConnectorHelper =
OlpStreamConnectorHelper(HRN(outputCatalogHrn), "volatile-layer-protobuf-output", Map.empty)
tEnv.executeSql(
s"CREATE TABLE OutputTable ${sinkHelper.prebuiltSchema(tEnv).build()} " +
s"WITH ${sinkHelper.options}")
tEnv.executeSql(
"INSERT INTO OutputTable SELECT * FROM InputTable"
)

Java:
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
OlpStreamConnectorHelper sourceHelper =
OlpStreamConnectorHelper.create(
HRN.fromString(inputCatalogHrn), inputLayerId, sourceProperties);
Schema sourceSchema = sourceHelper.prebuiltSchema(tEnv).build();
tEnv.executeSql(
String.format("CREATE TABLE InputTable %s WITH %s", sourceSchema, sourceHelper.options()));
OlpStreamConnectorHelper sinkHelper =
OlpStreamConnectorHelper.create(
HRN.fromString(outputCatalogHrn), outputLayerId, new HashMap<>());
Schema sinkSchema = sinkHelper.prebuiltSchema(tEnv).build();
tEnv.executeSql(
String.format("CREATE TABLE OutputTable %s WITH %s", sinkSchema, sinkHelper.options()));
tEnv.executeSql("INSERT INTO OutputTable SELECT * FROM InputTable");Read and write Avro data
Using SQL:

Scala:
val tEnv = StreamTableEnvironment.create(env)
val inputLayerSchema = """
{
"type" : "record",
"name" : "Event",
"namespace" : "my.example",
"fields" : [
{"name" : "event_timestamp", "type" : "long"},
{"name" : "latitude", "type" : "double"},
{"name" : "longitude", "type" : "double"}
]
}
"""
val outputLayerSchema = """
{
"type" : "record",
"name" : "Event",
"namespace" : "my.example",
"fields" : [
{"name" : "city", "type" : "string"},
{"name" : "event_timestamp", "type" : "long"},
{"name" : "latitude", "type" : "double"},
{"name" : "longitude", "type" : "double"}
]
}
"""
val sourceProperties =
Map(
"olp.catalog.layer-schema" -> inputLayerSchema,
"olp.layer.query" -> "mt_partition=in=(1,2,3)"
)
val sourceHelper: OlpStreamConnectorHelper =
OlpStreamConnectorHelper(HRN(inputCatalogHrn), "volatile-layer-avro-input", sourceProperties)
tEnv.executeSql(
s"CREATE TABLE InputTable ${sourceHelper.prebuiltSchema(tEnv).build()} " +
s"WITH ${sourceHelper.options}")
val sinkHelper: OlpStreamConnectorHelper =
OlpStreamConnectorHelper(HRN(outputCatalogHrn),
"volatile-layer-avro-output",
Map("olp.catalog.layer-schema" -> outputLayerSchema))
tEnv.executeSql(
s"CREATE TABLE OutputTable ${sinkHelper.prebuiltSchema(tEnv).build()} " +
s"WITH ${sinkHelper.options}")
tEnv.executeSql(
"""
INSERT INTO OutputTable
SELECT
'Berlin',
event_timestamp,
latitude,
longitude
FROM InputTable"""
)

Java:
// register the Table Source
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
OlpStreamConnectorHelper sourceHelper =
OlpStreamConnectorHelper.create(
HRN.fromString(inputCatalogHrn), inputLayerId, sourceProperties);
Schema sourceSchema = sourceHelper.prebuiltSchema(tEnv).build();
tEnv.executeSql(
String.format("CREATE TABLE InputTable %s WITH %s", sourceSchema, sourceHelper.options()));
// define sink properties
Map<String, String> sinkProperties = new HashMap<>();
sinkProperties.put("olp.catalog.layer-schema", outputLayerSchema);
OlpStreamConnectorHelper sinkHelper =
OlpStreamConnectorHelper.create(
HRN.fromString(outputCatalogHrn), outputLayerId, sinkProperties);
tEnv.executeSql(
String.format(
"CREATE TABLE OutputTable(" + "`city` STRING" + ") WITH %s", sinkHelper.options()));
tEnv.executeSql("INSERT INTO OutputTable SELECT 'Berlin' FROM InputTable");Read and write Parquet data
Using SQL:

Scala:
val tEnv = StreamTableEnvironment.create(env)
val inputLayerSchema = """
{
"type" : "record",
"name" : "Event",
"namespace" : "my.example",
"fields" : [
{"name" : "event_timestamp", "type" : "long"},
{"name" : "latitude", "type" : "double"},
{"name" : "longitude", "type" : "double"}
]
}
"""
val outputLayerSchema = """
{
"type" : "record",
"name" : "Event",
"namespace" : "my.example",
"fields" : [
{"name" : "city", "type" : "string"},
{"name" : "event_timestamp", "type" : "long"},
{"name" : "latitude", "type" : "double"},
{"name" : "longitude", "type" : "double"}
]
}
"""
val sourceProperties =
Map(
"olp.catalog.layer-schema" -> inputLayerSchema,
"olp.layer.query" -> "mt_partition=in=(1,2,3)"
)
val sourceHelper: OlpStreamConnectorHelper =
OlpStreamConnectorHelper(HRN(inputCatalogHrn),
"volatile-layer-parquet-input",
sourceProperties)
tEnv.executeSql(
s"CREATE TABLE InputTable ${sourceHelper.prebuiltSchema(tEnv).build()} " +
s"WITH ${sourceHelper.options}")
val sinkHelper: OlpStreamConnectorHelper =
OlpStreamConnectorHelper(HRN(outputCatalogHrn),
"volatile-layer-parquet-output",
Map("olp.catalog.layer-schema" -> outputLayerSchema))
tEnv.executeSql(
s"CREATE TABLE OutputTable ${sinkHelper.prebuiltSchema(tEnv).build()} " +
s"WITH ${sinkHelper.options}")
tEnv.executeSql(
"""
INSERT INTO OutputTable
SELECT
'Berlin',
event_timestamp,
latitude,
longitude
FROM InputTable"""
)

Java:
String inputLayerSchema =
"{\"type\" : \"record\", \"name\" : \"Event\", \"namespace\" : \"my.example\", \"fields\" : [ {\"name\" : \"event_timestamp\", \"type\" : \"long\"}, {\"name\" : \"latitude\", \"type\" : \"double\"}, {\"name\" : \"longitude\", \"type\" : \"double\"} ] }";
String outputLayerSchema =
"{\"type\" : \"record\", \"name\" : \"Event\", \"namespace\" : \"my.example\", \"fields\" : [ {\"name\" : \"city\", \"type\" : \"string\"}, {\"name\" : \"event_timestamp\", \"type\" : \"long\"}, {\"name\" : \"latitude\", \"type\" : \"double\"}, {\"name\" : \"longitude\", \"type\" : \"double\"} ] }";
// define source properties
Map<String, String> sourceProperties = new HashMap<>();
sourceProperties.put("olp.catalog.layer-schema", inputLayerSchema);
sourceProperties.put("olp.layer.query", "mt_partition=in=(1,2,3)");Updated 21 days ago