How to read interactive map layer data

The Data Client Library provides the class LayerDataFrameReader, a custom
Spark
DataFrameReader
for creating
DataFrames
that contain the data of all supported layer types, including interactive map
layers.

Read process

The read operation works according to the following steps:

  1. The Spark connector analyzes your query and communicates with the server
    to retrieve the information needed to distribute your query across the
    Spark cluster. Individual filters that are part of your query are already
    taken into account at this stage.
  2. Spark distributes your query across the workers in the cluster, which
    then request their individual chunks of data from the server.
  3. The data returned by the server is converted into a generic row format on the
    worker nodes.
  4. The resulting rows are passed on to the Spark framework to return the
    finalized DataFrame.
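These steps are internal to the connector; conceptually they follow a scatter-gather pattern. The following stand-alone sketch (plain Java, no Spark; all names such as ScatterGatherSketch and Row are hypothetical, purely for illustration) shows the idea of splitting a query into chunks, fetching them in parallel, and merging the results:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ScatterGatherSketch {
    // Hypothetical stand-in for a generic row produced by a worker (step 3).
    record Row(int chunkId, String payload) {}

    static List<Row> run(int chunkCount) {
        // Step 1: the driver derives a partitioning from the query (here: fixed).
        // Steps 2-3: each "worker" fetches its chunk and converts it to rows.
        return IntStream.range(0, chunkCount)
                .parallel()
                .mapToObj(chunk -> new Row(chunk, "data-for-chunk-" + chunk))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Step 4: the merged rows form the final result.
        System.out.println(run(4).size());
    }
}
```

In the real connector, the chunking is derived from your query and the fetching happens on Spark worker nodes; this sketch only mirrors the control flow.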

DataFrame columns

Unlike other layer types, interactive map layers use a static row format when
working with the Spark framework.

Data columns

A DataFrame for interactive map layers will contain the following columns:

Column name      Data type             Meaning
mt_id            STRING                OID of the object
geometry         ROW<STRING, STRING>   Object's geometry; the first field contains the type, the second the coordinates
properties       MAP<STRING, STRING>   Object's properties in reduced format
custom_members   MAP<STRING, STRING>   Non-standard top-level fields in reduced format
mt_tags          ARRAY<STRING>         The object's tags
mt_datahub       ROW<BIGINT, BIGINT>   Object metadata: createdAt and updatedAt
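For example, the mt_datahub column carries two BIGINT fields, createdAt and updatedAt. Assuming these are epoch-millisecond timestamps (common for HERE services, but verify for your data), they can be converted to readable instants with the standard library; the class name and sample values below are hypothetical:

```java
import java.time.Instant;

public class DatahubTimestamps {
    // Converts an epoch-millisecond BIGINT (as assumed for mt_datahub) to an Instant.
    static Instant toInstant(long epochMillis) {
        return Instant.ofEpochMilli(epochMillis);
    }

    public static void main(String[] args) {
        long createdAt = 1_600_000_000_000L; // hypothetical sample value
        long updatedAt = 1_650_000_000_000L; // hypothetical sample value
        System.out.println("createdAt: " + toInstant(createdAt));
        System.out.println("updatedAt: " + toInstant(updatedAt));
    }
}
```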

Project dependencies

If you want to create an application that uses the HERE platform Spark Connector
to read data from an interactive map layer, add the required dependencies to
your project as described in the chapter
Dependencies for Spark Connector.

Read interactive map layer data

The following snippet demonstrates how to create a DataFrame from an
interactive map layer of a catalog. Because interactive map layers use a
static row format, you don't need to specify the format explicitly.

It is also possible to provide a context for an interactive map layer by
passing the appropriate option. See the notes below for details on the option
name.

Scala
import com.here.platform.data.client.spark.InteractiveMapDataFrame.InteractiveMapContextOptionName
import com.here.platform.data.client.model.InteractiveMapContext.{DEFAULT, EXTENSION}
import com.here.platform.data.client.spark.LayerDataFrameReader.SparkSessionExt
import org.apache.spark.sql.SparkSession
val query = "mt_geometry=inboundingbox=(85, -85, 180, -180) and p.row=contains=7"

log.info("Loading data from IML for default context.")
val readDF = sparkSession
  .readLayer(catalogHrn, layerId)
  .query(query)
  .load()

log.info("Data loaded for default context!")
val defaultContextCount = readDF.count()
log.info("Dataframe contains " + defaultContextCount.toString + " rows for default context.")

log.info("Loading data from IML for extension context.")
val readExtensionDF = sparkSession
  .readLayer(catalogHrn, layerId)
  .option(InteractiveMapContextOptionName, EXTENSION)
  .query(query)
  .load()

log.info("Data loaded for extension context!")
val extensionContextCount = readExtensionDF.count()
log.info(
  "Dataframe contains " + extensionContextCount.toString + " rows for extension context.")

Java
import static com.here.platform.data.client.model.InteractiveMapContext.EXTENSION;
import static com.here.platform.data.client.spark.InteractiveMapDataFrameConstants.INTERACTIVE_MAP_CONTEXT_OPTION_NAME;

import com.here.hrn.HRN;
import com.here.platform.data.client.model.InteractiveMapContext;
import com.here.platform.data.client.spark.javadsl.JavaLayerDataFrameReader;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
JavaLayerDataFrameReader javaLayerDFReader = JavaLayerDataFrameReader.create(sparkSession);
String query = "mt_geometry=inboundingbox=(85, -85, 180, -180) and p.row=contains=7";

Dataset<Row> inputDF = javaLayerDFReader.readLayer(catalogHrn, layerId).query(query).load();

// Show the schema and the contents of the DF
long defaultContextCount = inputDF.count();
log.info("Number of rows in dataframe with default context: " + defaultContextCount);

// When reading interactive map layer, one can additionally pass an optional context
Dataset<Row> interactiveDF =
    javaLayerDFReader
        .readLayer(catalogHrn, layerId)
        .option(INTERACTIVE_MAP_CONTEXT_OPTION_NAME, EXTENSION)
        .query(query)
        .load();

long extensionContextCount = interactiveDF.count();
log.info("Number of rows in dataframe with extension context: " + extensionContextCount);

Note

  • When reading data from an interactive map layer, requests are currently
    limited to the bounds of the Mercator projection, i.e. only objects within
    -85° to +85° latitude are returned.
  • Reading from an interactive map layer supports the
    option("olp.connector.ignore-invalid-partitions", true), which skips
    partitions that contain too many objects to be loaded at once.
    The default is false.
  • Reading from an interactive map layer supports the
    option("olp.connector.max-features-per-request", <INTEGER>), which sets
    the maximum number of objects contained in a single request.
    The default is 10000.
  • Reading from an interactive map layer supports the
    option("olp.connector.interactive-map-context", <InteractiveMapContext>),
    which sets the context for the interactive map layer operation. Omitting
    this option is the same as using InteractiveMapContext.DEFAULT. So that
    you do not have to hard-code this option's name, constants are provided
    for both Java
    (com.here.platform.data.client.spark.InteractiveMapDataFrameConstants.INTERACTIVE_MAP_CONTEXT_OPTION_NAME)
    and Scala
    (com.here.platform.data.client.spark.InteractiveMapDataFrame.InteractiveMapContextOptionName).
  • For information on RSQL, see RSQL.
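The Mercator latitude limit above can be respected client-side before a query is built. The sketch below clamps latitudes to ±85° and assembles a bounding-box filter in the same mt_geometry=inboundingbox=(...) shape as the example queries; the class name is hypothetical, and the order of the four values (north, south, east, west) is inferred from the snippets above, so verify it against the RSQL documentation:

```java
public class MercatorQuery {
    // Clamp a latitude to the Mercator bounds mentioned in the notes (+/- 85 degrees).
    static double clampLat(double lat) {
        return Math.max(-85.0, Math.min(85.0, lat));
    }

    // Build a bounding-box filter in the same shape as the example queries.
    static String boundingBoxQuery(double north, double south, double east, double west) {
        return String.format("mt_geometry=inboundingbox=(%s, %s, %s, %s)",
                clampLat(north), clampLat(south), east, west);
    }

    public static void main(String[] args) {
        // A request for the whole globe is reduced to the supported latitude range.
        System.out.println(boundingBoxQuery(90, -90, 180, -180));
    }
}
```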