Data Client Base Library: Base Client
Base Client instances and HTTP Stack
The Base Client communicates with HERE platform data services using the OkHttp stack. OkHttp is a lightweight, efficient HTTP client. For the vast majority of use cases, a single BaseClient instance for the whole runtime of your application is sufficient. This also means a single OkHttp stack is running, with one set of caches, one connection pool, and so on. Nevertheless, we support applications that need multiple BaseClient instances in one application (for example, Flink task slots). This means multiple OkHttp instances are running. Use this with care, as each instance consumes significant resources. If you want to use multiple instances, please read this article to understand the implications.
Please note that:
- once you have called the `shutdown()` function of a `BaseClient` instance, you cannot use or reactivate that instance anymore during the runtime of your JVM,
- by default, the `shutdown()` function is automatically called when your JVM is shutting down. Should this behavior not be desirable, you can set the `com.here.platform.data.client.http.auto-shutdown` setting to `false` to disable it.
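For example, assuming your application provides this setting through a Typesafe Config `application.conf` on the classpath (the usual mechanism for `com.here.platform.*` settings; a system property `-Dcom.here.platform.data.client.http.auto-shutdown=false` would be an alternative), disabling the automatic shutdown could look like this:

```hocon
# Keep the BaseClient out of the automatic JVM shutdown hook.
# You are then responsible for calling shutdown() yourself.
com.here.platform.data.client.http.auto-shutdown = false
```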
Configuration
The Data Client Base Library has three levels of abstraction when talking to HERE platform Data APIs:
- Base Client
- specific API
- specific endpoint
The Base Client is the base of all supported APIs. It introduces the concepts that apply to all APIs, currently generic configuration.
The specific API encapsulates all endpoints of that API. It introduces the concepts that apply to all endpoints of that API, currently per-API configuration.
The specific endpoint determines all parameters which are available and
mandatory or optional. It defines the data types and implicitly the encoding of
the request and the decoding of the response. It supports
per-request configuration and setting any HTTP header
parameters using the `withHeaderParam()` function.
The result of each endpoint call is a future of result type that is defined for that endpoint. After the endpoint call you can retrieve the updated metrics.
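Independent of any specific HERE endpoint, the general shape of such a call is a future that you transform asynchronously and block on only at the edge of your program. A minimal plain-Java sketch, where `callEndpoint()` is a hypothetical stub standing in for a real endpoint call:

```java
import java.util.concurrent.CompletableFuture;

public class FutureResultSketch {

  // Hypothetical stand-in for an endpoint call that returns a future of
  // the result type defined for that endpoint.
  static CompletableFuture<String> callEndpoint() {
    return CompletableFuture.supplyAsync(() -> "catalog-list");
  }

  public static void main(String[] args) {
    String result =
        callEndpoint()
            // transform the result asynchronously
            .thenApply(r -> "response: " + r)
            // block only at the very end, e.g. in main()
            .join();
    System.out.println(result);
  }
}
```

The same principle applies to the real endpoints: chain your processing onto the returned future and defer any blocking to the outermost layer of your application.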
Examples
A very simple application that just creates a Base Client and shuts it down. The resources that are held are released automatically when they remain idle, so it is not obligatory to shut down the client explicitly, and the last line can be omitted. If you need to shut down the client immediately, however, the `shutdown()` method serves that purpose.
```scala
import com.here.platform.data.client.base.scaladsl.BaseClient

import scala.concurrent.ExecutionContext

object WorkingWithBaseClientMain {
  import ExecutionContext.Implicits.global

  def main(args: Array[String]): Unit = {
    val client = BaseClient()
    // val whateverApi = client.of[WhateverApi]
    // do something with whateverApi
    // Note: a real API call usually returns a Future, so client.shutdown()
    // needs to be called in the final .andThen() block
    client.shutdown()
  }
}
```

```java
import com.here.platform.data.client.base.javadsl.BaseClient;
import com.here.platform.data.client.base.javadsl.BaseClientJava;

public class JavaWorkingWithBaseClientMain {

  public static void main(String[] args) {
    BaseClient baseClient = BaseClientJava.instance();
    // WhateverApi whateverApi = new WhateverApi(baseClient);
    // do something with whateverApi
  }
}
```

This application retrieves a list of catalogs, defining the retry logic on the endpoint level. See the details of defining generic or specific configuration. At the end, it fetches the metrics for that endpoint.
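The example below configures `RetryStrategyType.EXPONENTIAL` with an initial timeout of 100 ms and at most 10 attempts. The library's exact backoff formula is internal, but the general idea of an exponential strategy can be sketched in plain Java, assuming each delay doubles from the initial value and is capped (the real implementation may differ, for example by adding jitter):

```java
import java.time.Duration;

public class BackoffSketch {

  // Returns the delay before the given retry attempt (1-based), doubling the
  // initial delay each time and capping it at maxDelay. Illustrative formula,
  // not the library's internal one.
  static Duration backoffDelay(Duration initial, Duration maxDelay, int attempt) {
    long millis = initial.toMillis() << (attempt - 1); // initial * 2^(attempt-1)
    return Duration.ofMillis(Math.min(millis, maxDelay.toMillis()));
  }

  public static void main(String[] args) {
    Duration initial = Duration.ofMillis(100);
    Duration cap = Duration.ofSeconds(10);
    for (int attempt = 1; attempt <= 10; attempt++) {
      System.out.println(
          "attempt " + attempt + ": wait " + backoffDelay(initial, cap, attempt).toMillis() + " ms");
    }
  }
}
```

With these numbers the delays grow 100, 200, 400, ... ms until the cap is reached, which is why an exponential strategy recovers quickly from short outages without hammering the service during longer ones.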
```scala
import com.here.platform.data.client.base.generated.codecs.JsonSupport._
import com.here.platform.data.client.base.generated.scaladsl.api.config.ConfigApi
import com.here.platform.data.client.base.common.metrics.scaladsl.BaseClientMetricsScala
import com.here.platform.data.client.base.http.settings.{
  ApiConfiguration,
  RetryPolicy,
  RetryStrategyType
}
import com.here.platform.data.client.base.scaladsl.BaseClient

import scala.concurrent.{Await, ExecutionContext}
import scala.concurrent.duration._
import scala.util.{Failure, Success}

object WorkingWithBaseClientMain2 {
  import ExecutionContext.Implicits.global

  def main(args: Array[String]): Unit = {
    val client = BaseClient()
    val configApi = client.of[ConfigApi]

    val retryPolicy =
      RetryPolicy(100.millis, 10.seconds, 60.seconds, Set(408), RetryStrategyType.EXPONENTIAL, 10)

    val result = configApi
      .getCatalogs(
        verbose = Some(false),
        organisationType = Nil,
        layerType = Nil,
        region = Nil,
        resourceType = None,
        coverage = Nil,
        access = Nil,
        marketplaceReady = None,
        sortBy = None,
        sortOrder = None
      )
      .withConfig(ApiConfiguration(retryPolicy))
      .executeToEntity()

    result
      .andThen {
        case Success(response) => println(s"response: $response")
        case Failure(ex)       => ex.printStackTrace()
      }
      .andThen {
        case _ =>
          BaseClientMetricsScala()
            .getMetricsFor("ConfigApi.getCatalogs")
            .flatMap(_.counter)
            .foreach(c => println(c.count))
      }

    Await.result(result, Duration.Inf)
  }
}
```

```java
import com.here.platform.data.client.base.common.metrics.javadsl.BaseClientMetricsJava;
import com.here.platform.data.client.base.common.metrics.javadsl.MetricsJava;
import com.here.platform.data.client.base.generated.javadsl.api.config.ConfigApi;
import com.here.platform.data.client.base.generated.scaladsl.model.config.CatalogsListResult;
import com.here.platform.data.client.base.http.settings.ApiConfiguration;
import com.here.platform.data.client.base.http.settings.RetryPolicy;
import com.here.platform.data.client.base.http.settings.RetryStrategyType;
import com.here.platform.data.client.base.javadsl.BaseClient;
import com.here.platform.data.client.base.javadsl.BaseClientJava;

import java.time.Duration;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Optional;

public class JavaWorkingWithBaseClientMain2 {

  public static void main(String[] args) {
    BaseClient baseClient = BaseClientJava.instance();
    ConfigApi configApi = new ConfigApi(baseClient);

    ApiConfiguration config =
        new ApiConfiguration.Builder()
            .withRetryPolicy(
                new RetryPolicy.Builder()
                    .withRetryStrategy(RetryStrategyType.EXPONENTIAL)
                    .withInitTimeout(Duration.ofMillis(100))
                    .withMaxAttempts(10)
                    .withRetriableHttpErrors(new HashSet<>(Arrays.asList(408)))
                    .withRequestTimeout(Duration.ofSeconds(10))
                    .withOverallTimeout(Duration.ofSeconds(60))
                    .build())
            .build();

    ConfigApi.GetCatalogsAdapter request =
        configApi.getCatalogs().withVerbose(Optional.of(true)).build().withConfig(config);
    CatalogsListResult listResult =
        (CatalogsListResult) request.executeToEntity().toCompletableFuture().join();
    System.out.println(listResult);

    new BaseClientMetricsJava.builder()
        .getInstance()
        .getMetricsFor("ConfigApi.getCatalogs")
        .flatMap(MetricsJava::getCounter)
        .ifPresent(c -> System.out.println(c.count()));
  }
}
```

And with a blocking request:
```scala
object WorkingWithBaseClientMainBlocking {
  import ExecutionContext.Implicits.global

  def main(args: Array[String]): Unit = {
    val client = BaseClient()
    val configApi = client.of[ConfigApi]
    val result: CatalogsResultBase = configApi
      .getCatalogs(verbose = Some(false))
      .toEntity()
    println(s"response: $result")
  }
}
```

```java
public class JavaWorkingWithBaseClientMainBlocking {

  public static void main(String[] args) {
    BaseClient baseClient = BaseClientJava.instance();
    ConfigApi configApi = new ConfigApi(baseClient);
    CatalogsListResult listResult =
        (CatalogsListResult)
            configApi.getCatalogs().withVerbose(Optional.of(true)).build().toEntity();
    System.out.println(listResult);
  }
}
```

Spark and Flink
Note: Usage with Spark and Flink
When running in Spark or Flink, `BaseClient` must use synchronous requests only, because parallelism and multi-threading are handled by Spark and Flink internally.