Get started with the Pipelines API
Obtain access to the HERE platform through your organization administrator's invitation or contact us to get started.
- If your company has already established a HERE platform organization, contact your organization admin who can invite you to join the organization.
- If your company hasn’t established a HERE platform organization yet, contact us.
To get started using pipelines, you need access to the HERE platform, including its software and libraries. Developing new applications on the HERE platform requires a few additional tools. Once you have an account, refer to the following:
- Review the HERE platform documentation to get familiar with the design and use of the HERE platform.
- Explore the platform portal - After you log in, you are directed to the platform portal. Browse it to become familiar with its features.
- Manage your profile - Click your name in the top right of the platform portal screen to access your profile. Here you can find your user ID and the groups that you belong to.
Security
The HERE platform is a cloud-based platform providing several microservices and resources. Each microservice and each resource (data catalogs, pipelines, schemas, and other resources) is protected and requires authentication and authorization. Within the HERE platform, you will need credentials to access these services. Learn how to set up your teams and credentials in the Identity and Access Management Guide.
Set up a development environment
Follow these steps:
- Set up your credentials as described above, or refer to Get your credentials.
- Configure your environment.
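As an illustration of the second step, the sketch below installs the `credentials.properties` file downloaded from the platform portal into its conventional lookup location (assumed here to be `~/.here`, overridable via a `HERE_CONFIG_DIR` variable for this sketch). The property keys mirror the downloaded file; all values shown are placeholders, not real credentials.

```shell
# Install credentials.properties in the assumed default lookup directory.
CONFIG_DIR="${HERE_CONFIG_DIR:-$HOME/.here}"
mkdir -p "$CONFIG_DIR"

# Example layout of the file (placeholder values, not real credentials):
cat > "$CONFIG_DIR/credentials.properties" <<'EOF'
here.user.id = <your-user-id>
here.client.id = <your-client-id>
here.access.key.id = <your-access-key-id>
here.access.key.secret = <your-access-key-secret>
here.token.endpoint.url = https://account.api.here.com/oauth2/token
EOF

# Restrict permissions, since the file holds secrets.
chmod 600 "$CONFIG_DIR/credentials.properties"
echo "Credentials installed at $CONFIG_DIR/credentials.properties"
```

Tools and libraries that authenticate against the platform typically pick this file up automatically once it is in place.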
Pipeline tasks
There are three main tasks involved in using HERE platform pipelines.
- Designing and implementing a data processing application as a JAR file.
- Deploying the JAR file as a pipeline on the HERE platform.
- Running and managing jobs for a pipeline version.

The first task is the most technically ambitious. It involves designing a data processing workflow and implementing it in a pipeline JAR file; see the Design a pipeline section below for more information.
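Pipeline JARs are deployed as self-contained ("fat") JARs that bundle the application together with its dependencies. One common way to produce such a JAR with Maven is the maven-shade-plugin; the fragment below is an illustrative sketch (the plugin version and main class are placeholders), not the exact configuration generated by the Maven Archetypes.

```xml
<!-- Illustrative maven-shade-plugin configuration for building a fat pipeline JAR.
     The version and mainClass below are placeholders; adapt them to your project. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.4.1</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
            <mainClass>com.example.pipeline.Main</mainClass>
          </transformer>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Running `mvn package` with this configuration produces a single JAR containing the application and its dependencies, suitable for upload as a pipeline template.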
The second and third tasks can both be accomplished in two possible ways.
- You can use the platform portal and its GUI. For more details, see Deploy a pipeline via the web portal and Run pipelines sections. This is the recommended approach for most users.
- You can use the CLI. For more details, see the OLP CLI Pipeline workflows. This approach is recommended for experienced users who want to automate pipeline operations in their scripts.
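As an illustration of the CLI approach, a typical deployment sequence looks roughly like the following. This is a hedged sketch: the command names follow the OLP CLI pipeline workflow, but the runtime environment identifier, IDs, file names, and the main class are placeholders, and flag syntax can vary between CLI versions, so consult the OLP CLI documentation for the exact commands.

```shell
# Create a pipeline (returns a pipeline ID)
olp pipeline create my-pipeline

# Register the fat JAR as a pipeline template for a runtime environment
# (<runtime-environment>, catalog IDs, and the main class are placeholders)
olp pipeline template create my-template <runtime-environment> \
    my-pipeline-fat.jar com.example.pipeline.Main \
    --input-catalog-ids input-catalog

# Create a pipeline version binding the template to a configuration file
olp pipeline version create my-version <pipeline-ID> <template-ID> pipeline-config.conf

# Activate the version to start running jobs
olp pipeline version activate <pipeline-ID> <version-ID>
```

The same sequence can be scripted end to end, which is why the CLI route suits automated workflows.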
Pipeline monitoring and alerts
- For pipeline monitoring information, see the Logs, Monitoring and Alerts User Guide.
- For information about pipeline logging levels, see Pipeline logging.
- For information on how to use Grafana for alerts, see Pipeline monitoring.
Batch pipelines
- Batch processing best practices.
- Configurations available for pipeline developers.
- Batch pipeline environment - Changelog.
Stream pipelines
- Stream processing best practices.
- Configurations available for pipeline developers.
- Stream pipeline environment - Changelog.
Pipeline operations
- Deploy pipelines.
- Run a pipeline.
- Manage pipelines.
- To run a pipeline using the OLP CLI, follow the instructions in the Run a Flink application on the platform or the Run a Spark application on the platform tutorials.
- To upgrade a running pipeline, see Pipeline lifecycle or Upgrade a pipeline version.
Design a pipeline
- Develop pipelines.
- Maven Archetypes.
- For instructions on building a new pipeline project, see the Organize your work in projects tutorial.
Design data manipulation for a pipeline
See any of these documents:
- Data Processing Library Developer Guide.
- Data Inspector Library Developer Guide.
- Location Library Developer Guide.
- Data Client Library Developer Guide.
- Data API Developer Guide.
Work with data specifications