# How to log a pipeline

Your data processing pipelines may need to log custom events for troubleshooting and maintenance purposes. The amount of information reported in the logs depends on the logging level you select for each pipeline version. This section describes the basics of pipeline logging and explains how to retrieve and change the logging level of a pipeline version, and how to find the resulting logs.

> #### Note
>
> You are charged for the amount of logs written during the execution of the pipeline.
## Pipeline logging basics

The HERE platform pipelines use logging to provide more detail during their operation. Different levels of logging are available for different purposes. The following logging levels are supported:

* `Debug` - Fine-grained informational events that are most useful for pipeline troubleshooting.
* `Info` - Informational messages that highlight the progress of the pipeline at a coarse-grained level.
* `Warn` - Information about potentially harmful situations, for instance, runtime situations that are undesirable or unexpected, but not necessarily wrong. This is the default logging level used by pipeline versions.
* `Error` - Runtime errors or unexpected conditions, such as error events.

By default, log messages are sent to Splunk, a data collection, indexing, and visualization engine for operational intelligence. For information on how to use Splunk, see the [Splunk Enterprise User Documentation](https://docs.splunk.com/Documentation/Splunk).

> #### Note
>
> The maximum storage retention limit for Splunk is defined on a per-realm basis and is shared by all pipelines within the realm. This limit is used up more quickly if you log a lot of data.

## Retrieve the logging settings

You can check the logging settings for a particular pipeline version using either the platform portal or the OLP CLI. This section covers the platform portal; for the OLP CLI, refer to [this guide](https://docs.here.com/workspace/docs/olp-cli-topics-pipeline-version-commands#pipeline-version-log-level-get).

To retrieve the logging settings, open the `Details` tab for the pipeline version you want to inspect.
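To illustrate how the levels above relate to each other, the following sketch uses `java.util.logging` from the Java standard library (pipeline applications typically log through a facade such as SLF4J, but the standard library keeps the example self-contained; the logger name `com.example.pipeline` is hypothetical). The JUL levels `FINE`, `INFO`, `WARNING`, and `SEVERE` map roughly to `Debug`, `Info`, `Warn`, and `Error`:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LoggingLevelsExample {
    public static void main(String[] args) {
        // Hypothetical logger name for illustration only.
        Logger logger = Logger.getLogger("com.example.pipeline");

        // WARNING corresponds to the platform's default `Warn` level.
        logger.setLevel(Level.WARNING);

        logger.fine("fine-grained troubleshooting detail");     // ~Debug: filtered out
        logger.info("coarse-grained progress message");         // ~Info: filtered out
        logger.warning("undesirable but recoverable state");    // ~Warn: emitted
        logger.severe("runtime error or unexpected condition"); // ~Error: emitted

        // Only messages at the configured level and above pass the filter.
        System.out.println(logger.isLoggable(Level.INFO));    // false
        System.out.println(logger.isLoggable(Level.WARNING)); // true
    }
}
```

Because each level includes everything above it, lowering a pipeline version from `Warn` to `Debug` increases log volume substantially, which matters given the per-realm retention limit noted above.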
You can do this either by clicking a particular version in the list of pipeline versions, or by opening its `Admin` menu and selecting the `View details` option:

![manage-pipelines-show-2.png](https://files.readme.io/843af81304d97bb055776d04aa854ee1004f0b3fdb28178a4b7fb25dd6ea2d6f-manage-pipelines-show-2.png "View details")

A new tab opens showing various pipeline version details, with the logging configuration at the bottom left of the tab:

![pipeline-logging-1.png](https://files.readme.io/ca8827c0b9b0a79ec5eb3dc9a40b4efc500718d94010505644e719befe9a130f-pipeline-logging-1.png "Logging configuration")

In this example, the default configuration is used - the `Warn` logging level is set for the `root` logger, which applies to the entire pipeline version. However, multiple loggers can be configured for the same pipeline version, as discussed in the next section.

## Change the logging settings

You can change the logging settings for a particular pipeline version using either the platform portal or the OLP CLI. This section covers the platform portal; for the OLP CLI, refer to [this guide](https://docs.here.com/workspace/docs/olp-cli-topics-pipeline-version-commands#pipeline-version-log-level-set).

Changing the logging level results in different behaviour depending on the state of the pipeline version:

* `Running` - The system changes the logging settings of the job that is currently running.
* `Ready` or `Scheduled` - The system runs future jobs using the new logging settings.
* `Paused` - Once the pipeline version is resumed, the system runs future jobs with the new logging settings.
The logging settings can be changed from the platform portal in the [`Details` tab of the pipeline version](managing-pipelines#view-information-about-a-pipeline-version). As mentioned earlier, the logging configuration is located at the bottom left of this tab. To change it, click the `Edit` button:

![pipeline-logging-2.png](https://files.readme.io/79e174633c718a3a98e5e173cb48da6fe65239883a25ab872a6c15367d7ac95d-pipeline-logging-2.png "Edit logging configuration")

The following dialog box opens:

![pipeline-logging-3.png](https://files.readme.io/5da923ec2db114631c4ba5b2b4dcb84896527c7b94180bbd058fb6b289ab2b83-pipeline-logging-3.png "Logging configuration dialog box")

The default logger is set at the `root` level for the entire pipeline version. In this example, it is the only logger present, and it has the `Warn` logging level. To change the logging level of the `root` logger, click its current level (`Warn`) and select a new level from the drop-down list:

![pipeline-logging-4.png](https://files.readme.io/3e38d6f03272e51771a53f0fd9324559e2712f3a6cd6ab482af7b707ed24d04b-pipeline-logging-4.png "Select logging level for root logger")

Loggers can also be set for specific pipeline application classes, and each of these loggers can use a different logging level. This allows you to monitor different parts of the running pipeline application at different levels of detail. To set a logger for a particular pipeline application class, open the logging configuration dialog box and select the `Add logger` option:

![pipeline-logging-6.png](https://files.readme.io/bedf118eb0db1377f7e03d2d339c26539d086811437bb5beb6c3290f97cf5796-pipeline-logging-6.png "Add logger for specific pipeline application class")

Next, specify the name and logging level for your new logger. The logger name is usually a fully qualified class name from the pipeline application code.
The logging level can be set as required and does not have to match the `root` logging level.
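The logger name you enter normally mirrors a logger declared in the application code. The sketch below (again using `java.util.logging` for self-containedness; the name `com.example.pipeline.MapCompiler` is hypothetical) shows how a class-specific logger can run at a finer level than `root`:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class PerClassLoggerExample {
    // Hypothetical fully qualified class name; this string is what you
    // would enter as the logger name in the logging configuration dialog.
    static final String LOGGER_NAME = "com.example.pipeline.MapCompiler";

    public static void main(String[] args) {
        Logger root = Logger.getLogger("");           // analogous to the `root` logger
        Logger perClass = Logger.getLogger(LOGGER_NAME);

        root.setLevel(Level.WARNING); // root stays at the default Warn level
        perClass.setLevel(Level.FINE); // this one class is raised to Debug-like detail

        // The class-specific logger accepts fine-grained messages
        // even though the root logger does not.
        System.out.println(root.isLoggable(Level.FINE));     // false
        System.out.println(perClass.isLoggable(Level.FINE)); // true
    }
}
```

This is what makes per-class loggers useful: you can debug one noisy component without raising the log volume of the entire pipeline version.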
Once the logger has been added, click the `Done` button as shown below and save the logging configuration:

![pipeline-logging-8.png](https://files.readme.io/a31724836d70d38eba9d539a36ed15af391df874f6023d28d9c2071c94763f4a-pipeline-logging-8.png "Save logger for specific pipeline application class")

> #### Note
>
> If you add a logger that already exists, you will not be able to save the configuration.

To delete an existing logger, open the logging configuration dialog box, click the `x` button next to the specific logger, and save the configuration:

![pipeline-logging-9.png](https://files.readme.io/f4c7b7c5c81edcd352973430948e14a7825ca4d824d2b376af23fbc503ebd5bd-pipeline-logging-9.png "Delete loggers")

Once saved, a confirmation message is displayed and the updated logging configuration is shown in the pipeline version `Details` tab:

![pipeline-logging-5.png](https://files.readme.io/feeb760eb45de6ff6dd12904d1ff342fd3b1fa01d4058098ace23b2d8e48a17e-pipeline-logging-5.png "Logging configuration was changed")

Due to operational latency, it takes a few minutes for the changes to take effect. This may delay the availability of logs at the new level in Splunk.

## Finding pipeline logs

As mentioned above, all messages logged by the pipeline application are stored in Splunk.
You can access the logs by clicking the `Log` link for a particular pipeline version, as shown below:

![pipeline-logging-10.png](https://files.readme.io/ddf453e52f7e91e0828c0dbeebbea3ebd813832aa9f5ef6d6d628a0dda9156e1-pipeline-logging-10.png "Access logs from the pipeline versions list")

Alternatively, you can reach the same page by [opening the `Jobs` tab](managing-pipelines#view-jobs-history) and clicking the `See log` link:

![pipeline-logging-11.png](https://files.readme.io/464e93d79ccffae111e63f19551f65de867fb14368db272263da531e04380b48-pipeline-logging-11.png "Access logs from the Jobs tab")

Both of the above links take you to the following Splunk dashboard:

![pipeline-logging-12.png](https://files.readme.io/49013ce0c0afe847deb917a9b665ce02e536e01e5505224b044f3b61b32565ed-pipeline-logging-12.png "Splunk dashboard")

In this dashboard, you can change the time range to retrieve events more precisely, use the search processing language (SPL) to customise the search query itself, and filter logs by source to retrieve events logged by different components, such as Spark `Drivers`, Spark `Executors`, Flink `JobManagers`, and Flink `TaskManagers`. For more information on how to use Splunk, see the following articles:

* [Splunk tutorials](https://www.splunk.com/en_us/blog/learn/splunk-tutorials.html)
* [Logs, Monitoring and Alerts User Guide](https://docs.here.com/workspace/docs/readme-summary-6)

## See Also

* [Splunk Enterprise User Documentation](https://docs.splunk.com/Documentation/Splunk)
* [Logs, Monitoring and Alerts User Guide](https://docs.here.com/workspace/docs/readme-summary-6)
* [Data Client Library Logging](https://docs.here.com/workspace/docs/dcl-client-logging)
* [Pipeline API Reference](https://docs.here.com/workspace/docs/pipeline-monitoring)
* [Grafana User Documentation](https://grafana.com/docs/)
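As a rough illustration of customising the search (the field value `*executor*` is an assumption, not a field name documented here; the actual `source` values in your realm will match the components shown on the dashboard), an SPL query of this general shape narrows the results to recent error messages from one component type:

```
ERROR source=*executor* earliest=-4h
| head 100
```

Here `ERROR` is a plain keyword match, `source=*executor*` filters by the log source field, `earliest=-4h` restricts the search to the last four hours, and `head 100` caps the result set. See the Splunk documentation linked below for the full search language.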