Data Flow Pipelines
A pipeline is a series of processes or stages through which data flows systematically and efficiently. It helps visualize the flow of data between the nodes, rules, and filters applied to it.
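Conceptually, a pipeline can be thought of as an ordered list of stages, each of which may transform or drop an event. The sketch below is a minimal illustration of that idea; the `Pipeline` class, stage functions, and field names are all hypothetical and are not the product's actual API.

```python
from typing import Callable, Optional

Event = dict
# A stage takes an event and returns a (possibly modified) event,
# or None to drop the event from the flow.
Stage = Callable[[Event], Optional[Event]]

class Pipeline:
    def __init__(self, name: str, stages: list):
        self.name = name
        self.stages = stages

    def process(self, event: Event) -> Optional[Event]:
        # Each event flows through every stage in order.
        for stage in self.stages:
            event = stage(event)
            if event is None:       # a stage dropped the event
                return None
        return event

def drop_debug(event):
    # Illustrative filter stage: discard debug-level events.
    return None if event.get("level") == "debug" else event

def tag_source(event):
    # Illustrative tag/rewrite stage: annotate the event.
    return {**event, "source": "app-a"}

pipe = Pipeline("example", [drop_debug, tag_source])
```

Here `pipe.process({"level": "debug"})` returns `None` (the event is filtered out), while an info-level event passes through and picks up the `source` tag.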
Click on the Explore option from the top menu and click on Pipelines.
Hover on the Actions menu and click on Create Pipeline.
Enter the Pipeline name to create your pipeline.
Once the Pipeline is created, configure it with Rules, which determine how the events in the Dataflow are handled.
Hover on Add Rule and select the Rule you want to set (CODE, EXTRACT, FILTER, REWRITE, SIEM, STREAM, TAG). Once done, click the Save button to save the configuration.
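A saved pipeline configuration can be pictured as a name plus an ordered list of typed rules. The structure below is purely illustrative (the field names and rule parameters are assumptions, not the product's saved format); only the rule type names come from the list above.

```python
# Hypothetical shape of a saved pipeline configuration.
# Field names ("condition", "field", "tags", ...) are illustrative.
pipeline_config = {
    "name": "web-logs",
    "rules": [
        {"type": "FILTER",  "condition": "status < 400"},   # keep only errors
        {"type": "REWRITE", "field": "user", "value": "***"},  # mask a field
        {"type": "TAG",     "tags": ["web", "prod"]},          # label events
    ],
}
```

Because the rules are stored in order, each event is evaluated against them sequentially when the pipeline runs.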
You can also preview how the pipeline will be executed using the Preview option.
There are three ways to preview:
Using the Apply Pipeline option, you can apply the pipeline to multiple Dataflows.
Select the time range, then select the Namespace and Application (Dataflow) to which you want to apply the pipeline. If the namespace and application already have other Pipelines linked, those are displayed as well; the new one being associated appears at the bottom, outlined in green. You can reorder the pipelines by dragging.
Once the desired order has been set, click on Apply to apply it to the Dataflow. The pipelines will then execute in the order set.
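The ordering set by dragging matters because each pipeline receives the output of the previous one. A minimal sketch of that chained execution, with hypothetical stand-in pipelines (the real product applies saved Pipelines, not Python functions):

```python
def run_pipelines(event, pipelines):
    # 'pipelines' is the ordered list set in the UI; each entry takes an
    # event and returns a modified event, or None to drop it.
    for pipeline in pipelines:
        event = pipeline(event)
        if event is None:
            break
    return event

# Illustrative pipelines: mask the user, then tag the event.
redact = lambda e: {**e, "user": "***"}
tag    = lambda e: {**e, "env": "prod"}

result = run_pipelines({"user": "alice"}, [redact, tag])
```

Swapping the order of `redact` and `tag` would change which transformation sees the event first, which is why the drag-to-reorder step matters.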
All linked Dataflows are displayed under the Pipeline and can be accessed using the Arrow icon.
The Stats display the Events Ingested, Events Processed, and Saved Bytes:
Green indicates the total Events Ingested.
Orange indicates the Events Processed.
Red indicates the Saved Bytes.
The Pipeline stats show the total Events and Saved Bytes across all associated Dataflows, while the data for each associated Dataflow is displayed against that individual Dataflow.
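The relationship between per-Dataflow stats and the pipeline totals is a simple roll-up. The sketch below assumes made-up Dataflow names and numbers; only the three stat categories come from the section above.

```python
# Hypothetical per-Dataflow stats (names and numbers are illustrative).
dataflow_stats = {
    "app-a": {"ingested": 1000, "processed": 800, "saved_bytes": 20_000},
    "app-b": {"ingested": 500,  "processed": 450, "saved_bytes": 5_000},
}

# Pipeline-level stats are the sums over all associated Dataflows.
totals = {
    key: sum(stats[key] for stats in dataflow_stats.values())
    for key in ("ingested", "processed", "saved_bytes")
}
# totals == {"ingested": 1500, "processed": 1250, "saved_bytes": 25000}
```

Each individual Dataflow keeps its own breakdown, while the pipeline displays the aggregated figures.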