Reroute

Control Flow

Synopsis

Enables dynamic routing of logs to different target systems based on pipeline processing results.

Schema

- reroute:
    destination: <string>
    target: <string>
    source: <string>
    table: <string>
    index: <string>
    schema: <string>
    bucket: <string>
    container: <string>
    stream: <string>
    topic: <string>
    log_type: <string>
    namespace: <string>
    clone: <boolean>
    staging: <boolean | string>
    description: <text>
    if: <script>
    ignore_failure: <boolean>
    on_failure: <processor[]>
    on_success: <processor[]>
    tag: <string>

Configuration

The following fields are used to define the processor:

| Field | Required | Default | Description |
|---|---|---|---|
| destination | Y* | - | Name of the target system configuration to route to |
| target | N | - | Alias for destination |
| source | N | - | Source identifier override for the routed event |
| table | N | - | Database table or collection name (Sentinel, Data Explorer) |
| index | N | - | Search index name (Elasticsearch, Splunk) |
| schema | N | - | Schema identifier (ASIM tables, OCSF categories) |
| bucket | N | - | Storage bucket name (S3, Blob Storage, GCS) |
| container | N | - | Storage container name (Azure Blob Storage) |
| stream | N | - | Data stream identifier (Kinesis, Event Hubs) |
| topic | N | - | Message topic name (Kafka, Pub/Sub) |
| log_type | N | - | Log type classifier (Chronicle, various SIEMs) |
| namespace | N | - | Namespace identifier (Kubernetes, multi-tenant systems) |
| clone | N | false | Send a copy while preserving original destination metadata |
| staging | N | false | Stage the route for later commit instead of immediate delivery |
| description | N | - | Explanatory note |
| if | N | - | Condition to run |
| ignore_failure | N | false | See Handling Failures |
| on_failure | N | - | See Handling Failures |
| on_success | N | - | See Handling Success |
| tag | N | - | Identifier |

* Required unless one or more metadata fields (table, index, schema, etc.) are specified.
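
When only metadata fields are set, destination can be omitted. A minimal sketch, assuming the event then keeps its currently assigned target and only the routing metadata is updated:

- reroute:
    # Metadata-only route; destination omitted (assumes the current target is kept)
    table: "Syslog"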

Details

The Reroute processor makes routing decisions dynamically, after pipeline processing has run. While basic routing is configured at the source level, Reroute lets you express more complex routing logic based on conditions or on the results of earlier transformations.
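
As a minimal sketch of the idea, a Reroute step can be gated with an if condition placed after earlier processors. The field name severity and the target name soc_alerts below are illustrative, not part of any example configuration in this section:

- reroute:
    # Illustrative condition and target name
    if: 'severity == "critical"'
    destination: soc_alerts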

Destination Metadata

The metadata fields (table, index, schema, etc.) set destination-specific routing information that targets use for organization. These fields support template syntax for dynamic values:

- reroute:
    destination: "sentinel"
    table: "{{ .target_table }}"
    schema: "{{ .schema_name }}"

Staged Routing

When staging: true, the event is held in a staging area instead of being delivered immediately. Subsequent staged routes to the same destination overwrite the previous staged version. Use the commit processor to finalize all staged routes.

The staging field accepts:

  • true / false - Direct boolean value
  • Template string - Evaluated at runtime (e.g., "{{ .needs_staging }}")

This enables multi-tier pipelines where data is progressively normalized, with only the final version delivered to each destination. See Multi-Tier Pipelines for detailed patterns.
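
For instance, the decision to stage can itself come from the event. A brief sketch, assuming an earlier processor has set a boolean-like needs_staging field:

- reroute:
    destination: "sentinel"
    table: "CommonSecurityLog"
    # Evaluated at runtime; stages the route only when needs_staging is truthy
    staging: "{{ .needs_staging }}"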

Clone Mode

When clone: true, the processor sends a copy of the event to the destination while preserving the original destination metadata for subsequent processors. This is useful when you need both the current state and a later processed state delivered separately.

Warning

Make sure the destination value exactly matches the name of a target system defined in your configuration.

Common use cases:

  • Security - Parse and normalize logs, enrich with threat intelligence, and route high-risk events to security platforms

  • Compliance - Filter sensitive data, apply transformations, and route the results to compliance-mandated destinations

  • Cost reduction - Process high-volume logs, filter out unnecessary data, and route relevant logs to premium storage/analysis platforms

  • Multi-tier normalization - Progressive normalization with staged routing to deliver appropriate formats to each destination

Examples

Microsoft Sentinel

First, define your Sentinel target...

targets:
  - name: auto_sentinel
    type: sentinel
    properties:
      - tenant_id: "00000000-0000-0000-0000-000000000000"
      - client_id: "00000000-0000-0000-0000-000000000000"
      - client_secret: "your-client-secret"
      - rule_id: "dcr-00000000-0000-0000-0000-000000000000"
      - endpoint: "https://your-dcr-endpoint"

then use Reroute to send the logs...

pipelines:
  - name: security_pipeline
    processors:
      - grok:
          field: message
          pattern: '%{IPADDR:source_ip}'
      - reroute:
          if: 'source_ip matches "10.0.0.*"'
          destination: auto_sentinel

Conditionals

Process logs and route them based on the extracted data...

pipelines:
  - name: firewall_logs
    processors:
      - checkpoint:
          field: message
      - reroute:
          if: 'checkpoint.action == "Drop" && checkpoint.severity >= 3'
          destination: high_priority_sentinel
      - reroute:
          destination: standard_sentinel

using different target configurations:

targets:
  - name: high_priority_sentinel
    type: sentinel
    properties:
      - tenant_id: "tenant1"
      # ... high priority
  - name: standard_sentinel
    type: sentinel
    properties:
      - tenant_id: "tenant2"
      # ... standard

Multi-Stage

Process logs through multiple stages before routing...

pipelines:
  - name: complex_processing
    processors:
      - json:
          field: message
      - grok:
          field: parsed_message
          pattern: '%{DATA:app_name}'
      - user_agent:
          field: user_agent
      - geoip:
          field: ip_address
      - reroute:
          if: 'geoip.country_code not in ["US", "CA"] && app_name == "core_banking"'
          destination: security_analytics
      - reroute:
          destination: standard_logs

Staged Routes

Stage routes for multi-tier normalization...

pipelines:
  - name: multi_tier_normalization
    processors:
      # Stage raw format
      - reroute:
          destination: "archive"
          table: "Syslog"
          staging: true

      # Normalize to CSL and stage
      - normalize:
          target_format: csl
      - reroute:
          destination: "sentinel"
          table: "CommonSecurityLog"
          staging: true

      # Normalize to ASIM and stage (overwrites previous)
      - normalize:
          target_format: asim
      - reroute:
          destination: "sentinel"
          table: "ASimNetworkSession"
          staging: true

      # Commit all staged routes
      - commit:

Each staged route to the same destination overwrites the previous version, delivering only the final normalized form.

With Metadata

Route with destination-specific metadata...

- reroute:
    destination: "sentinel"
    table: "ASimNetworkSession"
    schema: "NetworkSession"
    source: "firewall-cluster-01"

Use templates for dynamic metadata values...

- reroute:
    destination: "data_lake"
    bucket: "logs-{{ .environment }}"
    namespace: "{{ .kubernetes.namespace }}"

Clone Mode

Send a raw copy to the archive while continuing processing...

pipelines:
  - name: archive_and_process
    processors:
      # Send raw copy to archive
      - reroute:
          destination: "archive"
          clone: true

      # Continue processing for SIEM
      - normalize:
          target_format: asim
      - reroute:
          destination: "sentinel"

Clone mode preserves the original destination metadata for subsequent processors.