Documentation for triggers and running workflow executions
Triggers are the operators used to execute a workflow automatically. They are connected to actions within workflows - often as the starting node. Triggers usually take an execution argument that is used as input when the workflow executes.
Triggers, alongside apps and variables, can be found on the left-hand side, in a tab called "Triggers".
Triggers are developed by Shuffle specifically to give users multiple ways to run a workflow. Any trigger that is unavailable on-premises is unavailable due to access requirements, not because features are hidden.
There are currently five triggers - Webhook, Schedule, Subflow, User Input, and Pipeline - which, together with manual runs, make for six ways to execute workflows. Each is described below.
ALL triggers are available EVERYWHERE if you have a Shuffle subscription. This further allows routing executions through the cloud (without saving any data) to your open source instance, without needing to open your firewall(s) for inbound requests.
Example: Say you want to get messages from a service like a SIEM, but it's in a different data center. How do you get that request all the way to your instance? This can be done by setting up cloud synchronization, allowing for:
1. Remote SIEM -> Send Webhook to https://shuffler.io as a "proxy"
2. Your local instance looks for jobs from an organization you own on https://shuffler.io
3. When a webhook job is found on https://shuffler.io - it will execute in your local instance
Read more about cloud synchronization in the organization documentation.
When a trigger runs, you will NOT be notified about it anywhere; it runs behind the scenes. You can however find its data and executions by going to the specific workflow's UI, then clicking "See all executions" at the bottom (the running person icon).
Webhooks are the real-time handler for Shuffle data. They were initially implemented to handle data from Office365 connectors and TheHive, but have since become a generic trigger that accepts any kind of HTTP data.
HTTP Method(s): POST, GET
PS: The body of a POST request becomes the execution argument. Query parameters in a GET request are converted to JSON.
In later versions of Shuffle, you additionally have access to an authentication field. This field is based on the headers of the request: any header added to it will be REQUIRED from the sender of the webhook.
Add one header per line. In the image below, the request would need to contain the headers "authorization" and "authorization2" with the exact values specified.
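As a sketch of what such an authenticated request could look like (the header values below are hypothetical and must match whatever you configured):

curl -XPOST https://shuffler.io/api/v1/webhooks/<webhook id> -H "authorization: your-secret-value" -H "authorization2: another-secret-value" --data '{"test": "testing"}'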
To start using webhooks, you should have a service that supports webhooks (e.g. the two mentioned above). If you just want to test, you can also use curl.
Once you have a service ready:
1. Drag the "Webhook" trigger from the left-hand side into the main view, and you should see it automatically connect to your starting node.
2. Click "start" to deploy this webhook (it might be auto-started). It will also be stopped if you remove the node or the workflow. A webhook cannot be in multiple workflows, but you can deploy as many as you want.
3. Find your webhook URI in the "Webhook URI" window and copy it to your service.
Test it! Say your URI is https://shuffler.io/api/v1/webhooks/webhook_336a7aa2-e785-47cc-85f4-31a4ab5b28b8 and you want to use {"test": "testing"} as your execution argument:
curl -XPOST https://shuffler.io/api/v1/webhooks/webhook_336a7aa2-e785-47cc-85f4-31a4ab5b28b8 --data '{"test": "testing"}'
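The same can be achieved with a GET request, since query parameters are converted to JSON as noted above (same hypothetical URI; the resulting execution argument should again be {"test": "testing"}):

curl -XGET 'https://shuffler.io/api/v1/webhooks/webhook_336a7aa2-e785-47cc-85f4-31a4ab5b28b8?test=testing'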
Subflows are a trigger made to run workflows within other workflows. There is also a "parent/child" relationship, meaning there is always a reference to where the execution came from. A subflow can also run itself, making recursion (infinite loops) possible.
Purposes we'll cover:
Requirements:
Here are the known missing features for subflows:
Drag a subflow into the PARENT view from the left side and click it
Choose a workflow to run. This will use the STARTNODE as default, which can be changed. The red workflow is the one you're currently working on.
We'll decide a startnode and the information to send to the subflow. In our case, we'll use the list below and send one item at a time, with the IP to be searched for in VirusTotal. The data containing the list is named Repeat_list in our case, hence we use $Repeat_list.#
[{"ip": "1.2.3.4", "malicious": true}, {"ip": "4.3.2.1", "malicious": false}, {"ip": "1.2.3.5", "malicious": true}]
It's time to explore the results. In total, this should execute FOUR times: once for the parent workflow and once for EACH of the three objects in the list.
Here's a view of all the executions. Notice how three of them have a different icon? That is because they are subflows. PS: It will always show the maximum number of nodes in the workflow, whether it's a subflow or not.
Exploring the subflows, we find that the IPs have been searched for in VirusTotal, and that we have a reference to the parent. Clicking the reference takes you to the parent execution.
On-prem: Schedules on-prem are run on the webserver itself. Rather than cron, Shuffle uses a scheduler that runs them every X seconds. This will become more sophisticated over time. Schedules persist in the database even when you turn off Shuffle, so don't be afraid to update.
Cloud: Schedules are based on Google's Cloud Scheduler and run based on cron. A schedule takes two arguments: the cron schedule you want to run it on, and the execution argument used in the workflow. A simple cron converter can be found here. When you are ready to run, click "start".
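As a quick, hypothetical example of the cron argument, the expression below runs a workflow every 15 minutes (the five fields are minute, hour, day of month, month, and day of week):

*/15 * * * *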
User Input is an app located within triggers. It provides a way to temporarily pause an ongoing execution, awaiting approval or denial from a human through a manual click before continuing or stopping the subsequent executions.
This can currently be achieved through Subflows (preferred for production), as well as Email and SMS for rapid testing on the fly. We will introduce many other options, including chat systems.
Note: If you have any suggestions pertaining to this, please let us know via a feature request on github.
The point of the user input node is that it acts as a crucial control point, allowing human oversight and decision-making within automated processes. For example:
Scenario 1: Granting or revoking user access privileges. User Input: Authorization from appropriate personnel before modifying user roles or permissions.
Scenario 2: Rolling out software updates across a network. User Input: Approval from IT administrators before deploying updates to servers or critical systems.
Scenario 3: Investigating and responding to security incidents. User Input: Analyst decision for remediation actions, such as isolating a compromised system or blocking a suspicious IP address.
1. In the bottom-left corner, select the triggers tab.
2. Drag the user input node into your workflow.
3. Set it up where you want approval before proceeding with subsequent nodes in the workflow.
This appears as shown below in a user's email inbox, requesting the user's input. Remember, this action can be sent to whatever communication channel you use. It can also be sent to multiple users, but can only be triggered once for now (we may add multi-user approval in the future).
Frontend_continue and frontend_abort open a new tab with a Shuffle UI prompting you to either proceed or abort.
API_continue and API_abort open a new tab informing you whether the operation you selected was a success or not, as shown below.
A pipeline is a sequence of interconnected steps or stages that take input data, transform it through various operations, and produce an output. The data enters the pipeline at one end, undergoes transformations at each stage, and emerges as a refined output at the other end.
In Shuffle, we are currently using Tenzir data pipelines.
Syslog Listener: Used to ingest logs into the pipeline.
Sigma Support: Pipelines support Sigma rules, which allow us to define and customize detection rules. This gives Shuffle the ability to control what to detect and the rules governing these detections. Whenever logs match the Sigma rules, a workflow run is triggered in Shuffle.
Kafka Forwarder: Configures the pipeline to actively forward all messages from a specified topic to your workflow.
Additional features will be added in the future.
To start using pipelines for detection, you first need to set up or download Sigma rules. Once downloaded, the Sigma rules can be viewed and managed, so you control which ones are enabled.
Drag the "pipeline" trigger from the left-hand side into the main view, and it should automatically connect to your starting node.
To start the syslog listener, click on the syslog listener. This will start the listener at 192.168.1.100:5162 on your host machine. You can connect to this endpoint via the TCP protocol and send your syslog data.
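As a quick test, you can push a sample syslog line to the listener with netcat (a sketch; adjust the address and port to match your own listener):

echo '<13>Oct 10 12:00:00 myhost myapp: test message' | nc 192.168.1.100 5162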
For running detection rules, click on the Sigma rule search option. This creates a pipeline that takes the ingested logs and applies the Sigma rules that are downloaded and enabled. Whenever logs match the defined rules, the matching logs are sent to Shuffle, triggering the workflow run.
For forwarding Kafka messages from a topic, click on the "follow Kafka queue" option. You will see a pop-up asking for Kafka-specific information that you need to provide, such as the topic name and bootstrap server address. Once you provide the required details, click submit and start. This will actively forward all incoming messages from your Kafka topic to the workflow.
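To verify the forwarding, you can publish a test message to the topic with Kafka's console producer (a sketch; the topic name and bootstrap server below are placeholders for your own values):

echo '{"alert": "test"}' | kafka-console-producer.sh --topic my-topic --bootstrap-server localhost:9092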
Manual forwarding may be necessary when, for example, Shuffle doesn't have access to the location you are trying to reach. This makes Shuffle unable to manage rules dynamically, but still allows for customization. Place your Sigma rules in the following directory on the Tenzir node:

/var/lib/tenzir/sigma_rules

Then run a pipeline that matches stored events against those rules and forwards any hits to your workflow's webhook:

export | sigma /var/lib/tenzir/sigma_rules | to <webhook url>
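As an end-to-end sketch, assuming shell access to the Tenzir node with the tenzir CLI available (the rule below is a minimal, hypothetical example):

# Write an illustrative Sigma rule into the rules directory
cat > /var/lib/tenzir/sigma_rules/suspicious_curl.yml <<'EOF'
title: Suspicious curl download
logsource:
  category: process_creation
detection:
  selection:
    CommandLine|contains: 'curl http'
  condition: selection
level: medium
EOF

# Run the forwarding pipeline manually, replacing <webhook url> with your webhook URI
tenzir 'export | sigma /var/lib/tenzir/sigma_rules | to <webhook url>'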
Email triggers no longer exist and should be handled with Email schedules instead: Gmail, Outlook