In part 1, we:
- Configured scheduled osquery query on my linux endpoint
- Pushed the resulting data to a Kinesis stream established by StreamAlert
In this post we’ll:
- Configure StreamAlert to alert when a condition is met
- Feed StreamAlert alerts to Slack
One of the first things we need to do is update conf/logs.json and conf/sources.json:
- Validate that the log schema can be found in logs.json.
- Validate that the newly created Kinesis source, as well as the expected log types, are defined in conf/sources.json.
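As an illustration, here is roughly what those two entries could look like. The schema keys, the "osquery" log name, and the stream name "my_stream_name" are assumptions based on my setup, not canonical values, and the exact layout depends on your StreamAlert version. A conf/logs.json entry:

```json
{
  "osquery": {
    "schema": {
      "name": "string",
      "hostIdentifier": "string",
      "calendarTime": "string",
      "unixTime": "string",
      "columns": {},
      "action": "string"
    },
    "parser": "json"
  }
}
```

And the matching conf/sources.json entry, keyed by the Kinesis stream name:

```json
{
  "kinesis": {
    "my_stream_name": {
      "logs": [
        "osquery"
      ]
    }
  }
}
```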
You’ll notice that I have deleted the default sources, which start with “prefix_cluster1_*”. If you do this, keep in mind that the default rule test data found in test/integration/rules will no longer be treated as valid data, as the source information in those test cases is associated with the default “prefix_cluster1” sources. You could go through those json files and modify the sources to match your conf/sources.json, or, if you don’t care about the sample rules/data, you can just delete the sample json files.
Next, let’s create a rule in StreamAlert that will fire when certain criteria are met. An interesting event we might alert on is netcat being used on a device.
- rules docs
I created a new file in rules/ called new_rules.py.
The first 4 lines are required. Each rule is defined as a Python function, which must include the @rule function decorator.
Within the @rule decorator, you can define the list of log sources and outputs that are relevant to the rule. The decorator also lets you define an optional “matcher”, which prevents the rule function from executing if certain criteria are not met.
The rule logic is like so:
Is the record getting passed to this function related to the “listening_ports” query?
If so, is the process name “nc” or “netcat”?
If so, return True.
It would probably be safer to use dict.get(key, default) to ensure that the fields we are looking for actually exist. Catching an exception with a try/except would be even safer.
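The logic above, with the safer dict.get() lookups, can be sketched as a plain function. Note that the field names ("name", "columns", "process_name") are assumptions based on my osquery query output, and in StreamAlert this function would be wrapped with the @rule decorator rather than called directly:

```python
def nc_listener(rec):
    """Return True if a netcat process appears in a listening_ports record.

    Sketch only: assumes osquery records shaped like
    {"name": "listening_ports", "columns": {"process_name": "nc", ...}}.
    """
    # dict.get() returns None (instead of raising KeyError) when a field
    # is missing, so malformed records simply fail the checks below.
    if rec.get('name') != 'listening_ports':
        return False
    columns = rec.get('columns', {})
    return columns.get('process_name') in ('nc', 'netcat')


sample = {'name': 'listening_ports',
          'columns': {'process_name': 'nc', 'port': '8443'}}
print(nc_listener(sample))  # True
```

A record for a different query, or one missing the columns field entirely, falls through to False rather than raising.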
Once finished with the rules, you must modify main.py to import the newly created module:
Now that we have a rule, we should probably create some test data to make sure that it will actually fire. I created a json file within test/integration/rules/ called nc_finder.json. The name of this file must match the name of the rule that was just created, otherwise the test will not work.
Note that the source needs to match what is in your conf/sources.json, and the schema needs to match what is in conf/logs.json.
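For illustration, a test fixture might look something like the sketch below. The exact envelope keys depend on your StreamAlert version, and the data fields and stream name "my_stream_name" are assumptions matching my schema, so treat this as a shape to adapt rather than copy:

```json
{
  "records": [
    {
      "data": {
        "name": "listening_ports",
        "hostIdentifier": "my-laptop",
        "columns": {
          "process_name": "nc",
          "port": "8443"
        },
        "action": "added"
      },
      "description": "a netcat listener should trigger the rule",
      "trigger": true,
      "source": "my_stream_name",
      "service": "kinesis"
    }
  ]
}
```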
We can test to see if this fires with ./opt/streamalert/stream_alert_cli.py lambda test --source kinesis --func alert
In the rule I created above, I have Slack setup as an output. Now we need to make sure that our (encrypted) Slack information is stored within StreamAlert.
The first step is to create a Slack webhook token. Note that the token required is not your personal Slack OAuth token.
Next, follow the steps outlined in this doc. Note that before you can execute the aws cli command, you need to:
- Ensure that your AWS account credentials are set in your environment variables.
- Ensure that StreamAlert has been initialized using ./stream_alert_cli.py terraform init, as the alias that is referenced will not exist prior to initialization.
Once that’s done, you should see a file named slack in your stream_alert_output/encrypted_credentials directory, with a bunch of garbage (the encrypted credentials) within.
We should now be ready to push the changes to AWS with ./stream_alert_cli.py lambda deploy
I should have mentioned CloudWatch a while ago, but it is a very valuable resource when debugging issues. Before you push changes to “production”, you can validate that your rule is working correctly by monitoring the logs in CloudWatch.
From my endpoint, I should be able to get StreamAlert to send a message to Slack soon after I create a netcat listener. Let’s try it out!
With osqueryd running, I created a netcat listener: