Triaging Alerts with Threat Indicators

by Gregory Pickett
Nov. 12, 2017

Introduction

The modern-day Cybersecurity Operations Center faces many challenges. For most, the primary challenge is the workload: the number of events, and the number of alerts those events generate, overwhelms most teams (Oltsik, 2017). What's more, this is only going to get worse. New means of detection are being introduced. Networks expand, and new sensors get added. More and more systems join the network, and more software is enabled for network communication. Teams having trouble getting through all their alerts now will keep having trouble for the foreseeable future.

To address this, most teams employ a technique borrowed from the medical field: triage. Currently, triage is done by severity; the alerts with the most potential for damage are processed first. This research presents an alternative, and possibly more effective, approach: triage by Threat Indicators.

Threat Indicators

Theory

What are Threat Indicators?

Threat Indicators are behaviors that are consistent with a threat. Two examples from VirusTotal are "Detected files that were downloaded from this address" and "Detected files that communicate with this address." If you have a favorite spam blacklist site, an address appearing on that site or its list would also be a Threat Indicator.

Threat Indicators are attached to or associated with the adversary in the alert. The adversary is the outside system seen in the alert, the unknown system. For incoming connections, the adversary would be the source. For outgoing connections, the adversary would be the destination. A few examples can be seen in Figure 1.

Figure 1. Adversaries in individual alerts
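
To make the selection concrete, here is a minimal sketch in Python, assuming flow-style alert records with hypothetical "src", "dst", and "direction" fields:

    def adversary_of(flow: dict) -> str:
        """Pick the unknown, outside system from a flow record."""
        # Incoming connection: the outside source is the adversary.
        # Outgoing connection: the outside destination is the adversary.
        return flow["src"] if flow["direction"] == "incoming" else flow["dst"]

    print(adversary_of({"src": "203.0.113.7", "dst": "10.0.0.5",
                        "direction": "incoming"}))  # -> 203.0.113.7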

What do Threat Indicators measure?

First, Threat Indicators measure the adversary's acceptable behavior, that is, what the owner, or provider, of the system allows. Second, Threat Indicators measure intent: what the system operator has done in the past gives you an idea of their intent.

How are Threat Indicators measured?

Threat Indicators are observations. They are observations made by others, and they are represented by a number: the behavior has been seen X number of times.

How do Threat Indicators work?

Given the past behavior of the adversary, as measured by Threat Indicators, how likely is it that the adversary is, at this moment, a threat? The greater the number of Threat Indicators, the more likely the adversary in the alert is a threat. Therefore, the alert with the greatest number of Threat Indicators gets looked at first, the alert with the next greatest number gets looked at next, and so on.
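
As a concrete illustration, here is a minimal sketch of that ordering in Python. The alert records and their "indicators" field are hypothetical stand-ins for whatever count your enrichment step attaches to each alert.

    # Order the alert queue by Threat Indicator count, highest first.
    alerts = [
        {"id": 101, "adversary": "203.0.113.7",  "indicators": 2},
        {"id": 102, "adversary": "198.51.100.9", "indicators": 6},
        {"id": 103, "adversary": "192.0.2.44",   "indicators": 0},
    ]

    triage_queue = sorted(alerts, key=lambda a: a["indicators"], reverse=True)

    for alert in triage_queue:  # id 102 first, then 101, then 103
        print(alert["id"], alert["adversary"], alert["indicators"])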

Sources of Threat Indicators

Threat Indicators are found along the arc, or stages, of an adversary's attack campaign. The types of indicators, defined by where along the arc the indicator lies, are as follows: Exploitation, Post-Exploitation, and Reputation. Exploitation and Post-Exploitation indicators are all about specific threatening activity seen from an adversary; the indicator shows exactly what the threat looks like and what it may do. Reputation is a composite of multiple indicators and, despite being a composite, is reported as a single summary indicator. Indicators can relate to either the adversary's address or the name the adversary is using. Some sample Threat Indicators can be seen in Figure 2.

Figure 2. Sample Threat Indicators

Why do Threat Indicators work?

Threat Indicators are a variation on behavioral analysis. However, they are more about semantics than signal. Threat actors and their methods constantly change; their goals, however, remain the same, and thus their behavior remains constant. If an adversary has a history of engaging in this type of activity, the probability that they will continue is high and therefore actionable. Looking at behaviors over time and from multiple observers, or sources, ensures accuracy.

Practice

Implementing Threat Indicators

Implementing Threat Indicators will probably be the most challenging part. If your current solution does not support them, and as you can see from Table 1 below many do not, you will have to develop software. Either your SIEM will have to be customized, or a tool deployed that guides your SIEM activities: something that, while not fully part of your platform, is nevertheless implemented by making it part of your process. A sketch of such a helper follows Table 1.

Table 1. Triage Method for Popular SIEMs
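
As one possible shape for such a helper, the sketch below enriches alerts outside the SIEM before they are queued. Both lookup functions are hypothetical placeholders for whatever feeds, VirusTotal, a spam blacklist, or internal logs, you actually query.

    def count_download_detections(address: str) -> int:
        """Hypothetical: detected files downloaded from this address."""
        return 0  # replace with a real feed query

    def count_blacklist_listings(address: str) -> int:
        """Hypothetical: blacklist or reputation listings for this address."""
        return 0  # replace with a real feed query

    def enrich(alert: dict) -> dict:
        """Attach a Threat Indicator count to an exported SIEM alert."""
        adversary = alert["adversary"]
        alert["indicators"] = (count_download_detections(adversary)
                               + count_blacklist_listings(adversary))
        return alert

Alerts would be exported from the SIEM, passed through enrich(), and re-ordered by the resulting count before analysts pick them up.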

Benefits of Threat Indicators

First, and foremost, Threat Indicators allow an organization to identify material alerts, alerts that are likely to involve a genuine threat, because Threat Indicator triaging is predictive in nature. Past behavior is the best predictor of future behavior. Triaging by severity is in no way predictive, while triaging by Threat Indicators is: a big improvement.

Second, using this method, an organization can get through the alerts faster, because analysts no longer have to go through all of them to find the ones that are material. They only need to go through the ones with a high number of Threat Indicators. That number of alerts is significantly smaller than the total and takes less time to get through. The filtering itself is simple, as the sketch below shows.
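
The filtering step this benefit describes is small enough to sketch in one line, again using the hypothetical alert records from the earlier sketch:

    # Alerts with zero indicators are skipped entirely, not merely deprioritized.
    alerts = [{"id": 101, "indicators": 2}, {"id": 103, "indicators": 0}]
    material = [a for a in alerts if a["indicators"] > 0]  # only id 101 remains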

Third, because the team gets through the alerts faster, it has more time to spend on mitigation and remediation.

Limitations of Threat Indicators

While there are many benefits to Threat Indicators, there are some limitations. Often, multiple systems are involved in an alert, but only one system triggers it. That system on its own may not have many indicators, while other systems involved in the event, but not reported on by the sensors, may have many. These other systems will be missed, leaving the analyst to conclude that the alert does not represent a significant threat.

There will also be situations in which observers are not available, either because the address never engaged in "bad" acts before or because no one was around to see them. Either way, the adversary won't have a history from which to measure Threat Indicators.

Some of the indicators may be of poor quality. Poor quality can mean observations that are not valid, which would make an adversary appear more threatening than they really are.

Finally, alerts involving only private addresses can be a challenge. When it is one of your private addresses communicating with another of your private addresses, or a partner's private address communicating with one of yours, it can be difficult to select the adversary. It can be even more difficult to find observers: there is no third party to report any bad behavior the adversary may have engaged in.

Addressing Those Limitations

The first limitation, sensors not seeing all the systems, has the biggest effect on your analysis and ultimately your investigation. It is also a limitation that will probably always be with us; we are never going to get one hundred percent visibility. We will simply have to strive, as we always do, to improve visibility. It is that drive forward that will ultimately mitigate this limitation.

The second limitation is lack of history. Here, defense in depth will be key. Sensors exist along the attack path and throughout the path of any campaign. As the adversary keeps moving forward, in your enterprise as well as in others', there will be more opportunities to see their activity. Every time their activity is seen, there is an opportunity for someone to observe the bad acts and contribute to the history, so that analysts using this approach will eventually have a history to consult. Whether someone else observes the adversary and informs you, or you get a better look at the adversary's activities and inform the others, the Threat Indicators will show up. The adversary will be caught.

The third limitation, poor-quality indicators, is found most often when Threat Indicator acquisition is automated. Automation itself is not the problem; in any large-scale collection, automation is necessary. The problem is the absence of curation. When there is no curation, poor-quality data is never corrected or removed from the data set. For this limitation, your own curation will have to replace that of others. As part of its process, the organization will verify the veracity of each Threat Indicator source by performing its own analysis of the results. For this, a comparison of Threat Indicators to threat confirmations is in order: regularly compare what the Threat Indicators said to what the actual analysis determined. If you find a deviation, examine the alerts and determine whether they have Threat Indicators in common. Examine the Threat Indicators involved in alerts that turn out to be less material than the indicators suggested, and investigate why each indicator was wrong. If an indicator is regularly wrong, it will have to be replaced. If it can't be replaced, introduce a weighting system so that higher-quality Threat Indicators have a bigger effect on the decision-making process while those of lesser quality, or weight, have a lesser effect.
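
One way such a weighting system might look, as a sketch only: the source names and weights below are hypothetical, with each weight reflecting how well that source's indications matched confirmed outcomes in your own review.

    # Sources that predicted Confirmed Incidents well keep a weight near 1.0;
    # a source found to be regularly wrong is down-weighted instead of dropped.
    INDICATOR_WEIGHTS = {
        "detected_downloads": 1.0,
        "detected_callbacks": 1.0,
        "spam_blacklist":     0.4,  # noisy in review, so down-weighted
    }

    def weighted_score(observations: dict) -> float:
        """observations maps indicator name -> observation count."""
        return sum(INDICATOR_WEIGHTS.get(name, 0.5) * count
                   for name, count in observations.items())

    # Triage then sorts by weighted score rather than raw indicator count.
    print(weighted_score({"detected_downloads": 3, "spam_blacklist": 5}))  # 5.0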

The fourth, and final, limitation is private addresses. With private addresses, determining the adversary can be difficult because there is no outside system, no unknown to assess. To address this, rely on trust: the adversary is the system that your organization trusts least. For private-address-to-private-address traffic, this can be quite easy. If the traffic is between your system and a partner system, the partner system is the one you trust least and is therefore the adversary. Between a high-trust zone and a low-trust zone in your own network, the system in the low-trust zone is the adversary. With private addresses, there are also no outside observers to report Threat Indicators. To address this, you will have to instrument your network to provide your own reporters, which means using your existing sensors to accumulate events that can be consulted during triage. It also means looking at your logs and determining whether there are other ways to report on adversary activity. With enough internal sources, you should be able to produce the Threat Indicators you need.
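
A minimal sketch of the trust rule for private-to-private traffic follows. The zone map is hypothetical; the only real logic is that the endpoint in the least-trusted zone is treated as the adversary.

    import ipaddress

    # Hypothetical zone map: subnet -> trust level (lower = less trusted).
    ZONES = {
        ipaddress.ip_network("10.1.0.0/16"): 1,  # partner network: trusted least
        ipaddress.ip_network("10.2.0.0/16"): 2,  # low-trust internal zone
        ipaddress.ip_network("10.3.0.0/16"): 3,  # high-trust internal zone
    }

    def trust_of(address: str) -> int:
        addr = ipaddress.ip_address(address)
        for net, level in ZONES.items():
            if addr in net:
                return level
        return 0  # unknown private space: trust it least of all

    def pick_adversary(src: str, dst: str) -> str:
        """The adversary is the endpoint in the zone trusted least."""
        return src if trust_of(src) <= trust_of(dst) else dst

    print(pick_adversary("10.3.0.5", "10.1.4.9"))  # -> 10.1.4.9 (partner side)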

Triaging with Threat Indicators

Individually

To demonstrate how an alert would be triaged using this method, several examples follow. Each is a real alert. We will review the adversary's Threat Indicators and compare the Threat Indicator results to the outcome of the analysis. We will not be using the full range of indicators available, just a subset, but it should give you an idea of how the method works. While the Outcome is not necessary to demonstrate the triaging process, it does show how well the Threat Indicator total predicts the Outcome.

Alert One

This alert is a flow-based alert. It triggered when a system on the network connected to a known botnet command and control server. The Threat Indicators and the Outcome of the alert can be seen in Table 2 below.

Table 2. First Alert

Alert Two

This alert is a payload-based alert. It triggered when a system on the network browsed to a site. The browsing was seen as a “Microsoft Internet Explorer invalid object access memory corruption attempt” event by the IPS. The Threat Indicators and the Outcome of the alert can be seen in Table 3 below.

Table 3. Second Alert

Alert Three

This alert is a payload-based alert. It triggered when a system on the network browsed to a site. The browsing was seen as a “Microsoft Internet Explorer invalid object access memory corruption attempt” event by the IPS. The Threat Indicators and the Outcome of the alert can be seen in Table 4 below.

Table 4. Third Alert

Process

Next, to make sure that these results held true for a larger, more realistic sample, we will look at over sixteen hundred alerts. If this approach is effective in identifying threats, we should see that as the Threat Indicator count increases, so does the probability that the alert is a Confirmed Incident; conversely, the probability that the alert is a False Positive should go down. Both proved true according to the data found in Table 5.

There, we see a batch of 1,642 alerts taken from an operating Cybersecurity Operations Center. These alerts were triaged using Threat Indicators and then grouped by the number of Threat Indicators the adversary had. As we move up from no indicators to one, to two, and so on, the percentage of alerts that turned out to be Confirmed Incidents after analysis increased, while the percentage that turned out to be False Positives decreased.

Table 5. Relationship between Threat Indicators and Outcome
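
A short sketch of how that grouping could be computed, assuming each analyzed alert records its indicator count and final Outcome (both field names hypothetical):

    from collections import defaultdict

    def outcome_rates(alerts):
        """Group alerts by Threat Indicator count; report outcome shares."""
        groups = defaultdict(lambda: {"Confirmed Incident": 0, "False Positive": 0})
        for a in alerts:
            groups[a["indicators"]][a["outcome"]] += 1
        for count in sorted(groups):
            g = groups[count]
            total = sum(g.values())
            print(count, {k: f"{v / total:.0%}" for k, v in g.items()})

    outcome_rates([
        {"indicators": 0, "outcome": "False Positive"},
        {"indicators": 0, "outcome": "False Positive"},
        {"indicators": 6, "outcome": "Confirmed Incident"},
    ])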

Chart 1 shows the same data in a scatter chart. If you look closely, you can see that the number of Confirmed Incidents does go up as the number of Threat Indicators goes up, and the number of False Positives goes down. However, the difference between the two Outcomes was not as great as anticipated.

Chart 1. Relationship between Threat Indicators and Outcome

This was primarily due to anomalies in the seven- and nine-Threat-Indicator groups. Given the small number of alerts in those groups, it was important to see if anything was skewing the data. Any skew, given how few alerts those groups contained, would be significant, so it was best to eliminate it to get a better idea of the approach's true impact.

Looking at the alerts, it was easy to see that the primary reason for the large number of False Positives in these two groups was several connections to Dell. The connections were to Dell FTP servers, and those servers were associated with a large number of Threat Indicators, which was obviously due to misattribution. When the effect of this misattribution is removed from the data, you get Table 6.

Table 6. Relationship between Threat Indicators and Outcome (Misattribution Removed)

Chart 2 shows the same data in a scatter chart. As you can see, once the misattribution is removed, the difference between the two Outcomes is much greater, demonstrating that the approach is indeed effective in identifying threats.

Chart 2. Relationship between Threat Indicators and Outcome (Misattribution Removed)

Next, we will look at how long it would take to get through these alerts. If the Threat Indicator approach is more effective, it should take the Cybersecurity Operations Center less time to go through the alerts using it.

To see if this was true, we compared the time it took to go through just the alerts with Threat Indicators to the time it took to go through all the alerts. It is true that most SIEM solutions triage by severity. However, severity isn't a filter the way Threat Indicators are. Threat Indicators predict Outcome, and predicting Outcome allows you to skip over an alert because it can be predicted with relative certainty that the alert does not contain a threat. Severity is not the same; it is not predictive. It is aimed at reducing mitigation effort. Therefore, you cannot use it to skip over alerts, and if you cannot skip over alerts, you have to go through all of them.

For this comparison, it was assumed that each of these alerts would take fifteen minutes to complete. It was also understood that for the Threat Indicator approach, we would only be analyzing those alerts with at least one Threat Indicator. We can see the results of the comparison below in Table 7.

In the first section, we have the total number of alerts. In the second section, we have our assumptions. With those assumptions, it took only 311 hours to go through the alerts with Threat Indicators, while going through them all took 411 hours. In total, the Threat Indicator approach took 100 fewer hours, proving that, yes, analysts can get through their alerts in less time.

Table 7. Time to find all Confirmed Incidents (By Method)
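
The arithmetic behind Table 7 can be reproduced directly. Note that the 1,244-alert figure below is not stated in the text; it is implied by the 311-hour total at fifteen minutes per alert.

    MINUTES_PER_ALERT = 15
    total_alerts = 1_642
    alerts_with_indicators = 1_244  # implied by 311 hours at 15 min/alert

    hours_all = total_alerts * MINUTES_PER_ALERT / 60                 # 410.5, ~411
    hours_filtered = alerts_with_indicators * MINUTES_PER_ALERT / 60  # 311.0
    saved = hours_all - hours_filtered                                # ~100 hours
    print(f"saved {saved:.0f} hours ({saved / hours_all:.0%})")       # ~24%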

Conclusion

From the data above, we can see that by using Threat Indicators an organization is better able to identify whether an alert involves a threat. Based on a selection of 1,642 alerts from a real Cybersecurity Operations Center, the greater the number of Threat Indicators present, the higher the likelihood that the alert was a Confirmed Incident; conversely, the greater the number of Threat Indicators present, the less likely the alert was a False Positive. At six indicators, the likelihood that an alert was an incident was 79%. The opposite was also true: the fewer Threat Indicators present, the less likely an alert was a Confirmed Incident and the more likely it was a False Positive. At one indicator, the likelihood that the alert was a False Positive was 85%. The method is very effective in identifying threats and therefore allows much quicker triage.

From the data above, we can also see that, yes, the alerts can be processed more quickly. Going through only the alerts with Threat Indicators reduces processing time by 24%. When working with limited staff and limited time, this can make all the difference. It can mean getting to all the threats rather than having to ignore alerts and possibly miss some of them (Tara, 2016).

In conclusion, based on the data, using Threat Indicators would allow a Cybersecurity Operations Center to triage more effectively and more quickly.

Published with the express permission of the author.