Threat Intelligence – what is it good for?

Threat intelligence has a place in any mature security operation, across any sector. However, like many good concepts in other industries, its path to commercialisation has involved commoditisation and an often-suspect amount of over-simplification. A robust threat intelligence capability needs an intake of data from diverse sources that have different origins and add value in different ways; from these we aggregate, filter and create intelligence bundles. We then need a list, a platform to use that list, and an analyst to use the platform. It’s complicated.

At its core, intelligence is supposed to be actionable, value-added information that is not otherwise available, produced only through processing and interpretation. In an ideal world, this would reveal the methods and actions of our adversaries. However, the basic premise of most commercial products is that if an entity has been observed acting maliciously in one location, then it should be expected elsewhere. Based on this premise, threat intelligence feeds are sold for six-figure sums a year – but do they actually work?

The experiment

SecureData conducted an investigation, presented at BlackHat Europe & Asia, of over one million internet threat indicators collected over a period of six months. We used a diverse set of sensors on real-world networks and tracked port scans, web application scans, DoS and DDoS attacks, and exploits.

We tracked the malicious IP addresses detected, looked at their behaviour over time, and mapped both ‘horizontal’ correlations (the ability of one sensor to predict activity on a different sensor, or one target to predict for another) and ‘vertical’ correlations (the ability of a sensor to predict the persistence or re-appearance of an IP indicator).
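
To make the idea concrete, here is a minimal sketch of how such correlations could be computed from raw sighting records. The record layout, field names and sample values below are our own assumptions for illustration, not the study’s actual pipeline.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical sighting records: (ip, sensor, timestamp, activity)
sightings = [
    ("203.0.113.5", "sensor-A", datetime(2016, 3, 1, 10, 0), "port-scan"),
    ("203.0.113.5", "sensor-B", datetime(2016, 3, 2, 14, 0), "port-scan"),
    ("198.51.100.9", "sensor-A", datetime(2016, 3, 1, 9, 0), "web-app-scan"),
    ("198.51.100.9", "sensor-A", datetime(2016, 3, 4, 9, 0), "web-app-scan"),
]

sensors_seen = defaultdict(set)    # ip -> set of sensors that saw it
timeline = defaultdict(list)       # (ip, sensor) -> list of sighting times

for ip, sensor, ts, _activity in sightings:
    sensors_seen[ip].add(sensor)
    timeline[(ip, sensor)].append(ts)

# 'Horizontal': an IP seen on one sensor later shows up on a different sensor.
horizontal = sum(1 for sensors in sensors_seen.values() if len(sensors) > 1)

# 'Vertical': an IP re-appears on the same sensor at a later time.
vertical = sum(1 for stamps in timeline.values() if len(stamps) > 1)

print(f"IPs seen on more than one sensor: {horizontal} / {len(sensors_seen)}")
print(f"(IP, sensor) pairs with a re-appearance: {vertical} / {len(timeline)}")
```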

By examining these two sets of correlations we shed some light on the value proposition of basic threat intelligence offerings, and improved our understanding of their place and value in security systems and processes.

The results

In the end, 68% of all unique predictions occurred within the first 48 hours. This life expectancy was interesting: the chance of seeing an IP address again more than two days after its first observation was incredibly low. Unless we can act on a prediction within those first two days, we are wasting our time.
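
The 48-hour figure is essentially a question about the gaps between successive sightings of the same indicator. Below is a minimal sketch of that calculation, assuming nothing more than a list of sighting timestamps per indicator; the data is made up purely for illustration.

```python
from datetime import datetime, timedelta

def share_within_window(timestamp_lists, window=timedelta(hours=48)):
    """Fraction of repeat sightings that fall within `window` of the previous sighting."""
    repeats = within = 0
    for stamps in timestamp_lists:
        stamps = sorted(stamps)
        for earlier, later in zip(stamps, stamps[1:]):
            repeats += 1
            if later - earlier <= window:
                within += 1
    return within / repeats if repeats else 0.0

# Hypothetical re-appearance histories for two indicators:
history = [
    [datetime(2016, 3, 1), datetime(2016, 3, 2)],                          # back within a day
    [datetime(2016, 3, 1), datetime(2016, 3, 10), datetime(2016, 3, 11)],  # one long gap, one short
]
print(f"Repeat sightings within 48h: {share_within_window(history):.0%}")
```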

The second interesting observation was the type of suspicious activity. On average, 87% of all predictions were for the same kind of activity: if we observed an IP doing one thing, we would almost always see it doing the same thing again. There was no progression from port scanning to web application attacks, for example.

So, is generalised threat intelligence really that effective? The key was finding true positives: IPs flagged as suspicious in the past that were correctly predicted to act suspiciously again. The percentage of flagged IPs that we actually observed acting suspiciously again was 3.59% in our Threat Intelligence Lab environment, and 9% in our honeypot environment. The numbers were dismal, false positives were rife, and the work needed to identify them was enormous.
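
The rates quoted here are simple ratios of re-observed IPs to flagged IPs. A sketch of the bookkeeping, with hypothetical counts chosen purely to show the shape of the calculation:

```python
# Hypothetical counts, purely illustrative of how a rate like 3.59% is derived.
predicted_ips = 10_000          # IPs the feed told us to worry about
reobserved_as_malicious = 359   # of those, how many were later seen acting suspiciously again

true_positive_rate = reobserved_as_malicious / predicted_ips
print(f"True positive rate: {true_positive_rate:.2%}")   # -> 3.59%
```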

Essentially, for every 100 IPs we would tell you to worry about, you would probably only ever see about four of them. In man-hours, over a 90-day period, the equivalent of 48.6 days would be needed to deal with all the false positives from the threat intelligence platform, and 8.26 days for the honeypot network. Of the 171 manually verified true rogues that we observed and predicted, about 0.25% were predicted as true rogues by the threat intelligence lab, and about 0.8% by the honeypots. Translating that into work, that means 108 man-hours. That is a lot of time, and it raised more questions about the real value of threat intelligence.
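
The man-hour figures follow from the volume of false positives and the time an analyst spends triaging each one. The helper below sketches that arithmetic; the alert volume and the ten-minute triage time are placeholder assumptions of ours, not figures from the study.

```python
def triage_cost_days(alerts: int, true_positive_rate: float,
                     minutes_per_alert: float = 10.0,
                     hours_per_day: float = 8.0) -> float:
    """Working days spent triaging false positives, given an alert volume,
    a true-positive rate and an assumed triage time per alert."""
    false_positives = alerts * (1.0 - true_positive_rate)
    return false_positives * minutes_per_alert / 60.0 / hours_per_day

# Purely illustrative: 1,000 alerts over 90 days at a 3.59% true-positive rate.
print(f"{triage_cost_days(1_000, 0.0359):.1f} working days lost to false positives")
```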

The takeaways

So, is it worth it? Let’s think about security philosophy. Intelligence-led security suffers from the same tension between false positives, limited resources, and unknown unknowns. Is there ever a balance? We will never know the unknown unknowns – so is there any point in pursuing that balance? Perhaps we should be funnelling our limited resources into proactively engineering more robust systems? If we can’t demonstrate that any one threat intelligence feed is better than another, why not take the time needed to dissect these datasets and use it to make what we already have better?

When it comes to protecting your business’s data and networks, it can be difficult to know where to start. This is where a trusted partner can make life easier, by managing varied security services on your behalf and leaving your internal security teams to focus on business-critical activities.

For more information on how SecureData can help your business, contact us here.
