ThreatQuotient’s threat intelligence management platform has been operational since the summer. Use of its output is not yet standard practice, but the first benefits are already visible.
The first tests took place in 2016. During this period, several candidates – including open source platforms – were evaluated to replace the tool already deployed, which had been developed in-house. At the time, the French Ministry of Defence needed to step up its cyber threat intelligence management activities.
Sébastien Bombal, an anticipation officer at Cyber Security Headquarters, explains that it was in fact the analysts who decided to choose ThreatQuotient’s platform, convinced by its features, its import and export capabilities, its data lifecycle management, its fit with existing processes, and its user-friendliness. He stresses that, “It is not simply a technical product; above all it is a business product.”
Simplifying the work of analysts
Indeed, the deployment of ThreatQ was not intended to reduce the number of analysts, but rather to allow them to do more and become more efficient. This includes automating certain tasks, particularly those related to the lifecycle of indicators and their associated attributes: “The classic example is confidentiality. An indicator that we obtained upstream with a certain level of sensitivity can be made public three months later in an antivirus vendor’s report. Once in the public domain, it is no longer confidential. And this gives us more room for maneuver. Operating procedures and working methods differ according to the level of confidentiality”.
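The lifecycle rule described above – an indicator losing its restricted status once it appears in a public report – can be sketched in a few lines. This is a minimal illustration with a hypothetical data model; ThreatQ’s actual objects and API differ.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Indicator:
    """Hypothetical indicator record, not ThreatQ's real schema."""
    value: str                       # e.g. a file hash or a domain
    sensitivity: str = "restricted"  # "restricted" or "public"
    sources: list = field(default_factory=list)

def record_public_sighting(ind: Indicator, source: str, seen: date) -> Indicator:
    """Downgrade confidentiality once the indicator appears in a public report."""
    ind.sources.append((source, seen))
    if ind.sensitivity != "public":
        # Once public, different operating procedures apply.
        ind.sensitivity = "public"
    return ind

ioc = Indicator("d41d8cd98f00b204e9800998ecf8427e")
record_public_sighting(ioc, "AV vendor report", date(2017, 9, 1))
print(ioc.sensitivity)  # → public
```

Automating this kind of attribute update is exactly the sort of repetitive bookkeeping the article says the platform takes off analysts’ hands.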
This indicator management is integrated into analysts’ work to support them and to try to “know everything that can be known about threats” within the field of cyber security.
Automating information gathering
But this is just one of three components of threat intelligence management. The priority is always information gathering. Both open and private sources are obviously used here. “There are many things available in the community, both via open source and in partnerships”. But it is necessary to apply filters so as not to be “inundated by things that are not necessarily relevant to us, such as banking malware”.
And this is where the platform can help, especially in processing unstructured files such as PDFs or spreadsheets: “Help with ingesting reports is invaluable. We avoid duplicating work, making mistakes, or developing yet another in-house solution to try to save time.”
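The report-ingestion help mentioned above typically boils down to pulling known indicator patterns out of unstructured text. A minimal sketch, assuming only regular expressions for two common types (real platforms handle many more, including “defanged” forms such as bracketed dots):

```python
import re

# Simple patterns for two common indicator types.
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
MD5 = re.compile(r"\b[a-fA-F0-9]{32}\b")

def extract_indicators(text: str) -> dict:
    """Return deduplicated, sorted indicators found in free text."""
    return {
        "ipv4": sorted(set(IPV4.findall(text))),
        "md5": sorted(set(MD5.findall(text))),
    }

report = "C2 at 198.51.100.7 dropped payload d41d8cd98f00b204e9800998ecf8427e."
print(extract_indicators(report))
# → {'ipv4': ['198.51.100.7'], 'md5': ['d41d8cd98f00b204e9800998ecf8427e']}
```

Even a crude extractor like this shows why the feature matters: it spares analysts from retyping indicators out of PDFs, which is where duplication and copy errors creep in.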
In terms of input, the platform is also fed by the infrastructure: “Network indicators are quite simple. System indicators, which are often more relevant, are sometimes more complicated because of volumes and heterogeneity,” among other things.
Then comes operations
The third aspect concerns the operational use of intelligence. The ThreatQ platform has been operational since the summer and indicators have already begun to be pushed “as close as possible to all hardware and infrastructures likely to use them: in concrete terms, these may be blacklists on network equipment, perimeter protection systems, host intrusion detection and prevention systems (HIDS/HIPS)”.
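Pushing indicators to network equipment often means rendering the active network-level entries as a flat blocklist the device can consume. The sketch below is purely illustrative: the record fields and output format are assumptions, and real device formats vary.

```python
def render_blocklist(indicators: list) -> str:
    """Render active IPv4 indicators as a plain-text blocklist.

    Hypothetical input: a list of dicts with "type", "value" and
    "status" keys. Host-level indicators (hashes, etc.) are skipped,
    since they belong on HIDS/HIPS rather than network equipment.
    """
    lines = ["# auto-generated blocklist"]
    for ind in indicators:
        if ind.get("type") == "ipv4" and ind.get("status") == "active":
            lines.append(ind["value"])
    return "\n".join(lines) + "\n"

feed = [
    {"type": "ipv4", "value": "198.51.100.7", "status": "active"},
    {"type": "md5", "value": "d41d8cd98f00b204e9800998ecf8427e", "status": "active"},
    {"type": "ipv4", "value": "203.0.113.9", "status": "expired"},
]
print(render_blocklist(feed), end="")
```

Filtering on status here mirrors the lifecycle management described earlier: expired indicators drop out of the exported blocklist automatically.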
For hardware using detection rules, however, the situation is somewhat complex, particularly because of heterogeneity: “You can export and push a rule, which might be relevant for small things, but this is not enough, especially when you have large quantities of indicators.” In this case, significant “local engineering efforts” are needed.
However, the benefits are already there: “Knowledge of the threat has become more comprehensive.” The same goes for knowledge of the infrastructure – enough to “better detect and have real monitoring blueprints”. Ultimately, Sébastien Bombal cites “a complementary effect of efficiency gains across the whole defensive chain”.