Power the SOC of the Future with the DataLinq Engine – Part 1
Leon Ward
Evidence continues to mount that it isn't a matter of if, but when and how, an organization will be attacked. So we are seeing Security Operations Centers (SOCs) narrow the focus of their mission to become detection and response organizations. As they look to address additional use cases, including threat detection and monitoring, investigation, incident response and hunting, data becomes ever more important. That is why cybersecurity-forward leaders recognize that a data-driven approach is the foundation of the SOC of the future.
Data has never been more important to the SOC
All data is security data: the data that provides the context needed to make the best decisions and take the right actions isn't limited to a few tools and feeds; it's everywhere. Harnessing all that data is problematic. No one understands this better than SOC teams battling to work smarter and faster while facing internal challenges, including staffing shortages, siloed organizations and disparate technologies, plus an ever-advancing threat landscape.
Threat actors are becoming more sophisticated in their tactics, techniques and procedures. Advances in ransomware, thanks to the ease with which it can be monetized, plus the growing attack surface resulting from cloud adoption, remote workers and an increasingly digital supply chain, have all yielded even more data for SOC teams to consume.
However, when security is data-driven, SOC teams have the context provided by a wide range of sources including threats, vulnerabilities and identities, that enables them to focus on relevant, high priority issues, make the best decisions and take the right actions. Data-driven security also provides a continuous feedback loop that enables teams to store and use data to improve future analysis.
Harnessing data for detection and response
Data is spread throughout the typical organization, so bi-directional integrations are required to bring that data together into a common work surface, and an open integration architecture provides the best approach to do this. An open approach to data integration offers the widest access to the range of technologies, threat feeds and other third-party sources that are relevant to detection and investigation, and also enables teams to drive response back to those same technologies.
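To make the idea of bi-directional integration concrete, here is a minimal sketch of what such a connector contract might look like. The `Connector` interface, the `FirewallConnector` class and the action format are all hypothetical illustrations, not ThreatQ's actual API: the point is simply that each integrated tool exposes both a read path (detection data flows in) and a write path (response actions flow back out).

```python
from abc import ABC, abstractmethod

# Hypothetical bi-directional connector interface (illustrative only).
class Connector(ABC):
    @abstractmethod
    def pull(self) -> list[dict]:
        """Fetch events or intelligence from the external tool."""

    @abstractmethod
    def push(self, action: dict) -> bool:
        """Send a response action back to the same tool."""

# Hypothetical firewall integration implementing both directions.
class FirewallConnector(Connector):
    def __init__(self):
        self.blocked = []

    def pull(self):
        # In reality this would call the firewall's API.
        return [{"type": "connection", "dst": "203.0.113.7"}]

    def push(self, action):
        if action["type"] == "block_ip":
            self.blocked.append(action["ip"])
            return True
        return False

fw = FirewallConnector()
events = fw.pull()                                      # detection data in
fw.push({"type": "block_ip", "ip": events[0]["dst"]})   # response back out
print(fw.blocked)
```

Because every tool sits behind the same two-method contract, the same investigation workflow can drive response to any integrated technology.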
Response may take the form of machine automation or manual action. Teams have benefited by automating repetitive, low-risk, time-consuming tasks, but the need for human analysis remains. Irregular, high impact, time-sensitive investigations are best led by a human analyst with automation simply augmenting the work. A balance between manual investigations and machine automation ensures that teams always have the best tool for the job, while a data-driven approach to both improves the speed and thoroughness of the work.
The value of a data-driven approach, an open integration architecture and a balanced use of automation is clearest when dealing with the evolving nature of attacks. As threat actors now work across the entire organization, it's critical for SOC teams to "connect the dots" across all data sources, tools and teams to accelerate threat detection and response.
Introducing the DataLinq Engine
ThreatQ DataLinq Engine takes a unique approach to make sense of data in order to accelerate detection, investigation and response. The DataLinq Engine starts by enabling data in different formats and languages from different vendors and systems to work together. From there, it focuses on getting the right data to the right systems and teams at the right time to make security operations more data-driven, efficient and effective.
It's common to hear that "cybersecurity is a big data problem". This can be interpreted in a couple of ways. One reading is that security problems can only be solved with big data. Another is that cybersecurity has a set of big problems caused by the volume of data now available to teams, and that many of those problems can be remedied by focusing on the right, smaller sets of data.
Too much data can be a serious impediment for organizations in terms of scale and execution. While cloud computing has drastically reduced the cost of storage and processing, it has also ushered in a world where data proliferation is a mounting issue. With more copies of existing data being made continuously, each with minor modifications and in different locations, analysts lack a single source of truth, which causes confusion. ThreatQ DataLinq Engine focuses on augmenting key existing data stores so that they can interoperate, reference each other, and enable cross-product and data workflows that simplify how defenders approach response.
For many years, ThreatQuotient has been operating inside a diverse ecosystem of hundreds of different security products, threat intelligence feeds, data enrichment services and security operations teams. We've seen first-hand the challenges professionals face in making sense of security data in order to determine whether to respond to or contain a threat, or simply ignore it. To better serve our customers, we've developed the DataLinq Engine with the specific goal of optimizing the process of making sense of data to reduce unnecessary volume and the burden that comes with it.
The DataLinq Engine follows a specific processing pipeline leading to a dynamic end-state that is constantly updating, evolving and learning. This method of processing is vastly different from that of a SIEM, log manager or legacy threat intelligence platform, and follows five key stages: Ingest, Normalize, Correlate, Prioritize and Translate.
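To illustrate what such a five-stage pipeline does conceptually, here is a deliberately simplified sketch. The stage names come from the article; every function body, field name and feed is a made-up assumption for illustration and does not reflect ThreatQ's actual implementation.

```python
# Hypothetical sketch of an Ingest -> Normalize -> Correlate ->
# Prioritize -> Translate pipeline (illustrative only).

def ingest(raw_feeds):
    """Pull records from each source, tagging each with its origin."""
    return [dict(rec, source=name)
            for name, records in raw_feeds.items() for rec in records]

def normalize(records):
    """Map vendor-specific field names onto one common schema."""
    aliases = {"ip": "indicator", "dst_ip": "indicator", "md5": "indicator"}
    return [{aliases.get(k, k): v for k, v in r.items()} for r in records]

def correlate(records):
    """Group records that refer to the same indicator."""
    groups = {}
    for r in records:
        groups.setdefault(r["indicator"], []).append(r)
    return groups

def prioritize(groups):
    """Rank indicators, e.g. by how many independent sources report them."""
    return sorted(groups.items(),
                  key=lambda kv: len({r["source"] for r in kv[1]}),
                  reverse=True)

def translate(ranked, limit=1):
    """Emit only the top indicators in a downstream-friendly format."""
    return [{"indicator": ind, "sightings": len(recs)}
            for ind, recs in ranked[:limit]]

feeds = {
    "feed_a": [{"ip": "203.0.113.7"}, {"md5": "abc123"}],
    "feed_b": [{"dst_ip": "203.0.113.7"}],
}
result = translate(prioritize(correlate(normalize(ingest(feeds)))))
print(result)  # the indicator reported by both feeds ranks first
```

The key idea the sketch captures is volume reduction: many raw records enter, but only the small, prioritized set relevant to detection and response comes out the other end.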
In Part 2 of this blog series, we’ll start to dig deeper into each of these stages – how they work and how they help teams manage and use data more effectively and efficiently to accelerate detection and response across the organization.
Want to jump ahead? Download your copy of Accelerate Threat Detection & Response with DataLinq Engine.