The Speed of Security & AI
Tom Ashoff
ThreatQuotient’s Perspective on Security Operations and Vision for AI in the ThreatQ Platform
We’re all familiar with the “speed of cybersecurity”, the rapid pace at which cyber threats evolve and the corresponding need for timely and agile defense measures. We’ve been trained to understand the importance of quickly detecting, analyzing, and mitigating cyber risks to stay ahead of attackers and protect assets in an increasingly dynamic and interconnected world.
New technologies like cloud computing and automation have led to transformative changes in cybersecurity, though these changes weren’t immediate. Cloud adoption advanced much faster among other IT teams than in cybersecurity, because security teams were hesitant to cede control to technologies in the hands of others. Similarly, while automation flourished in general IT (e.g., Puppet, Chef, Ansible) and in business units like marketing (e.g., Pardot, Eloqua, Marketo), adoption lagged within cybersecurity as teams needed to get comfortable with both the risks and the rewards before deployment.
Now, facing another new technology perhaps even bigger in scale than cloud or automation, we believe the same measured approach makes sense for Artificial Intelligence (AI) in cybersecurity. As the field continues to grapple with the ever-accelerating “speed of cybersecurity,” embracing transformative technologies becomes crucial. AI holds tremendous promise in bolstering defense mechanisms, detecting threats, and enabling faster incident response. However, it is essential to approach its implementation responsibly, understanding both the rewards and risks it brings.
Let’s look at AI and how it can be applied to security operations. While recent interest in AI has focused on generative technology, it is not the only aspect of AI to consider. At ThreatQuotient we have been researching and implementing AI technologies and have identified three key pillars, each providing unique capabilities: Natural Language Processing (NLP), Machine Learning (ML) and Generative AI. Let’s take a closer look at each.
Natural Language Processing (NLP) focuses on the interaction between computers and human language. It involves analyzing and understanding language so that machines can comprehend and respond to human-created text. This is critical when parsing unstructured data, which comes in many forms: reports, emails, RSS feeds, etc.
How does ThreatQ leverage NLP to streamline a variety of security operations use cases? ThreatQ ACE (Automated Context Extraction) automatically identifies and extracts threat intelligence such as IOCs, malware, adversaries and tags from unstructured data, using named entity recognition and keyword matching. Customers can use ThreatQ ACE to extract meaning and context from unstructured text in data feed sources and finished intelligence reports, as well as to parse any reports, events or PDFs already in the ThreatQ Threat Library. ThreatQ TDR Orchestrator’s automation of the ACE workflow saves analysts time by eliminating the manual work of reading reports and extracting data by hand, freeing them to be more proactive in addressing risk within their environment.
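To make the idea concrete, here is a minimal, hypothetical sketch of how indicators and keywords might be pulled from unstructured text with simple pattern and keyword matching. It is illustrative only and does not represent how ThreatQ ACE is implemented; the patterns and keyword list are assumptions for the example.

```python
import re

# Hypothetical patterns for illustration only; production extraction such as
# ThreatQ ACE combines named entity recognition with far more robust matching.
IOC_PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "md5": re.compile(r"\b[a-fA-F0-9]{32}\b"),
    "domain": re.compile(r"\b[a-z0-9-]+\.(?:com|net|org|io)\b", re.IGNORECASE),
}

# Assumed keyword list mapping known names to the type of object they represent.
KEYWORDS = {"Emotet": "malware", "APT29": "adversary", "phishing": "tag"}

def extract_iocs(text: str) -> dict:
    """Return indicators and keyword matches found in unstructured text."""
    results = {name: sorted(set(p.findall(text))) for name, p in IOC_PATTERNS.items()}
    results["keywords"] = [
        {"value": k, "type": t} for k, t in KEYWORDS.items() if k.lower() in text.lower()
    ]
    return results

report = "APT29 used Emotet droppers beaconing to 203.0.113.7 and evil-domain.com."
print(extract_iocs(report))
```

Run against a report snippet like the one above, the sketch surfaces the IP, the domain, and the malware and adversary names – the same kind of raw material an automated extraction workflow would feed into the Threat Library.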
Machine Learning (ML) enables computers to learn from data and make predictions or take actions without being explicitly programmed. It involves algorithms that iteratively improve their performance based on training examples.
How does ThreatQ leverage ML? The ThreatQ DataLinq Engine uses proprietary machine learning techniques to make sense of data in order to accelerate detection, investigation and response. The DataLinq Engine starts by getting data in different formats and languages, from different vendors and systems, to work together. It also incorporates results from automation, learning from them so that further action can be initiated. The more data and the more context it sees, the more the Engine learns and improves. The Engine focuses on correlation and prioritization, which ultimately helps get the right data to the right systems and teams at the right time, making security operations more data-driven, efficient and effective. ThreatQuotient is constantly evaluating techniques to improve the DataLinq Engine’s ability to learn from the data and from the process by which that data is consumed and used.
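As a rough illustration of the prioritization idea – and not the DataLinq Engine itself – a toy model could learn from past analyst dispositions to score new indicators by the context attached to them. The features and training data below are assumptions for the example.

```python
# Illustrative sketch only; the DataLinq Engine uses proprietary techniques.
# Assumed features: number of feeds reporting the indicator, sighting count,
# and number of linked adversaries. Labels are past analyst dispositions.
from sklearn.linear_model import LogisticRegression

X_train = [
    [1, 0, 0],  # one feed, no sightings, no adversary link -> noise
    [3, 5, 1],  # multiple feeds, repeated sightings, linked adversary -> relevant
    [2, 1, 0],
    [4, 9, 2],
]
y_train = [0, 1, 0, 1]

model = LogisticRegression().fit(X_train, y_train)

# Score a new indicator's context and use the probability to rank analyst work.
new_indicator = [[3, 4, 1]]
score = model.predict_proba(new_indicator)[0][1]
print(f"priority score: {score:.2f}")
```

The point is not the particular algorithm but the loop: the more context and outcomes the system sees, the better its scoring becomes, which is the kind of learning the DataLinq Engine is designed around.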
Generative AI focuses on creating systems capable of generating original content, such as profiles, reports or security automations. It uses deep learning to learn from data and produce realistic outputs that resemble human creativity. Several tools exist today; the most widely recognized are ChatGPT, Google Bard and Microsoft Bing Chat.
Generative AI is the most recent form of AI to capture everyone’s attention, and as the technology evolves the possibilities seem endless. Generative models can learn patterns from existing malware samples and generate new ones, aiding in the identification and detection of malicious software. They can generate adversarial examples, helping to test and strengthen an organization’s resilience against attacks. They can generate automated responses or actions based on identified threats or attack patterns, enabling faster and more effective incident response, and much more. While the promise of Generative AI is significant, we must understand the risks as well as the rewards before it can be widely adopted.
To advance this effort, we’re excited to announce our first embedded ChatGPT integration, which teams can use to assist in enrichment, automation and remediation. While we are in the early stages, we see great promise and possibilities. Our years of experience give us a unique perspective on how best to take advantage of the enormous potential of Generative AI. This will be a journey, and I’m sure we’ll learn a lot along the way. Our approach is to be measured: focus on specific use cases, partner with customers for feedback, and learn and improve over time.
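As a rough sketch of the enrichment use case – not the embedded ThreatQ integration itself – an analyst-facing workflow might send an indicator’s context to a model and ask for a plain-language summary. The example assumes the official OpenAI Python SDK (version 1.0 or later), an OPENAI_API_KEY in the environment, and a hypothetical choice of model.

```python
# Illustrative sketch only; not ThreatQ's embedded ChatGPT integration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def enrich_indicator(indicator: str, context: str) -> str:
    """Ask an LLM for a short, analyst-readable summary of an indicator."""
    response = client.chat.completions.create(
        model="gpt-4",  # model choice is an assumption for the example
        messages=[
            {"role": "system", "content": "You are a threat intelligence assistant."},
            {"role": "user",
             "content": f"Summarize the risk posed by {indicator}. Context: {context}"},
        ],
    )
    return response.choices[0].message.content

print(enrich_indicator("203.0.113.7", "Seen in phishing campaigns linked to Emotet."))
```

In keeping with the measured approach described above, output like this would be reviewed by an analyst before any enrichment, automation or remediation action is taken.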
For teams interested in responsible collaboration on the exciting future that Generative AI brings to cybersecurity, we’re pleased to announce that the ThreatQ ChatGPT integration is available for restricted, early access. Please register your interest below.
ThreatQ’s ChatGPT integration, along with other forms of AI like NLP and ML, demonstrates the evolving landscape of cybersecurity. By harnessing the power of these technologies, organizations can better protect their assets in an increasingly interconnected world. The future of cybersecurity lies in collaboration between human expertise and AI advancements, using AI to augment the human ability to adapt and solve problems. The result is a stronger defense against evolving threats.