Product Methodology: Operations vs. Sharing

Posted by Wayne Chiang
When we were designing the initial phases of our threat intelligence platform (TIP), we identified a set of core principles that should drive how we built the product. One of these core tenets was the importance of building a platform that provides real operational value. We believed this required an on-premises solution that supports a strategic approach to intelligence sharing.
My co-founder Ryan and I draw much of our experience from working in the Defense Industrial Base (DIB), and that background shaped how we chose to handle data within a TIP. The first major decision was to design an on-premises solution. Because of the unique data handling requirements within a sensitive community, we learned the importance of keeping your data onsite, within an infrastructure that you fully control.
In an increasingly cloud-oriented landscape, we architected our product to be deployed on-site within a security operations center. This approach brings its own design challenges, but the advantages in secure data handling make it well worth the effort:
- Conformity to internal data handling requirements
For example, having your threat intelligence reside on-site streamlines the data flow: data never leaves the internal environment, so you never have to expose your critical infrastructure to an outside entity.
- Full control over data confidentiality
Additionally, by keeping all data within the TIP on-site, our customers can ensure the confidentiality of their proprietary research and prevent sensitive intelligence from leaking into an open environment. Cloud-based designs often treat data security as a black box, offering few details about how that information is protected. This is especially concerning in co-mingled environments, and it raises questions about data ownership.
- Ensures data security adheres to corporate policy
Another advantage of an on-premises design is the ease with which the platform can follow existing corporate policy on access, retention, and availability. This simplifies the data backup process, and the platform can be rolled into an existing business continuity program.
- Ensures data availability during pull-the-plug IR events
But perhaps the most important aspect is the availability of your intelligence during a data breach. In a serious incident where you need to disconnect your infrastructure from the outside world, it is imperative to have a TIP that remains accessible on-site. We often see IR teams struggle to respond to an event because their capabilities are crippled once their infrastructure can no longer communicate with their TIP. On-site deployment ensures reliable connectivity within the corporate environment, as well as an up-to-date flow of threat information between critical infrastructure security devices.
Another major component of an operational design is the sharing of intelligence. Our approach to sharing focuses on the capability maturity of a security team: there are key capabilities that must be mastered before sharing should even be considered.
- Is your intelligence actionable? Intelligence is worthless if it cannot be acted upon. Whether through a team's experience or its technical capability, intelligence should be readily usable within the organization. Aggregating data with no meaningful output is a futile exercise. A team needs to analyze how it leverages its intelligence, whether for business strategy decision support or for developing the tactical process to respond to adversary C2 infrastructure.
- Once a team can use its intelligence, it needs the capability to cultivate high-quality intel. How can you further enrich your threat intelligence so that it provides context valuable to security operations? This is where a TIP comes into play, and a SOC team will need to examine its requirements for TIP enrichment capabilities. Can it integrate with a specific knowledge source the team subscribes to? For example, VirusTotal is a de facto industry standard for looking up additional details on malware samples. Does the TIP integrate with VT in the way the security team currently uses it?
- Once you have high-quality intelligence that can be leveraged within your infrastructure, how do you share it with other partners or communities? A great TIP should seamlessly flow intelligence between different sharing communities. For example, does your TIP support STIX/TAXII? Can it communicate in a data standard that your partners can readily ingest?
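To make the enrichment point above concrete, here is a minimal sketch of what a VirusTotal lookup-and-merge step might look like inside a TIP. This is not ThreatQuotient code: the function names and the indicator record shape are hypothetical illustrations; only the VirusTotal v3 `files` endpoint and `x-apikey` header reflect VT's public API.

```python
"""Hypothetical sketch: enriching a TIP indicator with VirusTotal context."""
from urllib.request import Request

VT_BASE = "https://www.virustotal.com/api/v3"

def build_vt_lookup(sha256: str, api_key: str) -> Request:
    """Build (but do not send) a VT v3 file-report request for a sample hash."""
    return Request(f"{VT_BASE}/files/{sha256}", headers={"x-apikey": api_key})

def enrich_indicator(indicator: dict, vt_report: dict) -> dict:
    """Fold selected VT detection stats into a TIP indicator record."""
    stats = (vt_report.get("data", {})
                      .get("attributes", {})
                      .get("last_analysis_stats", {}))
    enriched = dict(indicator)  # keep the original record immutable
    enriched["vt_malicious"] = stats.get("malicious", 0)
    enriched["vt_total_engines"] = sum(stats.values())
    return enriched

# Example with a canned VT-style response (no network call made here):
report = {"data": {"attributes": {"last_analysis_stats":
          {"malicious": 42, "undetected": 18}}}}
ioc = {"type": "sha256", "value": "a" * 64}
print(enrich_indicator(ioc, report))
```

The point of the sketch is the workflow, not the code: the TIP queries the knowledge source the team already subscribes to, then folds that context back onto the indicator so analysts see it where they work.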
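On the sharing side, a STIX 2.1 indicator is ultimately just structured JSON. The sketch below, using only the Python standard library, packages a single indicator into a STIX bundle of the kind a TIP might push to partners over TAXII. In practice the OASIS `stix2` library handles this serialization; the pattern, name, and ID values here are illustrative.

```python
"""Minimal sketch: packaging an indicator as STIX 2.1 JSON for sharing."""
import json
import uuid
from datetime import datetime, timezone

def make_stix_indicator(pattern: str, name: str) -> dict:
    """Build a STIX 2.1 Indicator object with the required properties."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",
        "created": now,
        "modified": now,
        "name": name,
        "pattern": pattern,           # STIX patterning language
        "pattern_type": "stix",
        "valid_from": now,
    }

def make_bundle(*objects: dict) -> dict:
    """Wrap STIX objects in a bundle, the unit typically exchanged via TAXII."""
    return {"type": "bundle",
            "id": f"bundle--{uuid.uuid4()}",
            "objects": list(objects)}

# 203.0.113.5 is a documentation-only (TEST-NET-3) address.
ioc = make_stix_indicator("[ipv4-addr:value = '203.0.113.5']",
                          "Suspected C2 address")
print(json.dumps(make_bundle(ioc), indent=2))
```

Because the output is standard STIX, any partner with a TAXII-capable TIP can ingest it without a custom parser, which is exactly the interoperability question raised above.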
Building a TIP requires a lot of forethought in the design approach. Here we have shared a brief glimpse of our strategy in building a platform grounded in years of operational experience. As you research TIPs, you will find they vary significantly in shape and size, because they are trying to solve different challenges. Make sure you understand your priorities, policies, and processes so that you select a TIP whose design approach delivers operational value.
Co-founder addendum – The case against a Cloud TIP
Cloud TIP vendors may argue that their customers face little security risk because they are only submitting attack indicators, which, in the bigger picture, are trivial if accidentally "spilled." We took the strategic view, however, knowing that true threat intelligence goes beyond attackers' indicators: it MUST be overlaid with the details of an organization's weakest links. This goes beyond the weaknesses published in a public company's annual 10-K report to include the details of intrusions. For example: Who is the employee who always clicks on the suspicious spearphish? How far behind are the endpoint's anti-virus signatures? Which mission-critical servers are still susceptible to 2013 vulnerabilities?
This level of detail and analysis is critical to a successful threat intelligence program, but it pushes my own "operational comfort level" well beyond the cavalier approach of "it's just attacker data." Putting all of that information in a multi-tenant, cloud-hosted vendor is…flat out risky! This was the biggest reason we designed TQ to be on-premises: we knew that down the road, customers would need to mold attacker data against their deepest, darkest weaknesses, and the only way to do that comfortably is within their own walls.