
The Detection Value of Cybersecurity Data


Welcome to part one of Cysiv's blog series on the detection value of data! In this series, I will be walking you through how to get the highest return on investment for all of the cybersecurity data you are collecting. At Cysiv, we've built a next-gen security information and event management (SIEM) platform with the specific purpose of collecting and correlating important data that can detect both internal and external threats. I like to refer to it as "a platform for analysts, built by analysts." Because Cysiv is a SOC-as-a-Service company, the better our platform is, the more it enables our analysts to protect customers.

Let's start with an agenda and some definitions. I've broken this series up into ten parts. There's a little bit of everything, so you get a full perspective drawn from our personal experience as well as from across our customer base. If you have a suggested topic, leave a comment at the bottom of this blog and we'll try to work it into the series.

One thing we want to do in this series is provide CISOs and Blue Teams with something of value, whether it's a detection, an idea, a checklist, or some other nugget of information you can use or implement in your environment. It's important to us that if you take the time to read our blog, you also walk away with something you can use.

Here’s what we'll cover:

  • Part 1: The Detection Value of Data
  • Part 2: Windows Part I – Windows Events, Advanced Logging, and Configuring for Success
  • Part 3: Windows Part II – Phishing & General Detection Engineering with Windows Events
  • Part 4: Security Tools – Why You Need Them, What to Look For, and Their Limitations
  • Part 5: Linux and MacOS Endpoints
  • Part 6: Application Data, Vulnerability Data, and Other Hidden Data Gems
  • Part 7: Cloud Data (GCP/AWS/Azure)
  • Part 8: Collecting Data for Compliance Myths
  • Part 9: The Art of Detection Engineering
  • Part 10: Assessing Your Visibility and Detection Value

Let’s jump right in with Part 1!

Defining “Detection Value”

If you could only have one data source, what would it be? Think through all the security data you collect (or want to collect) today: which source stands out as the most valuable? Feel free to leave your answer in the comments below. How did you arrive at your answer? You probably did a quick mental calculation on what you couldn't live without, or what gives you the most valid detections, or maybe you skipped right past the data part of the question and picked a security tool. However you arrived at your answer, and whatever criteria you used to get there, you were evaluating the detection value of data. By the way, I picked Microsoft Sysmon (or auditd, or the MacOS Unified Log) as my answer, and I'll share why later in this post.

So what is this term, "detection value"? I suppose we should start by explaining what a detection means, as there are a lot of competing terms in the industry, including alert, event, incident, indicator, detection, and so on. At Cysiv, we break this down into two simple terms, and there's an entire whitepaper on this subject if you have further interest. Download your copy here.

A Detection is when you know with high confidence that something malicious is occurring. An Indicator is something that has a chance of being legitimate or malicious. An easy example:

  • Non-admin user runs PowerShell for the first time (Indicator 1)
  • The PowerShell command is encoded (Indicator 2)
  • PowerShell connects to a site on the internet that matches threat intelligence (Indicator 3)
  • PowerShell downloads a file from that site (Indicator 4)

Indicators 1 + 2 + 3 + 4 = Malicious PowerShell Detection

I think it's fair to say that any one of the indicators, by itself, could have at least some chance of being legitimate. But all four together clearly make a malicious pattern, or detection.
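To make that correlation concrete, here's a minimal Python sketch of the same logic. The field names (user, is_admin, command, dest_domain, file_downloaded) and the threat-intel list are illustrative assumptions, not the schema of any particular product; the point is simply that four weak indicators combine into one high-confidence detection.

```python
# Minimal sketch: correlating the four PowerShell indicators above into a
# single detection. Field names and the intel list are illustrative only,
# not tied to any real product or log schema.

INTEL_DOMAINS = {"bad-domain.example"}   # hypothetical threat-intel match list
seen_powershell_users = set()            # users who have run PowerShell before


def powershell_indicators(event: dict) -> list:
    """Return the indicators a single PowerShell execution event triggers."""
    indicators = []
    if not event["is_admin"] and event["user"] not in seen_powershell_users:
        indicators.append("non-admin user ran PowerShell for the first time")
    if "-enc" in event["command"].lower():
        indicators.append("PowerShell command is encoded")
    if event.get("dest_domain") in INTEL_DOMAINS:
        indicators.append("connection to a site matching threat intelligence")
    if event.get("file_downloaded"):
        indicators.append("file downloaded from that site")
    seen_powershell_users.add(event["user"])
    return indicators


def classify(event: dict):
    """All four indicators together are treated as a detection."""
    indicators = powershell_indicators(event)
    if len(indicators) == 4:
        return "DETECTION: malicious PowerShell"
    return indicators


print(classify({
    "user": "jdoe",
    "is_admin": False,
    "command": "powershell.exe -enc SQBFAFgA...",
    "dest_domain": "bad-domain.example",
    "file_downloaded": True,
}))
```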

Assessing the Detection Value of Your Cybersecurity Data

So, back to detection value. Different data sources contain different information that can contribute to indicators and detections, or I&D, as we call them. Going a few levels further: within a data source, different event types can have different value, and within an event, certain fields or key-value pairs have higher value than others. There are two optics through which to view detection value:

  • Use-case specific
  • Highest overall value

For example, if your use-case is detecting C2 traffic, you might be interested in sources that pertain to network connections, and within those sources, fields that contain domain names, DNS queries, and IP addresses. The highest-overall-value optic is the measure of how many total use-cases a data source can detect, or, if you weight your use-cases, how well a data source detects your priority use-cases. Let's say you're indexing 5TB/day of data into your SIEM, spread across 20 different data sources. Each of those sources contains information that contributes to detections, but not all of the data has equal detection value. In Part 10 of the series, we'll walk through an easy mathematical formula you can use in your environment to determine quantitative detection values.
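As a rough preview of that kind of arithmetic (Part 10 walks through the actual formula), here's one hedged way such a calculation could look in Python. The use-case weights and coverage scores below are made-up placeholders, not measurements; the idea is simply to weight each use-case by priority and score each data source by how well it covers those use-cases.

```python
# Hypothetical sketch of a priority-weighted detection-value score per data
# source. Weights and coverage scores (0.0 - 1.0) are illustrative placeholders.

USE_CASE_WEIGHTS = {"phishing": 3, "ransomware": 3, "c2_traffic": 2, "lol_attacks": 2}

# How well each source covers each use-case (0 = no coverage, 1 = full coverage).
SOURCE_COVERAGE = {
    "sysmon":      {"phishing": 0.6, "ransomware": 0.8, "c2_traffic": 0.5, "lol_attacks": 0.9},
    "dns_queries": {"phishing": 0.4, "ransomware": 0.3, "c2_traffic": 0.9, "lol_attacks": 0.1},
    "netflow":     {"phishing": 0.1, "ransomware": 0.2, "c2_traffic": 0.7, "lol_attacks": 0.1},
}


def detection_value(source: str) -> float:
    """Priority-weighted average coverage for a single data source."""
    coverage = SOURCE_COVERAGE[source]
    total_weight = sum(USE_CASE_WEIGHTS.values())
    return sum(USE_CASE_WEIGHTS[uc] * coverage.get(uc, 0.0)
               for uc in USE_CASE_WEIGHTS) / total_weight


for src in SOURCE_COVERAGE:
    print(f"{src}: {detection_value(src):.2f}")   # e.g., sysmon scores 0.70 here
```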

Are we talking about raw telemetry or security tool data? The answer is both, as each has its own role in detection. Do you need some raw telemetry? This is usually a hotly contested topic at enterprises, especially when indexing or storage costs limit the amount of data a security team can collect. Here are just a few examples of common raw telemetry sources:

  • Sysmon/Auditd/MacOS Unified Log
  • DNS Queries
  • Windows Domain Controller authentication data
  • PowerShell Operational Log
  • Exchange Tracking logs
  • Netflow
  • IIS or webserver logs
  • Host-based Windows Event logs
  • DHCP leases
  • Cloudtrail/Cloudwatch/Stackdriver/etc.
  • O365 Sharepoint Audit logs
  • etc.

I listed a variety of raw data sources here to show the various places within your on-prem, cloud, or infrastructure-less environment where you might find them. But what about security tools, and what is the difference between data from a security tool and raw telemetry? Generally speaking, raw telemetry consists of unprocessed events that simply state what took place. From a data perspective, security tools fall into two categories:

  • Security tools that log alerts/incidents/detections only (processed data)
    • AV, IPS, EDRs*, Spam Filters, etc.
  • Security tools that log processed data and raw telemetry
    • Firewalls, EDRs*, Behavioral tools, CASB, etc.

*EDRs are caveated here because some, but not all, provide raw telemetry in addition to AV/EDR functionality.
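To make the distinction concrete, here's a small illustrative Python sketch contrasting the shape of a raw telemetry event with a processed alert from a security tool. Neither record reflects a real vendor's schema; the raw event simply states what happened, while the processed record already carries a verdict.

```python
# Illustrative only: neither record reflects a real vendor schema.

# Raw telemetry states what happened, with no verdict attached.
raw_telemetry_event = {
    "source": "sysmon",
    "event_id": 1,                      # process creation
    "image": r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
    "command_line": "powershell -enc SQBFAFgA...",
    "parent_image": r"C:\Windows\System32\cmd.exe",
    "user": "CORP\\jdoe",
}

# A security tool's processed output already carries a verdict; the
# underlying raw events may or may not be exposed to you.
processed_alert = {
    "source": "endpoint_protection",
    "verdict": "blocked",
    "threat_name": "Trojan.GenericKD",
    "severity": "high",
    "host": "WS-1042",
}

print(raw_telemetry_event["command_line"], "->", processed_alert["verdict"])
```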

Back to the hotly-contested question: why do I need raw telemetry if I have best-of-breed security tools? Quite simply, security tools fail. A lot. If that sounds harsh, or if you disagree, give me the rest of this blog series to change your mind. To be fair, security tools also succeed the vast majority of the time. Confusing? You certainly can't go to your board and ask for more funding for tools that fail a lot, right? Here's a better way to say it:

Security tools handle/block/prevent 99% of the volume of threats against you, while handling about 15% of the breadth of threats against you.

If you still don’t believe that assertion, then at least give me until Part 4 to change your mind. In the interim, maybe this will keep you from thinking I’m a total nut-job:

Most of the major annual "State of Cybersecurity" reports note phishing as the source of roughly 90% of successful cyber attacks. So if 90% of successful attacks originate from phishing, there's an inference we can make about the security tools used to stop phishing: they don't always work. Phishing is usually defended in depth, using multiple layers of protection:

  • Spam filtering
  • Email AV
  • Email AV with inline sandboxing
  • Threat intelligence & reputational analysis
  • Endpoint protection (AV, EDR, UEBA, behavioral tools)
  • Policies around allowing links and attachments to be clicked
  • Phishing awareness, education, mock campaigns (Sidenote, and fun argument for another day: Phishing awareness, education, and mock campaigns are more effective at preventing phishing than the actual security tools!)

As you can see, there are multiple ways to detect and stop phishing in this list, yet 90+% of successful attacks gain a foothold through phishing. I think we have to accept, objectively, that security tools are not enough: while they are necessary and incredibly valuable in handling that 99% of threat volume, they need augmentation from raw telemetry. One thing I have learned from working APT cases is that when the adversary wants something, they save their best spear phishing to get it. This is the stuff that will never hit on threat intelligence, AV, or mail filtering. Only by understanding the behavior of threats, and then ensuring you have the detection visibility and/or security tool capability to see or stop the threat, can we be effective in the ongoing battle of cybersecurity. That is the subject of this blog series.

If all of this feels pretty basic to you, it means you already have the knowledge you need to start assessing the detection value of your data. If not, don't worry: these are the strategic, big-picture questions that are foundational to getting the highest ROI on the data you collect, and there are resources that can help us start asking the right questions.

How to Put the Bigger Data Picture into Practice

If you're still with me after all of that theory, it's time to start putting it into practice. Walking through this process will allow you to get the most out of the rest of the series, which dives much deeper into actual data and detections. The process looks like this:

  1. Initial self-evaluation and introspection of your logging status
  2. Defining and prioritizing your use-cases
  3. Visibility analysis and detection gap analysis
  4. Addressing gaps with raw telemetry or security tools
  5. Testing visibility & Purple Teaming

Cysiv has a resource we want to share with you to help complete step 1. Our Data Source Questionnaire is what we use when consulting with customers during initial onboarding or during a Proof of Concept (POC). Since we feel this is such an important first step for any enterprise security organization, and not just our customers, you can download it here and use it for your own visibility introspection and logging self-evaluation.

The second step is yet another hotly-contested topic to debate internally. In many instances, the very definition of a use-case is debated among the security team. Is ransomware a specific threat, or is it something more general, like "preventing data exfiltration"? However you choose to define and prioritize your use-cases, detecting them ultimately comes down to log events and field values. No matter how general or specific the use-case, and no matter where you sit on the detection continuum, from simple hash matches to machine learning, it always comes down to fields and values in your data. For the purpose of this series, we'll focus mostly on the use-cases we hear about most from CISOs: phishing, ransomware, and "Living off the Land" (LOL) attacks. By the end of the series, you'll be armed with information that will help complete steps 3 through 5.
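To illustrate that "fields and values" point, here's a hedged sketch of how a broad use-case like ransomware might decompose into concrete log sources, fields, and values. The Sysmon process-creation event (Event ID 1) and the shadow-copy deletion command lines are well-known public examples, but treat this as an illustration, not a production detection.

```python
# Hedged sketch: a broad use-case ("ransomware") reduced to specific
# log sources, fields, and values. Illustrative, not a production rule.

RANSOMWARE_USE_CASE = {
    "shadow_copy_deletion": {
        "log_source": "sysmon",
        "event_id": 1,                         # process creation
        "field": "command_line",
        "values": ["vssadmin delete shadows", "wmic shadowcopy delete"],
    },
}


def matches_use_case(event: dict) -> list:
    """Return the names of any ransomware behaviors this event matches."""
    hits = []
    for name, rule in RANSOMWARE_USE_CASE.items():
        if (event.get("source") == rule["log_source"]
                and event.get("event_id") == rule["event_id"]
                and any(v in event.get(rule["field"], "").lower()
                        for v in rule["values"])):
            hits.append(name)
    return hits


print(matches_use_case({
    "source": "sysmon",
    "event_id": 1,
    "command_line": "vssadmin delete shadows /all /quiet",
}))  # -> ['shadow_copy_deletion']
```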

Where do we go from here? You now have the foundational components: the data source questionnaire and your use-cases. In Part 2, we'll dive deep into the Windows ecosystem and get into the specific settings, configurations, and logs that will help you detect your priority use-cases. We'll show how ransomware can be detected by log, by field, and by value, and we'll walk through ensuring your logging configurations support those detections.

Let’s end where we started — if you could pick only one data source, what would it be? I picked endpoint telemetry as my preference. Agree or disagree, let me explain my thinking. When I think about the detection value of data, my mind goes straight to process execution and the question: where can I see it in endpoint data? As the saying goes, “Malware can hide… but it has to run…”

Until next time!