How should you evaluate a security vendor’s effectiveness at responding to threats? For many years, Mean Time to “X” (MTTX) metrics — measurements of the average time it takes a provider to detect, alert, respond, and so on — have provided a “good-enough” benchmark for executives in the absence of any other quantifiable security metric. This needs to change.
Our new white paper argues that the MTTX approach to security assessment is no longer a useful differentiator in an era of widespread machine-speed threat detection. With detection speeds now broadly comparable across providers, detection quality should be the name of the game. False positives and low-value, uncontextualized alerts contribute significantly to the growing problem of staff burnout and alert fatigue. To combat this, IT stakeholders need metrics that offer insight into a provider’s ability to sift signal from noise and deliver thorough, consistent, and accurate detections over time.
Our white paper will:
- Examine issues with MTTX metrics, such as the way looking only at averages downplays the risk of rare but damaging outlier attacks in the tail of the distribution (see the sketch after this list)
- Make the case that signal-to-noise ratio is a more relevant approach to measuring security effectiveness in the new paradigm of AI-driven cybersecurity
- Offer considerations for prospective MDR customers looking for the right provider — and recommend useful questions to ask companies on your shortlist
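To make the averages point concrete, here is a minimal sketch in Python showing how a single slow-burn outlier can hide behind a healthy-looking mean. The detection times are invented for illustration, not drawn from the white paper:

```python
import math

# Hypothetical detection times (hours) for ten incidents at one provider:
# nine fast, routine detections plus one slow-burn outlier attack.
detection_times = [0.2, 0.3, 0.2, 0.4, 0.3, 0.2, 0.5, 0.3, 0.4, 72.0]

# The mean looks tolerable despite the three-day outlier.
mean_ttd = sum(detection_times) / len(detection_times)

# A tail statistic (95th percentile, nearest-rank method) exposes it.
rank = math.ceil(0.95 * len(detection_times))
p95_ttd = sorted(detection_times)[rank - 1]

print(f"Mean time to detect: {mean_ttd:.1f} hours")  # 7.5 hours
print(f"95th-percentile TTD: {p95_ttd:.1f} hours")   # 72.0 hours
```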
Mean time to what, exactly?
MTTX metrics still serve a purpose, but a lack of industry standardization can make them difficult to parse — and some are more useful than others. Our white paper argues, for example, that mean time to detect is overvalued: because it is averaged only over the threats a provider actually catches, it says nothing about missed threats. On the other hand, knowing mean time to recover is relevant in assessing a provider’s mitigation capabilities.
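A minimal sketch of that blind spot, using a hypothetical incident log with invented numbers, where `None` marks a threat that was never detected:

```python
# Hypothetical incident log for one provider: time to detect in hours,
# or None when the threat was never detected at all.
incidents = [0.3, 0.2, None, 0.4, None, 0.1, 0.3]

detected = [t for t in incidents if t is not None]

# MTTD is averaged only over the threats that were actually caught...
mttd = sum(detected) / len(detected)

# ...so a separate measure is needed to surface the misses.
miss_rate = (len(incidents) - len(detected)) / len(incidents)

print(f"MTTD:      {mttd:.2f} hours")  # 0.26 hours -- looks excellent
print(f"Miss rate: {miss_rate:.0%}")   # 29% of threats never detected
```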
Measuring a company’s ability to find signal in the noise
Unfortunately, there is no single signal-to-noise metric — but there are ways of querying available data to better understand a vendor’s true capabilities. Things to look at include:
- Ingestion metrics — measurables like number of data sources and volume of logs analyzed
- Coverage metrics — the number of data sources analyzed per data type, such as coverage across multiple cloud SaaS sources
- False positives — the volume of false positives generated by detections over a given time period, or normalized per entity (endpoints, accounts, and so on), offers a view into noise levels
- Recall — the proportion of known attacks detected by the service (see the sketch after this list)
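As a minimal sketch of how these figures fit together, here is how recall, a precision-style signal-to-noise proxy, and per-endpoint false positives might be computed from a purple-team evaluation. All names and numbers are hypothetical, not a vendor’s actual reporting schema:

```python
# Hypothetical evaluation data from a staged purple-team exercise.
known_attacks = 40           # attacks staged during the evaluation window
attacks_detected = 36        # staged attacks that produced at least one alert
true_positive_alerts = 48    # alerts tied to a real staged attack
false_positive_alerts = 90   # alerts with no underlying attack
endpoints_monitored = 500

# Recall: share of known attacks the service actually detected.
recall = attacks_detected / known_attacks

# Precision: share of alerts pointing at a real attack -- a rough
# signal-to-noise proxy from the analyst's point of view.
precision = true_positive_alerts / (true_positive_alerts + false_positive_alerts)

# False positives normalized per endpoint, as suggested above.
fp_per_endpoint = false_positive_alerts / endpoints_monitored

print(f"Recall:           {recall:.0%}")           # 90%
print(f"Precision:        {precision:.0%}")        # 35%
print(f"FPs per endpoint: {fp_per_endpoint:.2f}")  # 0.18
```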
For key statistics, and to dig deeper with technical and business-facing resources, check out our Executive Summary.