Meeting 2022 Armed with Knowledge

Happy New Year!

The lessons last year taught us were very clear: malicious activity is increasingly going to be missed by automation.  And that isn't by accident, it is entirely by design.  As last year's attacks and data breaches are reviewed, I expect many of the reports to come to the same conclusion London Security witnessed among our customers over the course of 2021:

Bad actors know how to mask their activity, making it look normal.

Log4j was a good example of this.  Many fully automated security technologies struggled to handle malicious use of the vulnerability for a simple reason: they couldn't tell the difference between a normal event and a compromised one.  Because... how would they?  Even seasoned engineers looking for anomalous activity wouldn't necessarily suspect normal-looking events as a cause for concern, and AI is powerful... but very dumb.

What do I mean?  This kind of artificial intelligence operates on a pile of if-then statements: if this, then that.  And normally that's fine: if we see this kind of event, generate an alert or isolate that system.  But what if we see that event during normal operation?  What action do we take if we see it a couple dozen more times?  Do we act on the event?  Do we notify someone?  What do we do?
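To make the limitation concrete, here is a toy sketch of that if-then logic, not any real product's engine; the event names and verdicts are invented for illustration:

```python
# Toy sketch of a purely rule-based detector (illustrative only).
# Each event type maps to a fixed verdict, with no memory or context.
RULES = {
    "jndi_lookup": "alert",           # suspicious in a Log4j context...
    "ldap_outbound": "alert",
    "scheduled_task_created": "ignore",  # ...but routine in daily operation
}

def evaluate(event: str) -> str:
    """Apply the same if-then rule every time; context never enters."""
    return RULES.get(event, "ignore")

# The detector returns the identical answer whether this is the first
# occurrence or the hundredth, benign or hostile.
events = ["jndi_lookup", "scheduled_task_created", "jndi_lookup"]
print([evaluate(e) for e in events])
```

The point of the sketch: nothing in the rules distinguishes a routine lookup from a hostile one, which is exactly the gap a human analyst fills.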

There are options in these scenarios, but most require human intervention.  Humans can review daily reporting, flag excessive events of any kind, and dig deeper into those events to discover whether they were malicious.  But what if there are only a couple of those events, and they occur over a weekend, or at 3 AM?
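A quick sketch shows why a count-based review misses exactly those cases; the threshold and event names here are hypothetical, not drawn from any real report:

```python
from collections import Counter

# Hypothetical daily-report review: flag any event type seen "too often".
THRESHOLD = 50

def flag_excessive(daily_events):
    """Return event types whose daily count meets the review threshold."""
    counts = Counter(daily_events)
    return [name for name, n in counts.items() if n >= THRESHOLD]

# Two odd remote logons at 3 AM on a Saturday never trip the threshold,
# so nothing surfaces for a human to investigate; only the noisy (and
# perfectly benign) DNS traffic gets flagged.
weekend_log = ["remote_logon"] * 2 + ["dns_query"] * 500
print(flag_excessive(weekend_log))
```

Whatever the threshold, a patient attacker who stays under it generates no flag at all, which is why timing and context matter as much as volume.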

Sometimes it only takes a few oddly timed activities for a system to become fully compromised, and then your company wakes up to a full-blown security breach or outage.  Or the perimeter is breached, but the payload or action is delayed for several weeks while the attackers search for what is valuable within the organization and how best to extract it - via ransomware, data exfiltration, etc.  There are sophisticated attacks that follow an initial instance of "drive-by" malware, where an attacker sends out millions of potential attacks and only directly intervenes in the situations where there's a hit.

Theoretically, this is where the savvy security engineer will come in and save the day.  But in the current business climate, engineers are having to do more with less: less time, less staff, and often less budget.  If there aren't engineers available to investigate, the event gets flagged and maybe looked at later - it isn't high priority, so it doesn't stay front of mind.

And the problem is that by the time it does become a problem, you're already dealing with a security outage or data breach, and at that point no one is happy.

So the question becomes: how do we handle the growing number of threats that pretend to be normal Windows events, or masquerade as ordinary computer activity?  The simple answer is that we combine automation with human intervention - by working with MDR solutions that monitor networks 24/7 and dive deeper into events, saving your security team time and money by escalating only the critical threats for action.

Automation is the start, not the end goal anymore.  The attackers are getting too sophisticated, too smart, and too well funded for you to rely solely on AI technologies, however powerful, to defend your environment.

Talk to London Security today for more information, and we'll be happy to discuss how we protected customers from these kinds of attacks throughout 2021, and how we can help you in the new year.