Thinking like the bad guys is part of working in cybersecurity. Our ability to anticipate our opponents' moves is essential, just like in a chess game, except with real business consequences. Drawing on my prediction of worse spam to come in 2020, I had been thinking about the “perfect” phishing email for some time… then, I saw it in the wild!
I had to share it with you, because the common red flags that we train our users to look for were not there. This speaks to the evolution of malicious email, which can now deceive even the most diligent of recipients. Among the first things we instruct our users to check for (even before proper grammar and references to foreign royalty :) ) are:
1. who it is from (the name, the address, the user name)
2. what they are asking for / directing you to (the sign-in page, the bank transfer form, the video of your favorite celebrity)
But this particular hack leaves no indication that either is amiss, and results in the complete compromise of one of your most sensitive accounts – hence, the “perfect” phishing email. Read on to see how this is possible, and what to do about it.
Or, for help formulating a response to attacks like these, check out my Elite SMB IR Guide.
Imagine you receive a notification that your CEO has shared a file with you. The notification comes from an established file-sharing system and explains that your organization is integrating new software into your environment, which will require you to log in and allow certain permissions. Verifying that the source of the notification is legitimate, you click the link and are redirected to an API integration permissions page (as expected). You gloss over the EULA and allow the software access to your account so you don’t need to create yet another username and password. You even enter your 2FA code when it is sent to your mobile, not thinking anything is amiss… but, of course, it is – and you’ve just granted far-reaching permissions to a malicious piece of software.
While this hack does have some dependencies, they are common enough that most small to midsize organizations meet them. It depends on your organization using Microsoft’s file-sharing features in o365, through OneDrive, SharePoint, or Teams (which is integrated with one of them). Even if you aren’t using these solutions in your organization, you probably work with companies that do, and wouldn’t be surprised to see something shared this way.
Basically, the attacker creates a spoof profile that mimics an authority figure within your company. All they need to do is create a Microsoft account (this is free - no software purchase or identity verification required) and type in the authority figure’s name (publicly available on LinkedIn, or on your own website). This account does not need to be part of your domain, as these products allow files to be shared across different organizations.
Then, they create an API that interacts with your Microsoft profile (or whatever profile you’re “logging in with”), designed to do terrible things. In the context of “this is software we’re integrating,” your end-users are used to glossing over EULAs (if they read them at all) and enabling applications with all sorts of permissions.
Then, the spoofed user “shares” this API with you, by clicking share and typing in your email address (likely along with those of the rest of your staff as well).
If all goes according to plan, somebody clicks through on the sharing email, grants the API the permissions it requires, and voila – the threat actor now has complete control over your account, and anything else you use it to log in to.
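To make the “permissions it requires” step concrete, here is a minimal sketch of the kind of consent link such an application points a user at. This is illustrative only: the client ID and redirect URI are made-up placeholders, while the URL format and the scope names (broad Microsoft Graph delegated permissions, plus offline_access so the app keeps working without further log-ins) come from Microsoft’s public identity-platform documentation.

```python
# Illustrative only: the structure of a Microsoft identity platform consent URL.
# The client_id and redirect_uri are made-up placeholders; the scopes are real
# Microsoft Graph delegated permissions an over-privileged app might request.
from urllib.parse import urlencode

AUTHORIZE_ENDPOINT = "https://login.microsoftonline.com/common/oauth2/v2.0/authorize"

params = {
    "client_id": "00000000-0000-0000-0000-000000000000",  # attacker-registered app (placeholder)
    "response_type": "code",
    "redirect_uri": "https://example-integration.invalid/callback",  # placeholder
    "response_mode": "query",
    # Broad delegated scopes: read/write all of the user's files and mail,
    # plus offline_access so the app receives a refresh token and keeps
    # working without further log-ins (and therefore without further 2FA prompts).
    "scope": "offline_access Files.ReadWrite.All Mail.ReadWrite User.Read",
}

consent_url = f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}"
print(consent_url)
```

Recognizing a page built from a request like this, one asking for sweeping access to your files and mail, is the moment where the attack succeeds or fails.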
Just to be clear – the email came from a legitimate source: Microsoft. The name of the person in that email from Microsoft appeared accurate. The chances that you’re already doing business with Microsoft are very high, but even if you weren’t, you can imagine other reasons a software share would come from them – it could even be their software that “your CEO” (the attacker) tells you you’re trialing. Add to this the common assumption that files shared via this method are safe to access, and you have a volatile mix of perceived authenticity, authority, and reasonable context – all directing your user to hand over the keys to the vault.
It is not just the likelihood of successfully deceiving the user that makes this tactic “perfect”. The attacker has also effectively bypassed safeguards like two-factor authentication (2FA), because their API can now have your account do things without you (or anybody) needing to log in. Of course, it’s the log-in attempt that would normally prompt verification via 2FA – that trigger is simply not there anymore.
The attacker doesn’t need to have acquired credentials to pull this off – the barrier to entry for this tactic is very low.
It's also tough to detect: few o365 users log and review the activity of such APIs, and even those who do would need to be watching for connections coming in from uncommon IP addresses.
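If you do want to attempt that review yourself, here is a rough sketch of what it could look like using the Microsoft Graph sign-in logs. It assumes you can obtain a Graph access token with AuditLog.Read.All (sign-in logs also require the appropriate Entra ID / Azure AD licensing); the token value and the “expected” IP ranges below are placeholders.

```python
# Sketch: review recent o365 sign-in events for connections from unfamiliar IPs.
# Assumes you already hold a Microsoft Graph access token with AuditLog.Read.All
# (sign-in logs also need the appropriate Entra ID licensing). The token and the
# "expected" network prefixes below are placeholders.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-goes-here>"                  # acquire via MSAL or your own tooling
EXPECTED_PREFIXES = ("203.0.113.", "198.51.100.")   # your office/VPN ranges (example values)

def recent_signins():
    """Yield sign-in events, following Graph paging."""
    url = f"{GRAPH}/auditLogs/signIns?$top=50"
    headers = {"Authorization": f"Bearer {TOKEN}"}
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        yield from data.get("value", [])
        url = data.get("@odata.nextLink")

for event in recent_signins():
    ip = event.get("ipAddress") or ""
    if not ip.startswith(EXPECTED_PREFIXES):
        print(event.get("createdDateTime"),
              event.get("userPrincipalName"),
              event.get("appDisplayName"),
              ip)
```

Even then, this is a coarse filter: much of what a consented application does afterward rides on tokens it already holds and may not show up among interactive sign-ins at all, which is exactly why this tactic is so quiet.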
The nail in the coffin is that integrations tend to be poorly audited by technical teams - what indication do you (or your users) have of what integrated software actually does with your account? Typically, this type of connection is just used for easier sign-on, so you don’t need to remember another password. There is no “part of your day” where you are likely to uncover what the app has been doing with the wide-ranging permissions you have given it.
Preventing this attack from landing will be tough; no amount of training is likely to prevent users from clicking through on a share from a seemingly legitimate source. There are changes you can make to your business policies and processes that can help, like not using “Sign On With” options that enable far-reaching permissions for unknown applications, or having a documented/enforced process for sharing files/software within your organization.
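One concrete example of such a policy change, sketched here under assumptions you should verify for your own tenant: in Microsoft’s cloud, whether ordinary users can grant application permissions on their own is governed by the tenant’s authorization policy. The snippet below (placeholder access token, Policy.Read.All rights assumed) only reads that setting so you can see where you stand; actually tightening it, for example by requiring admin approval for new app consents, is a decision to make with whoever administers your tenant.

```python
# Sketch: check whether ordinary users in your tenant can consent to applications
# on their own. Assumes a Microsoft Graph access token with Policy.Read.All;
# the token value is a placeholder.
import requests

TOKEN = "<access-token-goes-here>"
resp = requests.get(
    "https://graph.microsoft.com/v1.0/policies/authorizationPolicy",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
policy = resp.json()

grant_policies = policy["defaultUserRolePermissions"].get("permissionGrantPoliciesAssigned", [])
if grant_policies:
    print("Users can self-consent to apps under:", grant_policies)
else:
    print("User self-consent is disabled; new app permissions need admin approval.")
```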
In terms of detection, the typical small to mid-sized organization is unlikely to have the tools to catch this on its own. My advice is to determine the appropriate processes for file sharing within your organization, and foster a culture where it’s ok to ask, “did this come from you?” Or, if you’re an ActZero client, you can integrate o365 logs into our solution, enabling our threat hunters to detect when compromises like this occur.
You can also tell your vCISO you want to investigate this possibility. Then, we can look at your Active Directory and Azure logs to determine which permissions your users have granted, and to which applications. We can also investigate whether your providers and processes for file and permission sharing could leave you vulnerable to tactics like these.
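If you want a head start on that conversation, the sketch below pulls the list of delegated permission grants from Microsoft Graph: which applications have been consented to, with what scopes, and for whom. It assumes a Graph access token with Directory.Read.All; the token value is a placeholder, and in practice your vCISO or admin would run something equivalent through their own tooling.

```python
# Sketch: list the delegated permission grants in your tenant - which applications
# have been consented to, with what scopes, and for whom. Assumes a Microsoft Graph
# access token with Directory.Read.All; the token value is a placeholder.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-goes-here>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
NAME_CACHE = {}

def get_all(url):
    """Yield every item from a Graph collection, following paging."""
    while url:
        resp = requests.get(url, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        yield from data.get("value", [])
        url = data.get("@odata.nextLink")

def display_name(path, obj_id):
    """Resolve a user or service principal id to a display name (best effort)."""
    if obj_id not in NAME_CACHE:
        resp = requests.get(f"{GRAPH}/{path}/{obj_id}", headers=HEADERS, timeout=30)
        NAME_CACHE[obj_id] = resp.json().get("displayName", obj_id) if resp.ok else obj_id
    return NAME_CACHE[obj_id]

for grant in get_all(f"{GRAPH}/oauth2PermissionGrants"):
    app = display_name("servicePrincipals", grant["clientId"])
    who = ("all users" if grant.get("consentType") == "AllPrincipals"
           else display_name("users", grant.get("principalId", "")))
    print(f"{app}: scopes [{(grant.get('scope') or '').strip()}] granted for {who}")
```

Anything in that list carrying broad scopes that nobody recognizes is worth investigating immediately.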