In The Art of War, Sun Tzu cautions, “Do not swallow bait offered by the enemy.” The warning has held true throughout history (the Trojan horse devised by Odysseus is an apt example), and it is even more fitting in today’s world of information security.
In cyber-enabled information warfare, hackers employ social engineering tactics, such as baiting (creating a false promise to trap a victim and give the hacker access to their system) and scareware (bombarding victims with false alarms and threats to the same end), to make their phishing scams credible. These tactics exploit a person’s immediate emotions to prompt behavior online that advances the hacker’s goals, and they continue to succeed because they rely on individual human behavior. Yet federal government advisories on how to “prevent” cyber attacks, and especially ransomware attacks, often do not take this human factor into account.
Hackers and malign actors have extensively used phishing, pretexting, social proof, and other social engineering tools since the early days of the Internet. What is truly pernicious is how social engineering is fusing with ransomware attacks to incredibly destabilizing effect.
Social engineering attacks rely on human error, which makes them a lucrative avenue for hackers to launch a cyber attack. Verizon’s 2021 Data Breach Investigations Report noted that social engineering breaches have seen an “astronomical rise” since 2017. These attacks involve hackers tricking their targets into taking a particular action, such as giving up their credentials, transferring information or funds, or triggering a ransomware attack. By far the most popular is phishing, which today makes up more than 80% of all social engineering attacks. While these tactics are nothing new, hackers in recent years have found more malicious and ingenious ways to use them, getting past even the most complex firewalls and email-scanning technologies.
In August 2019, for instance, the CEO of a U.K.-based energy firm transferred a large sum of money to a Hungarian executive. He believed he was on the phone with his boss, the chief executive of the firm’s parent company, who had asked him to urgently transfer funds to a foreign company.
Alas, it wasn’t his boss on the phone. Whoever was behind the attack successfully used AI-enabled voice-spoofing software to fool the CEO, who thought he recognized his boss’s slight German accent and tone of voice and so followed through on the request. This is a clear example of a social engineering tactic preying directly on the human mind: on trust, and on the limits of human cognitive faculties.
Similarly, the Microsoft 365 attack from earlier this year received much media attention, and it was particularly disruptive because of how the perpetrators used social engineering. The attack involved disguising a malicious HTML attachment as an Excel document inside a blank email with a subject line along the lines of “price revision.” After opening the file, the user was shown a pop-up notification stating that they had been signed out of Microsoft 365 and would need to log in again in order to view the spreadsheet. When the user did so, the fake login form sent their credentials directly to the hackers. The attack exploited the fact that most people generally trust pop-up prompts from Google, Microsoft, and other large, reputable platforms. Credential-harvesting tricks like this are a common entry point for hackers.
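The attack chain described above suggests one simple, automatable defense: checking whether an attachment’s claimed file type matches its actual contents. The sketch below, in Python, is illustrative only; the function name and the specific byte signatures checked are assumptions for demonstration, not part of any vendor’s scanning tools.

```python
def mismatched_attachment(filename: str, content: bytes) -> bool:
    """Return True if a file claiming to be an Office document is actually HTML.

    Illustrative heuristic only: a real mail scanner would inspect many more
    formats and signatures than this sketch does.
    """
    office_exts = (".xlsx", ".xls", ".docx", ".doc")
    claims_office = filename.lower().endswith(office_exts)

    # Peek at the first bytes of the file for telltale HTML markers.
    head = content.lstrip()[:64].lower()
    looks_html = (
        head.startswith(b"<!doctype html")
        or head.startswith(b"<html")
        or b"<script" in head
    )
    return claims_office and looks_html
```

A genuine .xlsx file is a ZIP archive and begins with the bytes `PK`, so HTML markup at the start of a file named “invoice.xlsx” is a strong sign of the disguise described above.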
These two events took place almost two years apart. The Cybersecurity and Infrastructure Security Agency’s latest advisory on social engineering attacks, from August 2020, provides useful advice, including paying attention to URLs, verifying email requests, and enabling multi-factor authentication. But it offers nothing beyond that when it comes to addressing the actual root of the problem: the limits of human cognitive ability.
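The first piece of that advice, paying attention to URLs, can be partly automated. The Python sketch below encodes a few common red flags; the allow-list and the heuristics themselves are illustrative assumptions, not an exhaustive or CISA-endorsed method.

```python
import re
from urllib.parse import urlparse

# Hypothetical allow-list of domains the user actually trusts.
TRUSTED_DOMAINS = {"microsoft.com", "google.com"}

def suspicious_url(url: str) -> list[str]:
    """Return a list of red flags found in a URL (empty list means none found)."""
    flags = []
    parsed = urlparse(url)
    host = parsed.hostname or ""

    if parsed.scheme != "https":
        flags.append("not HTTPS")
    if host.startswith("xn--") or ".xn--" in host:
        flags.append("punycode hostname (possible lookalike characters)")
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        flags.append("raw IP address instead of a domain name")

    # A trusted brand name embedded in an unrelated registered domain,
    # e.g. microsoft.com.evil.net, is a classic phishing pattern.
    registered = ".".join(host.split(".")[-2:])
    if registered not in TRUSTED_DOMAINS:
        for brand in TRUSTED_DOMAINS:
            if brand.split(".")[0] in host:
                flags.append(f"appears to impersonate {brand}")
    return flags
```

A link like `http://microsoft.com.evil.net/login` would trip two of these checks, while `https://microsoft.com/login` would pass cleanly; the point is that the machine can catch what a hurried, overloaded reader will not.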
Cybersecurity is inherently technical, and inherently human. Navigating this issue is difficult: people think about, interact with, and absorb information in different ways, which makes the human factor arguably the most vulnerable link in the entire attack chain. It is no surprise that these attacks are common entry points for hackers. Worse yet is the proliferation of social engineering combined with sophisticated AI techniques. The federal government and the private sector should stay vigilant about not just the technical advancements in cyber attacks, but also the manipulation that makes those attacks so effective against the human mind and its increasingly overwhelmed cognitive faculties.
Another Sun Tzu quote is appropriate here: “The whole secret lies in confusing the enemy, so that he cannot fathom our real intent.”