
The Death Of Security Awareness Training: Why AI Is Making It Obsolete

The Problem: AI-Powered Phishing Is Unstoppable

On August 10, 2024, I received an email from Disney+ reminding me to renew my subscription. As a cybersecurity expert, I scrutinized it carefully—checking the sender’s address, domain legitimacy, and authentication records. Everything seemed authentic. Yet, it wasn’t. The email was part of a sophisticated phishing campaign that exploited Proofpoint’s email relay system to bypass security protections. Over six months, these attackers sent three million spoofed emails daily, targeting brands like IBM, Coca-Cola, and Best Buy (Guardio Labs, 2024).

Source: Guardio Labs
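The checks described above can be partially automated. Below is a minimal Python sketch that parses a message's Authentication-Results header for SPF, DKIM, and DMARC verdicts; the raw message is a hypothetical stand-in, not the actual phishing email. The point of the Guardio case is precisely that all three checks can pass and the message can still be fraudulent.

```python
# Minimal sketch: inspecting a message's Authentication-Results header,
# the same SPF/DKIM/DMARC checks described above. RAW is a hypothetical
# example message, not the actual phishing email.
import re
from email import message_from_string

RAW = """\
From: Disney+ <no-reply@disneyplus.com>
Authentication-Results: mx.example.com;
 spf=pass smtp.mailfrom=disneyplus.com;
 dkim=pass header.d=disneyplus.com;
 dmarc=pass header.from=disneyplus.com
Subject: Renew your subscription

Your subscription is about to expire.
"""

def auth_results(raw_message: str) -> dict:
    """Extract spf/dkim/dmarc verdicts from the Authentication-Results header."""
    msg = message_from_string(raw_message)
    header = msg.get("Authentication-Results", "")
    verdicts = {}
    for mech in ("spf", "dkim", "dmarc"):
        m = re.search(rf"{mech}=(\w+)", header)
        verdicts[mech] = m.group(1) if m else "none"
    return verdicts

print(auth_results(RAW))  # {'spf': 'pass', 'dkim': 'pass', 'dmarc': 'pass'}
```

Because the campaign relayed its mail through a trusted provider, verdicts like these all read "pass", which is why header inspection alone could not have caught it.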

This case exemplifies the new reality: when phishing emails look, behave, and originate from seemingly legitimate sources, they are real as far as users are concerned. The advent of large language models (LLMs) and generative AI is only exacerbating this problem, allowing cybercriminals to generate sophisticated phishing messages, deepfake voices, and multimodal attacks with minimal effort. Security awareness training, designed to help users recognize scams, is no longer effective in this AI-driven landscape.


How AI Breaks Security Awareness Training


1. From Equality of Opportunity to Equality of Outcomes

Previously, cybercriminals needed technical expertise to craft convincing attacks. AI has eliminated this barrier, providing “equality of outcomes” by enabling even novice attackers to launch highly convincing phishing campaigns. AI-powered voice cloning now lets scammers impersonate real voices with near-perfect fidelity (TrueCaller Insights, 2021).

For instance, consider the rise of phone scams. Americans receive 4.5 billion robocalls monthly, resulting in $29.8 billion in losses in 2021. With AI voice replication tools, these scams are now hyper-personalized—attackers can clone a loved one’s voice, making the deception virtually indistinguishable from reality.

Security awareness training relies on users identifying “tells” like poor grammar or unusual accents. When AI eliminates those flaws, traditional training becomes useless.


2. Hyper-Personalized Attacks


Data breaches have exposed personal details of billions, including sensitive financial and employment records (Coyer, 2024). AI can now integrate this stolen data into phishing emails tailored to individual users, increasing their effectiveness.

For example, a scam email may reference past purchases, social media interactions, or even personal conversations—details AI pulls from public and stolen datasets. This level of hyper-personalization renders generic security awareness training ineffective.


3. Multimodal Attacks at Scale


Historically, cybercriminals relied on a single attack method—email, phone calls, or text messages. AI changes this by allowing attackers to orchestrate multimodal attacks across multiple channels simultaneously. A phishing email might be followed up with a deepfake phone call or a WhatsApp message from a seemingly familiar contact.

This synchronized, multi-channel approach overwhelms users’ ability to verify authenticity. Security awareness training, which focuses mainly on email-based scams, fails to prepare users for these AI-driven attack strategies.


The Consequences: A Trust Crisis in Digital Communication


1. The Mafia Code: Avoiding Digital Communication

Historically, mob bosses like John Gotti evaded wiretaps by conducting conversations while walking outside. A similar shift is happening online: as AI-powered fraud grows, users are beginning to distrust digital communication altogether (TrueCaller Insights, 2021).

Today, 87% of Americans no longer answer unknown phone calls, fearing scams. Email, messaging, and video conferencing platforms may soon face similar distrust, fundamentally altering digital communication.


2. AI Pilots and Data Passengers


Cybersecurity is shifting from user-based decision-making to AI-driven risk assessments. Just as an average museum visitor cannot distinguish a forged painting without expert verification, users will need AI-powered tools to detect AI-generated cyber threats (Vishwanath, 2023).

In this AI vs. AI arms race, end-users will become “passengers,” relying on AI-based filters and security protocols to determine authenticity, rather than making their own informed decisions.


3. Increased Centralization and Mono-Technology Culture

AI-driven cybersecurity solutions will consolidate control among a few major tech providers. The CrowdStrike incident in 2024 demonstrated the risks of this concentration: a single poorly coded patch crippled millions of endpoints worldwide (Vishwanath, 2024).

As AI centralizes security decision-making, organizations will lose control over individual security measures, increasing their dependency on tech giants.


Solutions: Rethinking Security Awareness for the AI Era

1. Personalized, Adaptive Training

Instead of one-size-fits-all training, security awareness must be personalized using cognitive-behavioral risk assessments. AI can tailor security training to users’ specific vulnerabilities, providing real-time, adaptive training rather than generic phishing simulations (Vishwanath, 2023).
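As one illustration of what such adaptive training might look like, the sketch below maps a handful of observed behaviors to a risk score and a training cadence. The behaviors, weights, and thresholds are illustrative assumptions, not a published assessment instrument.

```python
# Hypothetical sketch of risk-scored adaptive training. All weights and
# module names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class UserRiskProfile:
    clicked_sim_phish: int   # simulated-phish clicks in the last quarter
    reported_phish: int      # suspicious emails the user reported
    reused_passwords: bool
    uses_mfa: bool

def risk_score(p: UserRiskProfile) -> float:
    """Crude 0-1 score: higher means riskier observed behavior."""
    score = 0.0
    score += min(p.clicked_sim_phish, 5) * 0.15   # clicks raise risk
    score -= min(p.reported_phish, 5) * 0.05      # reporting lowers it
    score += 0.2 if p.reused_passwords else 0.0
    score += 0.0 if p.uses_mfa else 0.15
    return max(0.0, min(1.0, score))

def training_plan(p: UserRiskProfile) -> list[str]:
    """Pick a cadence matched to the individual's score, not a blanket plan."""
    s = risk_score(p)
    if s >= 0.5:
        return ["one-on-one coaching", "weekly micro-lessons"]
    if s >= 0.2:
        return ["monthly micro-lessons"]
    return ["annual refresher"]
```

A user who clicks simulated phish and reuses passwords would be routed to intensive coaching, while a vigilant reporter gets only a light refresher, which is the opposite of today's one-size-fits-all simulations.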


2. Dynamic Security Policies

Current security policies apply blanket restrictions without context. AI-driven policies should adjust dynamically based on individual user risk profiles and real-time activity, enhancing both security and usability.
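A dynamic policy of this kind might, for instance, decide how to handle an inbound attachment by combining the user's standing risk score with real-time context instead of applying one blanket rule. The signals and thresholds below are assumptions for illustration.

```python
# Illustrative sketch: a context-aware policy decision that combines a
# user's standing risk score with real-time signals. Thresholds are
# assumptions, not recommendations.
def attachment_policy(user_risk: float, new_device: bool, off_hours: bool) -> str:
    """Return an action for an inbound attachment instead of a blanket block."""
    realtime = (0.2 if new_device else 0.0) + (0.1 if off_hours else 0.0)
    combined = min(1.0, user_risk + realtime)
    if combined >= 0.6:
        return "sandbox-and-hold"       # detonate in a sandbox, hold for review
    if combined >= 0.3:
        return "sandbox-then-deliver"   # scan first, then deliver
    return "deliver"
```

The same attachment is held for a high-risk user on a new device but delivered untouched to a low-risk user in a familiar context, improving both security and usability.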


3. Creation of Private Modalities

Organizations should develop AI-driven, private communication channels that authenticate users beyond traditional two-factor authentication. AI-generated suspicion scores could flag messages that exhibit signs of deception, helping users assess risks in real-time.
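A crude version of such a suspicion score could combine a few deception signals: urgency language, a mismatch between a link's text and its destination, and an unfamiliar sender. A production system would use trained models rather than word lists; this is only a toy heuristic with assumed weights.

```python
# Toy suspicion-score heuristic. Real systems would use trained models;
# the word list and weights here are illustrative assumptions.
URGENCY = {"urgent", "immediately", "verify", "suspended", "act now"}

def suspicion_score(text: str, sender_known: bool, link_mismatch: bool) -> float:
    """Score 0-1: how many deception signals does this message show?"""
    t = text.lower()
    hits = sum(1 for w in URGENCY if w in t)   # urgency-language count
    score = 0.15 * hits
    score += 0.0 if sender_known else 0.25     # unfamiliar sender
    score += 0.35 if link_mismatch else 0.0    # link text != destination
    return min(1.0, score)
```

A message like "URGENT: verify your account immediately" from an unknown sender with a disguised link maxes out the score, while a routine note from a known contact scores zero; the score could be surfaced to users as a real-time warning.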


Conclusion

Security awareness training, as we know it, is obsolete. The same AI technologies that make phishing more effective must now be leveraged to protect users. By embracing adaptive training, dynamic security policies, and AI-driven trust mechanisms, organizations can counter AI-driven threats and establish a more resilient cybersecurity future.


*Earlier versions of this paper were presented at the NSF Workshop on LLMs and Network Security, held at NYU in October 2024, and at ConnectCon 2024 in Las Vegas, Nevada. This article is a forthcoming chapter in Large Language Models for Network Security, edited by Quanyan Zhu and Cliff X. Wang, Springer-Nature, Boston, MA.

The Colonial Pipeline Hack Was Avoidable

The Colonial Pipeline hack is now making the news, and many cybersecurity experts are offering their takes on how to recover from it.

Of course, while this attack is new, such attacks aren’t. The Sony Pictures hack was also ransomware, and in 2016 many such attacks were occurring. In response, I wrote a piece for CNN asking whether 2016 would be the year of online extortion. This was after ransomware attacks on hospitals in California and Kentucky.

I had provided pointed solutions and called for a focus on users rather than solely on technology. After all, users are the ingress points for ransomware, which almost always comes in via spear phishing.

Unfortunately, every year since 2016 has brought bigger and more successful ransomware heists. The Verizon DBIR 2020 shows exactly how these attacks come in: through spear phishing.

And all along, we have ignored – and continue to ignore – user weaknesses, focusing instead on technical issues, almost always after a crippling breach.

This time, we are all paying a direct price at the gas pumps. Who knows what’s coming next?

The solutions from then are just as pertinent today. Here’s my article in CNN from 2016. [Original can be found on the CNN website]


“This week, a hospital in western Kentucky was the latest organization to fall victim to a “ransomware” attack – a class of malware that encrypts all the files on a computer, only releasing them when a ransom is paid to the hacker holding the encryption key.

In this case, the hospital did not pay up. However, other hospitals, law firms, small businesses and everyday citizens have already paid anywhere from $200 to $10,000 in ransoms. Indeed, based on complaints received between April 2014 and June 2015, the FBI estimated that losses for victims from just one of these malware strains were close to $18 million.

Sadly, this year could well be worse.

Ransomware has existed for some time, with the earliest strains dating back to the late 1980s. Back then, most was developed by enthusiasts – individuals testing out their skills. In contrast, today’s ransomware is often developed by global software teams that constantly update their code to evade antivirus software and sell it as an off-the-shelf product.

Already, newer strains appear capable of infecting mobile devices, of encrypting files stored on cloud servers through mapped, virtual drives on computers, and of transitioning to the “Internet of Things” – infecting gadgets like watches and smart TVs that are going online. In the near future, an attack that locks us out of our car – or worse, locks us in it while we drive – and demands an immediate ransom is increasingly plausible.

Thanks to the Internet, this malware-for-hire is available to virtually anyone, anywhere, with criminal intent. Making things easier for hackers is the availability of Bitcoin, the online currency that makes monetary transactions difficult to trace. And making things even easier for them is our inability to stop spear phishing – those innocuous-looking emails whose attachments and hyperlinks conceal the malware.

All this makes anyone with minimal programming skills and a free email account capable of inflicting significant damage, and with everyone from presidents to pensioners using emails today, the virtual pool of potential victims is limitless. No surprise then that cybersecurity experts believe that 2016 could well be the “Year of Online Extortion.”

But we can stop these insidious attacks, if everyone – individuals, organizations and policy makers – works towards a solution.

First, everyone must be taught to spot, sequester, and deal with spear phishing emails. This requires cybersecurity education that is free and widely available, which is presently not the case. While different training programs exist, most cater to large organizations, and are outside the reach of households, senior citizens and small businesses, who remain vulnerable.

What we also need is training that helps people develop better “cyber hygiene.” This includes teaching people to frequently update anti-virus software, appropriately program firewalls, and routinely back up their computers on discs that are then disconnected from the network. In addition, people should be taught how to deal with a ransomware attack and stop its spread by quickly removing connected drives and disconnecting from the Internet.

Second, organizations must do more to protect computer networks and employees. Many organizations continue to run legacy software, often on unsupported operating systems that are less secure and far easier for hackers to infiltrate. Nowhere is this problem more pressing than in small businesses, health care facilities, and state and federal government institutions, which is why they are the sought-after targets of ransomware.

Besides updating systems, organizations need to overhaul the system of awarding network privileges to employees. The present system is mostly binary, giving access to employees based on their function or status in the organization. Instead, what we need is a dynamic network-access system that takes into account the employees’ cyberrisk behaviors, meaning only employees who demonstrate good cyber hygiene are rewarded with access to various servers, networks, and programs through their devices.
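One way to picture such a hygiene-based access system: weight a few observable behaviors and unlock network tiers as the score rises, rather than granting access by job title. The behavior names, weights, and tier cut-offs below are illustrative assumptions.

```python
# Hypothetical sketch of the dynamic network-access idea above: access is
# earned through demonstrated cyber hygiene rather than function or status.
# Behavior names, weights, and tier cut-offs are illustrative assumptions.
HYGIENE_WEIGHTS = {
    "patched_os": 0.3,
    "backups_current": 0.2,
    "mfa_enabled": 0.3,
    "no_sim_phish_clicks": 0.2,
}

def hygiene_score(behaviors: dict) -> float:
    """Sum the weights of the behaviors the employee demonstrates (0-1)."""
    return sum(w for b, w in HYGIENE_WEIGHTS.items() if behaviors.get(b))

def network_privileges(behaviors: dict) -> list[str]:
    """Unlock progressively sensitive tiers as hygiene improves."""
    s = hygiene_score(behaviors)
    tiers = ["email"]                        # baseline access for everyone
    if s >= 0.5:
        tiers.append("internal file shares")
    if s >= 0.8:
        tiers.append("production servers")
    return tiers
```

Under such a scheme, an employee who patches, backs up, uses MFA, and avoids simulated phish earns production access, while one who only enables MFA stays at the baseline tier until their hygiene improves.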

Finally, policy makers must work to create a cyber crime reporting and remediation system. Most local law enforcement today is ill-equipped to handle ransomware requests, and harried victims usually have limited time to comply with a hacker’s demand. Many, therefore, turn to their family and friends, who themselves have limited expertise. Worse yet, some have no choice but to turn to the hacker, who in many cases provides a chat window to guide the victim through the “remediation” process.

What we urgently need is a reporting portal that is locally available and staffed by cybersecurity professionals, so people can quickly report a breach and get immediate support. A model for such a system already exists: the 311 system for reporting nonemergency municipal service issues, which many cities across the nation have adopted and which allows reporting via email, telephone, and smartphone apps. Strengthening this system with the resources to hire and train cybersecurity professionals could go a long way toward stopping ransomware attacks that are now making their way past Main Street into everyone’s homes.

Perhaps the best way to look at the problem is this: How safe would we feel in a city where people are routinely being held hostage? Well, cyberspace is our space. And we have to make it safe.”