A growing share of the world’s spam is now being generated with the help of AI tools, according to new research from Barracuda, Columbia University, and the University of Chicago.
Analysis of a dataset of unsolicited and malicious emails sent between February 2022 and April 2025 found AI-generated content in 51% of spam messages and 14% of business email compromise (BEC) attacks.
The study, released June 18, marks one of the first data-driven efforts to measure the real-world use of generative AI in large-scale cyberattacks.
Researchers trained detection tools to identify AI-generated content by comparing messages sent before and after the release of ChatGPT in November 2022.
AI drives scale, not sophistication — yet
Researchers found that AI-generated spam emails tend to use more formal, grammatically accurate, and fluent language than their human-written counterparts. These linguistic improvements likely help messages avoid spam filters and appear more convincing to recipients.
Threat actors are also using AI to test subtle wording variations, assessing which ones are more successful at bypassing detection systems and persuading recipients to engage. The result is a higher volume of slightly tweaked messages sent in rapid succession, raising the odds of success without requiring new tactics.
Tactics remain the same, language improves
Despite advancements in language quality, the core strategies employed in phishing and BEC remain largely unchanged. Urgency, impersonation, and financial lures continue to be the primary tactics. What AI appears to change is the polish and scale of these attacks.
“Our analysis suggests that by April 2025, the majority of spam emails were not written by humans, but rather by AI,” said Asaf Cidon, associate professor of electrical engineering and computer science at Columbia University.
“For more sophisticated attacks, like Business Email Compromise, which require more careful tuning of the content to the victim’s context, the vast majority of emails are still human-generated, but the volume that is generated by AI is steadily and consistently increasing,” he continued.
AI-written emails in the dataset frequently mimicked native-level English fluency, even when targeting regions where English is the primary language. This trend makes it more challenging for end users to detect anomalies or recognize phishing attempts solely based on tone or phrasing.
What MSPs can do for clients in the new landscape
As generative AI becomes a common feature of spam infrastructure, MSPs should look beyond basic filtering to address the evolving threat. AI-enabled email security solutions with multilayered detection are increasingly necessary. But equally important is training users to recognize credible-looking phishing messages and report them early.
MSPs can also use phishing simulations, customize policy rules, and apply behavioral analytics to flag suspicious activity. These tools, combined with user education, give clients a better chance to detect and block AI-powered threats before damage is done.
Barracuda also recently unveiled a new platform for MSPs seeking to scale their security offerings effectively. Read our interview with Brian Downey, VP of product management at Barracuda, to learn more.