AI-Powered Phishing Scams Surge, Threatening Online Security in 2025
As 2025 begins, cybersecurity experts are warning of a sharp rise in AI-powered phishing scams that are more personalized and convincing than ever before. These attacks, which leverage artificial intelligence to craft hyper-targeted emails, are bypassing traditional defenses and posing a significant threat to individuals and businesses alike.
According to recent reports, cybercriminals are using AI tools to analyze online profiles and social media activity, enabling them to create bespoke phishing emails that mimic trusted sources. Major companies, including eBay, have already issued warnings about the rise of fraudulent emails containing personal details likely obtained through AI analysis.
Check Point, a leading cybersecurity firm, predicted this trend in late 2024, stating that AI would enable cybercriminals to craft highly targeted phishing campaigns and adapt malware in real-time. “Security teams will rely on AI-powered tools, but adversaries will respond with increasingly sophisticated, AI-driven phishing and deepfake campaigns,” the firm warned.
The Financial Times reports that AI bots can quickly ingest large amounts of data about a company or individual’s tone and style, replicating these features to create convincing scams. Nadezda Demidova, a cybersecurity researcher at eBay, told the FT, “The availability of generative AI tools lowers the entry threshold for advanced cybercrime. We’ve witnessed a growth in the volume of all kinds of cyberattacks.”
Jake Moore, a cybersecurity expert at ESET, emphasized the growing challenge of mitigating these attacks. “Social engineering has an impressive hold over people due to human interaction, but now as AI can apply the same tactics from a technological perspective, it is becoming harder to mitigate unless people really start to think about reducing what they post online,” he said.
In response to the escalating threat, Google has introduced new AI models to strengthen Gmail's cyber defenses. However, experts warn that AI can also break traditional detection patterns, making every email unique and harder to identify as fraudulent. “AI has increased the power and simplicity for cybercriminals to scale up their attacks,” Moore added.
As phishing scams become more sophisticated, cybersecurity professionals urge individuals and organizations to remain vigilant. Key recommendations include enabling two-factor authentication, using strong and unique passwords or passkeys, and avoiding clicking on suspicious links. “Ultimately, whether AI has enhanced an attack or not, we need to remind people about these increasingly more sophisticated attacks and how to think twice before transferring money or divulging personal information,” Moore concluded.
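One of the recommendations above, avoiding suspicious links, can be partly automated. As a minimal illustration only (not the approach Gmail or any vendor actually uses), the following Python sketch flags links whose host is neither a trusted domain nor one of its subdomains, a simple guard against lookalike hosts such as `ebay.com.account-verify.io`; the `TRUSTED_DOMAINS` allowlist is a hypothetical example:

```python
from urllib.parse import urlparse

# Hypothetical allowlist: domains the recipient genuinely does business with.
TRUSTED_DOMAINS = {"ebay.com", "google.com", "paypal.com"}

def is_suspicious(url: str) -> bool:
    """Return True unless the link's host is a trusted domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower().rstrip(".")
    return not any(
        host == domain or host.endswith("." + domain)
        for domain in TRUSTED_DOMAINS
    )

print(is_suspicious("https://signin.ebay.com/ws"))          # False: real subdomain
print(is_suspicious("https://ebay.com.account-verify.io"))  # True: lookalike host
```

A real mail filter would go much further (reputation scores, homograph detection, sender authentication such as SPF/DKIM/DMARC), but the sketch shows why naive string matching on "ebay.com" is not enough: the trusted name must be the registered domain, not merely a substring of the host.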