The advent of generative artificial intelligence has reshaped the digital landscape, offering innumerable opportunities for improved efficiency, creativity and automation. In medicine alone, we are already seeing advances ranging from the interpretation of radiological scans to psychotherapy and telehealth triage.
However, there have been adverse developments such as "deepfakes," which use a form of artificial intelligence called deep learning to fabricate highly realistic images and video of events that never happened.
The worst may be yet to come, and soon. The impending emergence of "agentic" AI represents a significant shift in the cybersecurity domain — one that could usher in a terrifying era of cybercrime.
When OpenAI unveiled ChatGPT in 2022, the world marveled at its capabilities: drafting emails, generating poetry, creating digital artwork and mimicking human conversation. While the technology had tremendous potential, security experts worried about its possible misuse.
In the years since, generative AI has mainly enhanced existing cyberattack strategies rather than introduced novel threats. Phishing scams, social engineering attacks and fraudulent communications have all become more sophisticated, with AI-generated text eliminating the traditional red flags of poor grammar and spelling errors. However, despite these refinements, AI tools like ChatGPT still operate within predefined boundaries that prevent outright malicious activity.
These limitations have led cybercriminals to create underground AI models designed specifically to bypass ethical safeguards. Some of these illicit AI tools have been used to generate deepfake content, including fake nude photographs for blackmail. Others have been exploited to enhance business email compromise scams by fabricating convincing fraudulent messages. While these threats are serious, they are still variations of traditional cyberattacks.
However, with the development of "agentic" AI, cybercrime may take on an entirely new dimension.
Unlike the AI chatbots we know today, agentic AI represents a transformative leap forward. Companies such as Google, Amazon, Microsoft and Salesforce are developing AI-powered "agents" capable of independently analyzing data, planning tasks, and executing actions without continuous human oversight. These agents are poised to become invaluable assistants for businesses and individuals, automating customer service, financial planning and medical consultations.
What happens when these AI agents are used by bad actors? Cybercriminals leveraging agentic AI could unleash a torrent of autonomous cyberattacks. Unlike traditional scams, which rely on human effort to identify, target and manipulate victims, agentic AI could fully automate these processes. Imagine an AI-driven cybercriminal operation capable of:
—Ransom-based extortion: Scanning massive databases of stolen personal information to pair leaked Social Security numbers with email addresses, then crafting convincing ransom threats.
—Automated social engineering: Scraping public social media feeds to collect baby photos, which are then used by another AI agent to fabricate fake kidnapping threats.
—Targeted business scams: Analyzing LinkedIn profiles to infer corporate email structures, then sending high-quality phishing emails that appear to come from executives, tricking employees into compromising security.
—Romance scams at scale: Using public divorce records to identify vulnerable targets, then deploying AI agents to engage in long-term, emotionally manipulative conversations designed to extract money.
Cybercrime is no longer just a corporate concern. Very soon, individual users and small-business owners may find themselves on the front lines of AI-driven cyberattacks. A single compromised device could lead to a broader network attack, affecting entire organizations. Conversely, breaches within companies could expose customer data, amplifying the risk of identity theft and financial fraud.
As AI-generated cyberthreats grow more sophisticated, businesses and individuals must adopt a proactive approach to cybersecurity. Enhanced authentication methods, AI-driven security monitoring, and continuing education about evolving threats will become indispensable defenses against AI-powered attacks.
While agentic AI poses significant risks, it also offers new opportunities for defense. The same AI technology that could be weaponized by cybercriminals can also be harnessed to protect against them. Future cybersecurity measures may involve:
—AI-driven threat detection: Automated systems that scan for anomalies, detect phishing attempts, and neutralize threats before they cause harm.
—Personalized security assistants: AI agents that guide users on safe online practices, helping individuals recognize potential scams and avoid digital pitfalls.
—Proactive defense networks: AI-powered systems that continuously monitor global cyberthreats and share real-time intelligence across industries.
The AI revolution is irreversible. Major corporations are investing heavily in agentic AI, ensuring that it will continue to advance. The question is no longer whether AI will change the digital security landscape but how we will adapt to it.
Henry I. Miller, a physician and molecular biologist, is the Glenn Swogger Distinguished Fellow at the American Council on Science and Health. He was the founding director of the FDA's Office of Biotechnology.