AI Agents Open Door To New Hacking Threats

AI Agents Open Door To New Hacking Threats

AI agents are autonomous systems powered by large language models. They can perform multi-step tasks, like scheduling your meetings, coding for you, handling your marketing, especially email and WhatsApp marketing, and browsing. AI chatbots are limited to conversations only; however, AI agents can interact with external tools and APIs. It makes them more powerful in the AI world but also more vulnerable. 

They can follow plain-language instructions, which makes their manipulation easier because even non-technical users or attackers can influence them. The perplexity AI warns that “We’re entering an era where cybersecurity is no longer about protecting users from bad actors with a highly technical skillset.”

The New Breed of Cyber Threats

The New Breed of Cyber Threats

Attackers may inject malicious inputs that trick the agents into ignoring original instructions. These inputs may be embedded in emails, websites, and documents. They push agents to execute harmful commands and pose hybrid threats when combined with traditional exploitation methods. 

That’s why Dr. Marwan Omar, a senior AI security engineer, says that “These vulnerabilities exploit the autonomous nature of agentic AI, allowing attackers to hijack agents into performing unauthorized actions, such as data exfiltration or account compromise.”

Thomas Urbain stated that “Cybersecurity experts are warning that artificial intelligence agents, widely considered the next frontier in the generative AI revolution, could wind up getting hijacked and doing the dirty work for hackers.”

Agents attached with tools and have access to the file system and APIs, and the browser can be hijacked to download malware. A security researcher, Johann Rehberger, emphasizes that “The real nightmare is autonomous propagation, which can infect an entire AI system without human input.”

Now, a new hacking trick, a zero-click exploit, is prevalent in the market that puts devices in activation mode even without user interaction. This is a harmful technique that increases the risk of stealth attacks. 

Similarly, dark web AI tools like fraudGPT and WormGPT are also built for phishing. They are well-trained in crypto scams and optimized for malware and business email compromise. They are sold at a very low price, just for $100, which makes advanced scamming too easy and accessible. 

Jitendra Vaswani confirms it, saying that “Cybercriminals now peddle ‘evil’ language models like FraudGPT and WormGPT on darknet forums for as little as $100.” Maya Pillai explains, “WormGPT is the most aggressive, optimized for phishing, malware, and BEC attacks at scale.” She further adds that “FraudGPT is financially targeted, built for identity theft, fake docs, and crypto scams.”

There have been numerous high-profile attacks on AI agents. In August 2025, Claud AI also went through a similar attack.  According to the Anthropic Threat Intelligence Report, “The actor targeted at least 17 distinct organizations, including in healthcare, the emergency services, government, and religious institutions.”

They further added that “Rather than encrypt the stolen information with traditional ransomware, the actor threatened to expose the data publicly in order to attempt to extort victims into paying ransoms that sometimes exceeded $500,000.”

What Needs to Change?

Now, human-in-the-loop oversight is more required than ever. AI agents shouldn’t automate tasks; rather, they should require human approval before performing sensitive actions, such as data exports. All financial data access should be limited to humans only, especially for tasks involving transactions and account details.

AI agents’ access to external tools should be more limited, and in case a complete handover is needed, the user should first try sandboxing. Before deploying any new tool, adversarial testing should be conducted using simulated prompt injections. It is also crucial to detect how the agent makes decisions while detecting anomalies and silent breaches. 

Similar Posts

Leave a Reply

Your email address will not be published. Required fields are marked *