Artificial intelligence is often praised as a game-changer in the fight against cybercrime, powering smarter defences, faster detection, and more efficient security operations. But what happens when the same technology falls into the wrong hands? Increasingly, hackers are exploiting AI to automate attacks, create convincing phishing campaigns, bypass traditional defences, and even generate malicious code at scale.
This dark side of AI is forcing organisations to rethink how they protect their systems, not just against human adversaries but against intelligent, adaptive threats that learn and evolve. In this blog, we'll explore how cybercriminals are weaponising AI, the risks it poses to businesses and individuals, and why proactive security strategies are more critical than ever.
Spear phishing is a targeted form of phishing where attackers use personal information about their victims to increase credibility. Unlike a generic "Dear customer" email, a spear phishing attempt might reference a person's employer, colleagues, or recent activity, making the message feel far more convincing. What once required days of manual research can now be automated with AI, and this automated exploitation gives attackers unprecedented scale and precision.
AI is being used heavily in these spear phishing campaigns. Anthropic, a major AI research company, revealed in its blog post "Detecting and countering misuse of AI" that its Claude model was leveraged in "large-scale extortion operations" and even in "a fraudulent employment scheme from North Korea." In the extortion case, attackers used Claude to generate psychologically tailored ransom demands, analyse stolen financial data to set ransom amounts, and, with the help of Claude Code, automate reconnaissance, credential harvesting, and even network penetration. This is a glimpse of "agentic AI", where malicious actors grant an AI system autonomy to execute every stage of an attack. The implications are chilling: the barrier to cybercrime is lowered dramatically, enabling near "zero-knowledge" attacks that require minimal skill from the human operator.
The fraudulent employment scheme paints an equally alarming picture. According to Anthropic, North Korean state actors used AI to create convincing résumés, ace remote job interviews, and even perform day-to-day technical tasks inside U.S. Fortune 500 companies. Employment fraud isn't new, and the FBI has issued multiple warnings about the tactic, but AI makes it scalable, persistent, and far harder to detect. Beyond data leaks, these schemes could result in salaries being funnelled directly to sanctioned regimes under the guise of legitimate employment.
AI-driven automated exploitation transforms traditional threats into industrialised operations. What used to be the work of advanced cybercriminals or state-sponsored groups is now within reach of anyone with access to a chatbot, reshaping the threat landscape in dangerous new ways.
Developers often talk about "vibe coding": the practice of leaning on large language models (LLMs) to write or tweak code. At first glance, it feels like magic: type a prompt, get back a function, and move on faster than ever before. For filling in small gaps or speeding up boilerplate, it can be a useful shortcut. But underneath that efficiency lurks a serious problem: AI-generated code is frequently insecure.
Veracode's recent GenAI Code Security Report shines a harsh light on just how dangerous this can be. In its testing of four popular languages (C#, Python, JavaScript, and Java), AI-generated code repeatedly failed to meet even basic security standards. The numbers are staggering: C# produced insecure code 44.73% of the time, introducing exploitable vulnerabilities in nearly half of the cases. The other languages fared no better, with all four producing unsafe code across the board. Even more worrying, Veracode tracked performance across different model releases and sizes and found no improvement: larger, newer models are still making the same fundamental mistakes.
The vulnerabilities uncovered weren't obscure edge cases either: Veracode focussed on four of the most common and well-documented weaknesses, and in every case the AI-generated output proved insecure. Additional research from professors at the University of Naples reinforced this picture, finding that AI-written code often contained more high-severity vulnerabilities than human-written code. When the models were tasked with memory-unsafe languages like C, the results were catastrophic, with code riddled with fatal security flaws.
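To make the risk concrete, here is a minimal sketch in Python of the kind of well-documented weakness such testing flags. SQL injection is used purely as an illustrative assumption (the report's exact weakness categories aren't reproduced in this post): the first function shows the string-formatted query an LLM will often suggest, the second shows the reviewed, parameterised fix.

```python
# Illustrative sketch only: SQL injection is assumed as the example weakness.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Typical AI-suggested pattern: building the query with string formatting.
    # An input like "x' OR '1'='1" changes the meaning of the query entirely.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The reviewed fix: a parameterised query, so the input is treated as data,
    # never as SQL syntax.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, username TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    crafted = "x' OR '1'='1"
    print(find_user_unsafe(conn, crafted))  # returns every row
    print(find_user_safe(conn, crafted))    # returns nothing
```

Running the script shows the unsafe version handing back every row for a crafted input while the safe version returns nothing, which is exactly the sort of behaviour a code review or static analysis pass should catch before the code ships.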
The lesson is clear: vibe coding isn't a free pass to safe, production-ready code. Every line still requires the same level of scrutiny, review, and testing you'd apply if you had written it yourself. Treat AI as a helper for small, simple functions, not as a replacement for your own knowledge or diligence. Never rely on it to bridge large gaps in understanding, because that's where hidden vulnerabilities thrive. And always, always be mindful of what you share: uploading proprietary code to cloud-based AI tools risks exposing sensitive information. History has already shown how fragile these platforms can be: ChatGPT, DeepSeek, Meta, and X's Grok have all suffered leaks or vulnerabilities that exposed private conversations.
Vibe coding may feel fast and futuristic, but without discipline, it can just as easily turn into a fast track for attackers. The convenience of AI should never come at the expense of security.
AI is transforming cybersecurity, but not always for the better. On one side, attackers are exploiting automation to industrialise their campaigns, using AI to scale spear phishing, extortion, and even insider threats through fraudulent employment schemes. On the other, well-meaning developers risk introducing critical flaws into their own systems by relying too heavily on AI-generated code. Both paths highlight the same uncomfortable truth: when misused, AI doesn't just lower the barrier to cybercrime; it lowers the barrier to catastrophic mistakes.
The challenge isn't whether AI will shape the future of security. It already has. The challenge is whether organisations and individuals will adapt quickly enough to defend against it. That means staying vigilant, testing and verifying relentlessly, and never mistaking convenience for safety.
Now is the time for action:
- Treat every line of AI-generated code as untrusted until it has been reviewed, scanned, and tested.
- Train staff to recognise AI-enhanced spear phishing, which can now be personalised at scale.
- Tighten identity checks in remote hiring to guard against fraudulent employment schemes.
- Keep proprietary code and sensitive data out of cloud-based AI tools unless clear policies allow it.
AI is a powerful tool, but like any tool, it can be used to build or to break. The difference comes down to how carefully we choose to wield it.