Sophos Group Ltd.

10/31/2025 | Press release | Distributed by Public on 10/31/2025 05:01

Phake phishing: Phundamental or pholly

On paper, it sounds so simple: you prepare for the real thing by running simulations. After all, the same principle applies to countless disciplines: sports, the military, transport, crisis preparedness, and many more. And, of course, to various aspects of cybersecurity, including red teaming, purple teaming, Capture-The-Flag (CTF) contests, and tabletop exercises. Is phishing any different?

The answer: it's not, at least in theory. It all comes down to execution, and we've seen several mistakes organizations make when implementing phishing training. Four of the most common, in our experience, are:

  • Making phishing simulations an exercise in tick-box compliance, without putting much thought into the design of campaigns, the quality of the lures, or the cadence of simulations - which means that training campaigns don't bear much resemblance to genuine attacks, and users can become fatigued
  • Skewing results by making phishing simulations 'unfair' - crossing ethical boundaries and causing users stress and uncertainty with scare tactics designed to deceive them. For example: sending emails via a legitimate corporate domain; using pretexts relating to financial hardship and job security; and basing phishing emails on personal information scraped from social media. While we acknowledge that threat actors may use some or all of these methods in the real world, the fact is that organizations doing this to their own employees risk backlashes, loss of trust, and erosion of company culture that outweighs any potential benefits.
  • Punishing users who 'fail' phishing tests, whether that's by enforcing extra-dull mandatory training, 'naming and shaming,' or applying disciplinary measures. This can make users resentful, and less likely to engage with phishing training and other security efforts in future
  • Focusing on failure rather than success - more on this later, as it's critical to how we run phishing simulations internally at Sophos

Phriend or phoe?

These issues, and a few others, have come up time and again in debates over the effectiveness of phishing training.

Supporters of phishing training laud its supposed effectiveness, especially when combined with awareness training, at boosting learning retention rates and return on investment. Some argue that simulated phishing helps train users' instincts, forcing them to question whether emails may be malicious; others point to risk reduction, cost-effectiveness (versus the cost of an actual breach), and the development of a 'security-first' culture.

On the other hand, in addition to the pitfalls we mentioned earlier, detractors argue that phishing simulations may not reduce risk at all, or only by a minuscule amount.

Two recent studies - one in 2021, the other in 2025 - involving thousands of participants suggest that phishing simulations have only a very small effect on the probability of falling for a phishing lure. The 2025 study also concludes that annual awareness training makes no significant difference to susceptibility, and that employees who fail phishing simulations tend not to engage with training materials afterwards. Both studies also indicate that, counter-intuitively, training could actually make users more susceptible to phishing attempts - possibly due to fatigue or overconfidence (i.e., assuming that their organization has invested in cybersecurity, users may become less vigilant).

We should note that there are some caveats to the 2025 study; as noted by Ross Lazerowitz of Mirage Security, it only focuses on click rates, uses participants from a single organization in one industry, and doesn't take training design and quality into account.

Nevertheless, it seems clear that, if incorrectly designed and executed, phishing simulations may at best have no effect at all, in which case they're a waste of time, effort, and money. Worst-case: they may even be counter-productive, however well-intentioned.

So what's the solution? Are phishing simulations, like many other things in cybersecurity, a Hard Problem that's just too difficult to solve?

It's obvious that we can't ignore the problem, because phishing is usually the most prevalent entry point for cyber attacks: attackers know it works, it's cheap and easy (and will only become cheaper and easier with generative AI), and it's often the simplest way for them to gain a foothold. Would your organization be better off investing in additional or better email controls, then, or more e-learning packages and awareness training? Is phake phishing phutile?

Our phishing philosophy

At Sophos, we don't think so. We've been running internal phishing simulations ourselves since 2019, based on scenarios we review annually and taking into account shifts and trends that we've observed in the threat landscape. We're under no illusion that these simulations will by themselves eliminate the risk of a successful attack (see here for an illustration).

But we still think phishing exercises are worthwhile, and here's why: we don't measure by failure. We measure by success.

Counting clicks misses tricks

Click rates (the percentage of recipients who clicked a fake phishing link) are not particularly informative or helpful, because we know, from many, many incidents and decades of experience, that it only takes one user to click a link, enter some credentials, or run a script, and let an attacker in.

Yes, organizations still need to continually bolster their resilience to human error, but measuring by failure frames users as a problem, not an asset. It also provides a false sense of security. You're very unlikely to ever get down to a 0% click rate, or even anything approaching that - and you certainly won't be able to sustain it over time. So going from a 30% click rate down to 20%, for example, or even to 10%, might sound impressive, and moves the needle a bit, but it doesn't really mean much. Crucially, it also doesn't help you prepare for a genuine attack.

Instead, our key metric at Sophos is how many users report phishing emails. We very deliberately make this easy for users to do, with a simple, large, highly visible Report button on our email client that automatically forwards the email in question to our security teams. (A reminder to Sophos Email users: this feature is available to you too. Users can also use the Outlook add-in to send suspicious emails to SophosLabs for analysis.) This avoids putting the onus on users to forward emails themselves, or take screenshots, or download the message and send it as an attachment to the security team along with a preamble.
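As a minimal sketch of what that metric looks like in practice (the event records and field names below are hypothetical illustration data, not Sophos tooling), a campaign's report rate can be computed like this:

```python
# Minimal sketch: computing the report rate for a phishing simulation.
# The event records below are hypothetical; real data would come from
# your email platform or simulation tool.

def report_rate(events):
    """Fraction of targeted users who reported the simulated email."""
    targeted = {e["user"] for e in events if e["action"] == "delivered"}
    reporters = {e["user"] for e in events if e["action"] == "reported"}
    return len(reporters & targeted) / len(targeted) if targeted else 0.0

events = [
    {"user": "alice", "action": "delivered"},
    {"user": "alice", "action": "reported"},
    {"user": "bob",   "action": "delivered"},
    {"user": "bob",   "action": "clicked"},
    {"user": "carol", "action": "delivered"},
]

print(f"Report rate: {report_rate(events):.0%}")  # → Report rate: 33%
```

Tracking this number across campaigns (rather than the click rate) rewards the behavior you actually want to see more of.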

Reporting for duty

One of the reasons why we emphasize reports over clicks is that, in a real-world attack, the number of users who clicked a link is largely irrelevant, at least early on in an incident. It's something you won't know until someone reports the email, or until you spot suspicious activity elsewhere and investigate - by which time, of course, the attacker is already in.

In contrast, reports are a highly tailored source of actionable threat intelligence. Phishing emails are very rarely customized for and targeted at one individual. Even if they are unique, the infrastructure behind them (C2, hosting, etc.) typically isn't.

So when a user reports a suspicious email, a security team can immediately triage it and follow an established, ideally automated, process that involves detonating attachments, looking up IOCs, hunting for visits to credential-harvesting sites, threat hunting across the estate, blocking malicious domains, and clawing back emails sent to other users.
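The steps above can be sketched as a pipeline. Everything here is a hypothetical stand-in - the helpers are stubs for what would, in practice, be sandbox, threat-intel, SIEM, and email-platform integrations:

```python
# Sketch of an automated triage pipeline for a reported phishing email.
# All helpers and data are hypothetical stubs, not a real product API.
import re

def extract_iocs(body):
    """Pull candidate URLs out of the message body (naive regex)."""
    return set(re.findall(r"https?://[\w./-]+", body))

def triage(email):
    """Walk the response steps in order, returning an action log."""
    actions = []
    for attachment in email.get("attachments", []):
        actions.append(("detonate", attachment))   # sandbox detonation (stub)
    for ioc in sorted(extract_iocs(email["body"])):
        actions.append(("lookup", ioc))            # threat-intel enrichment (stub)
        actions.append(("block", ioc))             # block at gateway/proxy (stub)
    actions.append(("claw_back", email["id"]))     # pull copies from other inboxes
    return actions

email = {"id": "msg-123", "body": "Reset: http://evil.example/login",
         "attachments": ["invoice.xlsm"]}
for step in triage(email):
    print(step)
```

The value of encoding the process this way is that each stub marks a step that can be automated independently, and the same walkthrough works for both simulations and real incidents.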

We also measure report speed, because that's critical too. A phishing attack is a race against time. If an attacker persuades a user to enter credentials, download a file, or execute a script, they can quickly obtain a foothold in the environment. The faster a user reports a phishing email, the more time a security team has to evict an attacker, and the less time the attacker has to dig in.
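A hedged sketch of that measurement, using hypothetical delivery and report timestamps:

```python
# Sketch: measuring report speed (time from delivery to report) per user.
# The timestamps are hypothetical illustration data.
from datetime import datetime
from statistics import median

delivered = {"alice": datetime(2025, 10, 31, 9, 0),
             "bob":   datetime(2025, 10, 31, 9, 0)}
reported  = {"alice": datetime(2025, 10, 31, 9, 4),
             "bob":   datetime(2025, 10, 31, 9, 30)}

# Minutes from delivery to report, for users who reported
times_to_report = [
    (reported[u] - delivered[u]).total_seconds() / 60
    for u in reported if u in delivered
]

print(f"Median time to report: {median(times_to_report):.0f} minutes")
```

Watching the median (rather than the mean) keeps one very slow reporter from masking an otherwise fast-reporting campaign.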

Changing the vibes

Of course we don't want users to click links in phishing emails, but we also don't want them to simply delete the email, or move it to their junk/spam folder, or ignore it entirely - because that puts us behind the pace. We can't respond to a threat if we don't know about it.

Report rates therefore change the traditional dynamic when it comes to phishing simulations. Rather than congratulate people for something they didn't do (i.e., click the link, engage with the email) - or, worse, punish them for clicking a link - we congratulate them for something they did do. It's a case of providing an incentive to take a positive action, rather than a negative or neutral one - and of empowering users to be a crucial line of defense, instead of treating them as the "weakest link."

So phishing simulations become less about trying to catch users out and trick them into clicking links, and more about training them to remember to hit the Report button. The way we like to frame it is this: we're not trying to deceive our staff. We're playing a game, to help refresh their memory and reinforce the reporting mindset.

Of course, some users inevitably do click links in phishing simulations. When they do, they're not reprimanded at Sophos. Instead, they receive an email that informs them of what happened, reminds them of the procedure for reporting suspicious emails, and points them towards internal educational resources on phishing. Users who do report a simulated phishing attempt receive an identical email, just with a different subject line, to maintain positivity and reinforce prompt and proactive reporting.

Phoolproof phake phishing

We've put together some tips for organizations to consider when planning phishing simulations:

  • Find the right cadence. Weekly is too much, yearly not enough. You may have to experiment with different intervals to find the sweet spot between user fatigue and lack of retention. Soliciting feedback from users and your security teams, and comparing metrics across simulation campaigns, will help
  • Pretexts should be realistic, but not unreasonable. We all know that, in the real world, threat actors often lack any kind of ethical restraint and think nothing of using cruel and manipulative lures. But we are not threat actors. Pretexts should incorporate common social engineering tactics (appeals to urgency, incentives, etc) without the risk of alienating staff and losing their trust. Basing lures on hardships or job security, for example, can cause users to disengage with company culture and security initiatives - a bad outcome, when users are such an important asset
  • The goal is to reinforce positive behaviors, not to catch people out. Crafting a campaign that deceives a record number of users is not a win. The objectives are to empower users to be a critical line of defense, and to remind them what to do when they spot something suspicious. Well-designed phishing awareness training, in conjunction with simulations, can help users know what to look out for
  • Prioritize reports (and reporting speed) over clicks. Measure by, and incentivize, success rather than failure. As per the above, the aim is to get users to react by reporting - because in a genuine attack, it provides actionable threat intelligence, and the best chance of intercepting a threat actor early. Counting clicks (and punishing users who click) can be counter-productive, even if well-intentioned, because it frames users as a point of weakness, can demotivate them, and provides little useful information
  • Look beyond the click. Of course, you might still record clicks anyway - but remember to also record what happens next, because there's more nuance to the issue. As Ross Lazerowitz says, other behaviors are equally critical. Did someone click, and then report after realizing something was off? Perhaps they didn't click, but later visited the website in a browser out of curiosity? If the link in the email led to a simulated credential-harvesting site, did they enter any credentials? (Anecdotally, some pentesters have reported that some users will deliberately enter false credentials, sometimes in the form of insulting messages aimed at the 'threat actor.' Strictly speaking, these could be counted as 'failures,' even though those users clearly recognized the phishing attempt - but only a slight behavioral nudge was needed, to get them to report the email in the right way.)
  • Doing nothing helps no one. You might think that users not engaging with a phishing email is a good result, because it means they didn't click. But that won't help in the event of a real attack, because you won't know about the threat until someone does click, and you subsequently get an indication of suspicious activity somewhere else in your estate. At that point, you're playing catch-up while the threat actor has got a foothold; the opportunity to be a step ahead has already gone
  • Complement simulations with novel forms of learning. At Sophos, we try to be transparent about discussing phishing attacks targeting us. A recent article and public root cause analysis (RCA) covered one such case - but before we reported it publicly, we held an internal webinar, open to the whole company, in which our security team discussed the incident, why it happened, and what we did in response. We saw extensive, positive engagement with this webinar, and a lot of interest from users in learning how the attack worked and how we stopped it - making it a great complement to our phishing simulations and regular awareness training. It also helps to remove some of the stigma around phishing. Nobody wants to fall for a phishing email, simulated or not - but accepting that people do, and learning from the consequences without attaching blame, is a valuable exercise
  • Not just for end users. Phishing simulations can be useful in themselves, but they also provide security teams with an opportunity to hone their response procedures. From the first successful report, you can walk through what you'd do if the phishing email was real: detonate attachments, find and block infrastructure, categorize and block IOCs, claw back emails from other users' inboxes, and so on. It can also be a good chance to test automation of these steps
  • Include everyone (within reason). Phishing simulations should ideally involve all teams, departments, and seniority levels, or a randomized sample of users across an organization. This helps provide a representative picture
  • Build systems tolerant to human failure. More a strategy than a goal, but it's important to recognize that any security control reliant on human behavior is inherently weak. In any modern fast-paced environment, we inevitably spend a lot of time in our "System 1" mode of thinking. Control design should accept that, not fight it. We've come a long way here - zero-day, zero-click drive-by downloads are exceptionally rare. Phishing-resistant multi-factor authentication (MFA) exists and, arguably, is on the cusp of mass adoption. Time spent managing phishing assessments is time that could potentially be spent tightening up more robust and reliable technical controls
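Several of the tips above come down to recording richer outcomes than a binary click. A minimal sketch of classifying per-user outcomes, with hypothetical event names:

```python
# Sketch: classifying per-user simulation outcomes beyond the click.
# Event names and the example data are hypothetical illustrations.

def classify(actions):
    """Map a user's recorded actions to a single outcome label."""
    if "entered_credentials" in actions:
        return "entered credentials"        # worst case; needs follow-up
    if "clicked" in actions and "reported" in actions:
        return "clicked then reported"      # good recovery behavior
    if "reported" in actions:
        return "reported"                   # the outcome we want
    if "clicked" in actions:
        return "clicked"
    return "no action"                      # deleted/ignored: no intel

users = {
    "alice": ["reported"],
    "bob":   ["clicked", "reported"],
    "carol": ["clicked", "entered_credentials"],
    "dave":  [],
}
for user, actions in users.items():
    print(user, "->", classify(actions))
```

Even this coarse bucketing separates "clicked then reported" (a nudge-worthy near-miss) from "no action" (silence that leaves the security team blind), which a raw click rate collapses together.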

Conclusion

Phishing isn't going away. In fact, generative AI may make it even more of a threat, because attackers can use it to overcome the traditional telltale signs: spelling mistakes, grammatical errors, and shoddy formatting. So it's increasingly important that we use every tool at our disposal to defend against it.

Of course, AI is available for defenders too, but we also recognize that humans are one of our most powerful assets when it comes to defense. People pick up on cues and context, both consciously and unconsciously, and can often feel when something is not quite right about an email.

If designed, executed, used, and measured in the right way, regular phishing simulations can help to develop those skills even further, provide you with a ready-made intelligence pipeline in the event of an attack, and enhance your security culture - all of which increases the chances of you disrupting the next real attempt.
