False Face: Unit 42 Demonstrates the Alarming Ease of Synthetic Identity Creation

Executive Summary

Evidence suggests that North Korean IT workers are using real-time deepfake technology to infiltrate organizations through remote work positions, which poses significant security, legal and compliance risks. The detection strategies we outline in this report provide security and HR teams with practical guidance to strengthen their hiring processes against this threat.

In our demonstration, a researcher with no prior experience needed just over an hour to create a real-time deepfake using readily available tools and cheap consumer hardware. This ease allows adversaries to create convincing synthetic identities, operate undetected and potentially generate revenue for sanctioned regimes.

Current deepfake technology still has detectable limitations, but those limitations are rapidly diminishing. Organizations must implement layered defenses that combine enhanced verification procedures, technical controls and ongoing monitoring throughout the employee lifecycle.

Palo Alto Networks customers are better protected from the threats discussed in this article through Unit 42 Insider Threat Services.

Organizations can engage the Unit 42 Incident Response team for specific assistance with this threat and others.

Interviewing North Koreans

Talent acquisition and cybersecurity communities have recently reported a surge in candidates employing real-time deepfakes during job interviews. Investigators have documented cases where interviewees presented synthetic video feeds, using identical virtual backgrounds across different candidate profiles as shown in Figure 1.

The Pragmatic Engineer newsletter documented a case study involving a Polish AI company that encountered two separate deepfake candidates. Interviewers suspected that the same individual operated both personas, particularly because the operator showed notably increased confidence during the second technical interview, having already experienced the interview format and questions.

Unit 42's analysis of indicators shared in the Pragmatic Engineer report aligns with known tactics, techniques and procedures (TTPs) attributed to Democratic People's Republic of Korea (DPRK) IT worker operations. This represents a logical evolution of their established fraudulent work infiltration scheme.

North Korean threat actors have consistently demonstrated a significant interest in identity manipulation techniques. In our 2023 investigation, we reported on their efforts to create synthetic identities supported by compromised personal information, making them more difficult to detect.

We found further evidence when we analyzed the breach of Cutout.pro, an AI image manipulation service, which revealed scores of email addresses likely tied to DPRK IT worker operations. Figure 2 shows such image manipulation in face-swapped headshots.

DPRK IT workers incrementally advanced their infiltration methodology by implementing real-time deepfake technology. This offers two key operational advantages. First, it allows a single operator to interview for the same position multiple times using different synthetic personas. Second, it helps operatives avoid being identified and added to security bulletins and wanted notices like the one shown in Figure 3. Combined, these advantages give DPRK IT workers enhanced operational security and reduced detectability.

Zero to Passable

A single researcher with no image manipulation experience, limited deepfake knowledge and a five-year-old computer created a synthetic identity for job interviews in 70 minutes. The ease of creation demonstrates how dangerously accessible this technology has become to threat actors.

Using only an AI search engine, a passable internet connection and an RTX 3070 graphics processing unit purchased in late 2020, they produced the sample shown in Figure 4.

Figure 4. A demonstration of a real-time deepfake on cheap, widely available hardware.

They used only single images generated by thispersonnotexist[.]org, a service that permits use of its generated faces for personal and commercial purposes, together with free deepfake tools. With these, they generated multiple identities, as shown in Figure 5.

Figure 5. A demonstration of identity switching.

A simple wardrobe and background change could be all it takes to return to a hiring manager as a brand-new candidate. In fact, the most time-consuming part of the entire process was setting up a virtual camera feed that video conferencing software could capture.

With a little more time and a much more powerful graphics processing unit, a higher resolution version of the same process produced more convincing results, as shown in Figure 6.

Figure 6. A higher quality deepfake using a more resource-intensive technique.

Detection Opportunities

There are several technical shortcomings in real-time deepfake systems that create detection opportunities:

  1. Temporal consistency issues: Rapid head movements caused noticeable artifacts as the tracking system struggled to maintain accurate landmark positioning
  2. Occlusion handling: When the operator's hand passed over their face, the deepfake system failed to properly reconstruct the partially obscured face
  3. Lighting adaptation: Sudden changes in lighting conditions revealed inconsistencies in the rendering, particularly around the edges of the face
  4. Audio-visual synchronization: Slight delays between lip movements and speech were detectable under careful observation

At this time, there are several ways to make life difficult for the would-be deepfakers. The most effective method appears to be passing a hand over a face, which disrupts facial landmark tracking.
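
To make that concrete, here is one way such tracking dropouts and jumps could be surfaced programmatically when reviewing a live or recorded interview feed. This is a minimal sketch, not Unit 42 tooling: it assumes OpenCV plus the MediaPipe Face Mesh tracker, and the jitter threshold is a placeholder that would need tuning against real interview footage.

    # Minimal sketch: flag frames where face-landmark tracking drops out or
    # jumps, assuming OpenCV and the MediaPipe Solutions API
    # (pip install opencv-python mediapipe). The threshold is a placeholder.
    import cv2
    import mediapipe as mp
    import numpy as np

    face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
    cap = cv2.VideoCapture(0)  # or the path to a recorded interview
    prev = None
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not results.multi_face_landmarks:
            print("tracking lost - possible occlusion/reconstruction failure")
            prev = None
            continue
        # Normalized (x, y) positions of all tracked facial landmarks
        pts = np.array([(lm.x, lm.y)
                        for lm in results.multi_face_landmarks[0].landmark])
        if prev is not None:
            jitter = np.abs(pts - prev).mean()
            if jitter > 0.02:  # illustrative threshold, tune on real footage
                print(f"abnormal landmark jump: {jitter:.4f}")
        prev = pts
    cap.release()

A tracker that loses the face outright during a hand pass, or whose landmarks jump discontinuously during rapid head movement, maps directly to the failure modes listed above.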

Govind Mittal et al. of New York University suggest additional strategies:

  • Rapid head movements
  • Exaggerated facial expressions
  • Sudden lighting changes

These techniques exploit weaknesses in real-time deepfake systems, causing visible artifacts that help humans detect fakes with high accuracy.

We'll demonstrate three more options to add to an interviewer's repertoire in Figures 7a-c.

Figure 7a. The "ear-to-shoulder."

Figure 7b. The "nose show."

Figure 7c. The "sky-or-ground."

Mitigation Strategies

The DPRK IT worker campaign demands close collaboration between human resources (HR) and information security teams. Working together affords an organization more detection opportunities across the entire hiring and employment lifecycle.

Disclaimer: The following are mitigation strategies meant to offer insights and suggestions for the reader's consideration. They are being provided for informational purposes only and should not be considered legal advice. Prior to implementing any of these practices, consult with your own legal counsel to confirm alignment with applicable laws.

For HR Teams:

  • Ask candidates to turn their cameras on for interviews, including initial consultations
    • Record these sessions (with proper consent) for potential forensic analysis
  • Implement a comprehensive identity verification workflow that includes:
    • Document authenticity verification using automated forensic tools that check for security features, tampering indicators and consistency of information across submitted documents
    • ID verification with integrated liveness detection that requires candidates to present their physical ID while performing specific real-time actions
    • Matching between ID documents and interviewee, ensuring the person interviewing matches their purported identification
  • Train recruiters and technical interviewing teams to identify suspicious patterns in video interviews such as unnatural eye movement, lighting inconsistencies and audio-visual synchronization issues
  • Have interviewers get comfortable with asking candidates to perform movements challenging for deepfake software (e.g., profile turns, hand gestures near the face or rapid head movements)

For Security Teams:

  • Secure the hiring pipeline by recording job application IP addresses and checking that they don't originate from anonymizing infrastructure or suspicious geographic regions (a sketch of such enrichment checks follows this list)
  • Enrich provided phone numbers to check whether they belong to Voice over Internet Protocol (VoIP) carriers, particularly those commonly associated with identity concealment
  • Maintain information sharing agreements with partner companies and participate in applicable Information Sharing and Analysis Centers (ISACs) to stay current on the latest synthetic identity techniques
  • Identify and block software applications that enable virtual webcam installation on corporate-managed devices when there is no legitimate business justification for their use (a second sketch below shows one way to sweep for these drivers)
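
Below is a minimal sketch of the IP and phone enrichment described above. It is an illustration under stated assumptions, not Unit 42 tooling: it uses the open-source requests and phonenumbers packages, the Tor Project's public bulk exit list as one anonymization feed, and placeholder example inputs.

    # Minimal sketch of hiring-pipeline enrichment, assuming the open-source
    # `requests` and `phonenumbers` packages. The Tor Project's public bulk
    # exit list is one real anonymization feed; commercial VPN/proxy and
    # carrier intelligence would extend both checks.
    import requests
    import phonenumbers
    from phonenumbers import PhoneNumberType, number_type

    TOR_EXIT_LIST = "https://check.torproject.org/torbulkexitlist"

    def tor_exit_nodes() -> set[str]:
        # Fetch the published set of Tor exit-node IP addresses
        resp = requests.get(TOR_EXIT_LIST, timeout=10)
        resp.raise_for_status()
        return set(resp.text.split())

    def ip_is_anonymized(ip: str, exits: set[str]) -> bool:
        # Flag applicant IPs that match known anonymizing infrastructure
        return ip in exits

    def phone_is_voip(raw: str, default_region: str = "US") -> bool:
        # Flag numbers whose type resolves to VoIP, or that fail to parse
        try:
            num = phonenumbers.parse(raw, default_region)
        except phonenumbers.NumberParseException:
            return True  # unparseable numbers also warrant manual review
        return number_type(num) == PhoneNumberType.VOIP

    exits = tor_exit_nodes()
    print(ip_is_anonymized("203.0.113.7", exits))  # RFC 5737 documentation IP
    print(phone_is_voip("+1 650-555-0123"))        # fictional example number

In practice, commercial VPN/proxy intelligence and carrier-lookup services would be layered on top of these free feeds.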
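For the virtual webcam control, the following sketch shows how a Linux endpoint could be swept for common virtual-camera drivers; v4l2loopback is the usual backend there, while Windows and macOS would require platform-specific driver enumeration. The name markers are illustrative heuristics, not a complete blocklist.

    # Minimal sketch for Linux endpoints: sweep video devices for
    # v4l2loopback-style virtual cameras, the usual backend for virtual
    # webcam software on Linux. The name markers below are illustrative
    # heuristics, not a complete blocklist.
    from pathlib import Path

    SUSPECT_MARKERS = ("loopback", "dummy", "virtual", "obs")

    def find_virtual_cameras() -> list[str]:
        hits = []
        for dev in sorted(Path("/sys/class/video4linux").glob("video*")):
            name_file = dev / "name"
            if not name_file.exists():
                continue
            name = name_file.read_text().strip()
            if any(marker in name.lower() for marker in SUSPECT_MARKERS):
                hits.append(f"/dev/{dev.name} ({name})")
        return hits

    for cam in find_virtual_cameras():
        print(f"possible virtual camera: {cam}")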

Additional Indicators:

  • Monitor for abnormal network access patterns post-hiring, particularly connections to anonymizing services or unauthorized data transfers
  • Deploy multi-factor authentication methods that require physical possession of devices, making identity impersonation more difficult

Organizational Policy Considerations:

  • Develop clear protocols for handling suspected synthetic identity cases, including escalation procedures and evidence preservation methods
  • Create a security awareness program that educates all employees involved in hiring about synthetic identity red flags
  • Establish technical controls that limit access for new employees until additional verification milestones are reached
  • Document verification failures and share appropriate technical indicators with industry partners and relevant government agencies

By implementing these layered detection and mitigation strategies, organizations can significantly reduce the risk of synthetic identity infiltration while maintaining an efficient hiring process for legitimate candidates.

Conclusion

The synthetic identity threat typified by North Korean IT worker operations represents an evolving challenge for organizations worldwide. Our research demonstrates the alarming accessibility of synthetic identity creation, with technical barriers continuing to fall as AI-generated faces, document forgery tools and real-time voice and video manipulation technologies become more sophisticated and readily available.

As synthetic identity technologies continue to evolve, organizations must implement layered defense strategies that combine:

  • Enhanced verification procedures
  • AI-assisted countermeasures for deepfake detection
  • Continuous verification throughout employment

This approach significantly improves an organization's ability to detect and mitigate not only North Korean IT worker activity but also a variety of similar threats.

No single detection method will guarantee protection against synthetic identity threats, but a layered defense strategy significantly improves your organization's ability to identify and mitigate these risks. By combining HR best practices with security controls, you can maintain an efficient hiring process while protecting against the sophisticated tactics employed by North Korean IT workers and similar threat actors.

Palo Alto Networks customers can better protect against the threats discussed above through Unit 42 Insider Threat Services to holistically improve detection and remediation.

If you think you might have been compromised or have an urgent matter, get in touch with the Unit 42 Incident Response team or call:

  • North America: Toll Free: +1 (866) 486-4842 (866.4.UNIT42)
  • UK: +44.20.3743.3660
  • Europe and Middle East: +31.20.299.3130
  • Asia: +65.6983.8730
  • Japan: +81.50.1790.0200
  • Australia: +61.2.4062.7950
  • India: 00080005045107

Palo Alto Networks has shared these findings with our fellow Cyber Threat Alliance (CTA) members. CTA members use this intelligence to rapidly deploy protections to their customers and to systematically disrupt malicious cyber actors. Learn more about the Cyber Threat Alliance.
