October is Cybersecurity Awareness Month, a nationwide effort to highlight digital security threats and equip individuals with the tools they need to protect themselves online. One of the biggest concerns is the growing use of artificial intelligence in cyberattacks, especially the rise of deepfakes, which are AI-generated videos, images or audio clips that convincingly mimic real people. Cybercriminals increasingly leverage deepfakes to deceive victims into sharing personal information, clicking on malicious links or sending money and sensitive data.
In this conversation with UCLA Newsroom, Chris Mattmann, UCLA's chief data and artificial intelligence officer, discusses how AI is transforming cybersecurity and offers practical advice for Bruins interested in AI-driven defense strategies.
How are AI and deepfakes changing the cybersecurity threat landscape in higher education?
AI and deepfakes add new complexities to cybersecurity threats, especially on campus. A major concern is reputational harm: AI-generated videos or emails can make it seem like someone said or did something they never did. These impersonations can be very convincing, often deceiving individuals into unwittingly providing personal information, which is then used in identity theft, direct deposit fraud or other cyber extortion tactics.

On campus, deepfakes can also infiltrate daily academic activities. For example, a student might use an AI-created avatar to appear in a Zoom class and earn participation credit without actually attending. This kind of impersonation undermines academic integrity and underscores the importance of digital literacy and detection skills. As co-chair of the Advisory Committee on AI in Teaching and Learning, I work to help faculty and students recognize these threats and raise awareness of how AI can be misused in academic settings.
What are the most common signs of a phishing attempt, and how is AI involved in both creating and detecting these scams?
Phishing attempts often rely on creating a sense of urgency and pressuring recipients to click links or open attachments without thinking. Spear phishing takes this further by researching the target and crafting personalized messages that feel genuine. AI increases the danger by automating and scaling these attacks across platforms such as email, SMS, phone and social media. It can generate thousands of customized messages, increasing the chances of success. Conversely, AI is also used to detect phishing by analyzing patterns, language and sender behavior in real time.
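To make the detection side concrete, here is a minimal sketch of a heuristic phishing scorer in Python. The keyword list, regex, weights and function name are illustrative assumptions, not UCLA's actual filtering tools; production systems typically combine many such signals with trained language models rather than fixed rules.

```python
import re

# Hypothetical urgency cues of the kind phishing filters often flag.
URGENCY_PHRASES = [
    "act now", "immediately", "account suspended",
    "verify your password", "urgent", "final notice",
]

# Links to raw IP addresses or URL shorteners hide the true destination.
SUSPICIOUS_LINK = re.compile(
    r"https?://\S*(?:\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}|bit\.ly|tinyurl)",
    re.IGNORECASE,
)

def phishing_score(subject: str, body: str,
                   sender_domain: str, claimed_org_domain: str) -> float:
    """Return a 0-1 heuristic score; higher means more phishing-like."""
    text = f"{subject} {body}".lower()
    score = 0.0
    # Urgency language is a classic social-engineering cue.
    score += 0.2 * sum(phrase in text for phrase in URGENCY_PHRASES)
    # Obfuscated links suggest the sender is hiding the destination.
    if SUSPICIOUS_LINK.search(body):
        score += 0.3
    # A sender domain that doesn't match the claimed organization.
    if sender_domain != claimed_org_domain:
        score += 0.3
    return min(score, 1.0)

if __name__ == "__main__":
    print(phishing_score(
        subject="URGENT: account suspended",
        body="Verify your password now: http://bit.ly/xyz",
        sender_domain="mail.example.ru",
        claimed_org_domain="ucla.edu",
    ))  # -> 1.0, strongly phishing-like
```

Real filters weigh dozens of signals, including sender history and message patterns learned over time, but the layered-scoring idea is the same.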
In what ways can AI-powered social engineering tactics manipulate individuals more effectively than traditional methods?
AI makes social engineering more realistic, scalable and multimodal. Deepfakes and AI-generated messages mimic trusted contacts with alarming accuracy, exploiting visual and emotional cues. Attackers use AI to analyze social media, replicate identities and launch coordinated campaigns across email, SMS and social platforms. This realism leads targets to trust and engage with malicious content, especially when it appears to come from familiar sources such as colleagues, friends or university leaders, whose apparent authority adds to the sense of urgency. AI also allows attackers to target multiple vectors simultaneously, increasing the likelihood of success.
Are AI-driven DDoS attacks becoming more sophisticated, and how can universities prepare their infrastructure to defend against them?
Yes. Distributed denial of service (DDoS) attacks, once simple floods of traffic, are now powered by AI, making them faster, stealthier and more difficult to trace. These attacks can overwhelm servers with thousands of requests per second, disrupting services and damaging reputations. Traditional defenses rely on detecting suspicious traffic patterns or geographic origins, but AI can simulate traffic from multiple locations worldwide, bypassing filters and hiding the true source. To protect against these threats, universities need layered, AI-enhanced infrastructure, including firewalls, traffic-scrubbing tools and real-time anomaly detection systems. Advanced AI-powered networks can proactively detect and block malicious traffic before it ever reaches members of the Bruin community. As part of the Bruin Connect and Secure program, UCLA has begun a network transformation that will implement many of these advanced capabilities. AI can also help identify new attack methods by learning from past incidents and adjusting defenses accordingly.
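As a simplified illustration of real-time anomaly detection, here is a minimal Python sketch that flags sudden spikes in request rate against a rolling baseline. The class name, window size and threshold are hypothetical choices for the example, not part of UCLA's Bruin Connect and Secure tooling.

```python
from collections import deque
from statistics import mean, stdev

class RateAnomalyDetector:
    """Flag request-rate spikes using a rolling z-score baseline.

    A toy stand-in for the real-time anomaly detection layer described
    above; the window and threshold are illustrative, not tuned values.
    """

    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # recent requests-per-second samples
        self.threshold = threshold

    def observe(self, requests_per_second: float) -> bool:
        """Record a sample; return True if it deviates sharply from baseline."""
        anomalous = False
        if len(self.history) >= 2:
            mu, sigma = mean(self.history), stdev(self.history)
            # A sample many standard deviations above the baseline
            # looks like the start of a flood.
            if sigma > 0 and (requests_per_second - mu) / sigma > self.threshold:
                anomalous = True
        self.history.append(requests_per_second)
        return anomalous

# Example: steady traffic around 100 req/s, then a sudden flood.
detector = RateAnomalyDetector(window=30, threshold=3.0)
for rate in [100, 102, 98, 101, 99, 100, 5000]:
    if detector.observe(rate):
        print(f"Possible DDoS spike: {rate} req/s")
```

Real deployments correlate many signals, such as source diversity, packet characteristics and geographic spread, rather than a single rate, but the rolling-baseline principle is the same.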
What role does cybersecurity education play, and how can students and staff get involved in AI-focused defense strategies?
Cybersecurity education is foundational. AI literacy raises awareness of the risks that come with using these tools, helps people exercise sound judgment, creates a shared vocabulary and empowers individuals to spot and respond to threats. Training helps people understand what to do, whether that's reporting phishing or responding during a breach.

Bruins are encouraged to visit the UCLA Cybersecurity Awareness Month website for information and resources to promote online safety throughout the year. While there, you can complete a quiz for a chance to win two tickets to the UCLA vs. USC football game on Saturday, Nov. 29. Additionally, UCLA Digital and Technology Solutions will host tabling events on Wednesday, Oct. 1, at McClure Stage (Bruin Plaza) and in Dickson Court North from 11 a.m. to 2 p.m. There will be a prize wheel with giveaways, and staff will be available to help UCLA students, faculty and staff register for a free 1Password account, which lets users manage their passwords securely.