The backlash to Sydney Sweeney's latest ad campaign. Fierce debates over Sabrina Carpenter's album cover art. The internet's fury at creators making Goldfish crackers from scratch or smashing Nintendo Switch consoles for views. This year, social media has mastered one thing: getting us riled up.
Oxford's choice of "rage bait" as the 2025 Word of the Year reflects just how much of our culture now unfolds online - and how strongly anger, anxiety and moral judgment drive engagement. When negative comments count as "quality signals," algorithms gladly push provocation to the top of our feeds.
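To see why that incentive structure favors provocation, consider a toy ranking formula - a hypothetical sketch, not any real platform's algorithm - in which every interaction counts as a positive signal, so angry comments boost a post just as much as approving ones. The function name and weights below are illustrative assumptions.

```python
# Toy sketch of engagement-based ranking (hypothetical; not any platform's
# actual code). Every interaction counts upward as a "quality signal,"
# regardless of whether the reaction is delight or fury.

def engagement_score(likes: int, comments: int, shares: int) -> float:
    # Assumed weights: comments (even hostile ones) are weighted heaviest
    # because arguments keep users on the post longest.
    return 1.0 * likes + 3.0 * comments + 2.0 * shares

calm_post = engagement_score(likes=500, comments=20, shares=30)    # 620.0
rage_bait = engagement_score(likes=50, comments=400, shares=150)   # 1550.0

# The post that enrages people outranks the one that quietly pleases them.
print(rage_bait > calm_post)  # True
```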
To explore why this happens, we turned to two experts in UC San Diego's School of Social Sciences. Andrew Kehler, a professor of linguistics, studies how people interpret language. Piotr Winkielman, a professor of psychology, researches emotion, social cognition and the unconscious processes that shape our reactions to what we see online. Together, they break down why "rage bait" caught on - and what it reveals about how we communicate today.
Kehler: Languages evolve to allow speakers to convey their messages efficiently, and common words evolve to be short. Once people repeatedly encounter a phenomenon like posts designed to provoke anger, the concept becomes common enough that it "demands" a short name.
A term like "rage bait' then allows people to categorize the experience and talk about it as a shared, recognizable pattern. Naming it also signals membership in an online, social media-literate community. As the term spreads from younger users to the general population, it becomes part of the common parlance - likely one reason Oxford selected it.
Kehler: Yes. New words typically succeed when they're built from familiar pieces. "Rage bait" is a compound noun: two known words put together in a way that lets people infer the meaning easily. That partial transparency makes it easy to adopt and reuse, unlike an arbitrary new word like "blickation," which doesn't similarly wear its meaning on its sleeve.
We've seen this pattern before: for example, with the spread of the "-gate" suffix. It began with the Watergate scandal that brought down the Nixon administration, but speakers now use "-gate" to mean "scandal" even if younger speakers may not know its history. Language often evolves by tweaking what's already there, and "rage bait" fits that pathway perfectly.
Winkielman: Humans react more strongly to negative information - psychologists call this "negativity bias." Bad is stronger than good: negative events require action, whereas positive ones don't.
Rage bait taps into this. It engages moral emotions, which are powerful drivers of commenting, sharing and arguing. Positive stories spread too, but they don't create the same level of intense back-and-forth. Negative content pulls us in because it feels like something we need to respond to.
Winkielman: It shows people understand the incentives of the system. Platforms reward engagement, and outrage produces comments and views, so creators engineer content to provoke strong reactions.
A clear example is my son's favorite YouTuber, PlainRock, whose most popular videos involve buying Nintendo Switch consoles just to smash them. When a new model came out, he bought one and destroyed it outside the store while people waited in line. It's meant to provoke and it works.
This also reflects a broader loneliness epidemic. When people feel less tied to a community, it's easier to provoke, insult or attack strangers online - people you'll never meet and don't feel accountable to.
Winkielman: It highlights how much of our emotional life now unfolds online. Rage bait exploits basic psychological processes - reacting more strongly to bad than good, wanting to act when something feels morally wrong - and pairs them with low-effort digital engagement.
Online moral outrage often has a different function than real moral outrage. Genuine moral anger can motivate meaningful, costly action - and historically has, as with civil rights activism. These behaviors take time, effort and sometimes personal risk. But on social media, the threshold for expressing outrage is much lower. Posting or retweeting feels like action, yet it rarely leads to the kinds of real-world behavior that moral outrage can produce. This is what's known as "slacktivism."
Social incentives play a role too. People sometimes express outrage to show they are "good" (virtue signaling), or to show they belong to a political or social group (coalition signaling), even if their private attitudes are more mixed.
Winkielman: There isn't one perfect fix, because different interventions work differently for different people. But research on misinformation gives us a few ideas. Some approaches ask people to "think before you forward" or to pause and assess whether something is accurate. These can help, but they don't always work - especially if someone is very partisan, because reflecting can sometimes make them more likely to share the content.
Another angle is to look at who is behind the information. Troll farms, political groups or businesses may be trying to polarize you or use your engagement for their own purposes. Most people don't like feeling like "puppets," so realizing someone is manipulating your emotions can be motivating.
There's also a mental health aspect. You don't want your emotions to be controlled by algorithms or strangers online. Being aware that platforms push whatever triggers emotional reactions - not necessarily what's true - can help people step back and feel a bit more autonomous.