02/19/2026 | Press release | Distributed by Public on 02/19/2026 10:28
You see it all over your social feeds: videos of adorable babies saying oddly grown-up things, public figures making wildly uncharacteristic statements, nature photos too far-fetched to be true. In the era of AI, seeing isn't always believing.
Deepfakes threaten trust in news, elections, brands and everyday interactions, leading us to question what's real. Determining what's authentic or manipulated is the subject of Microsoft's "Media Integrity and Authentication: Status, Directions, and Futures" report, published today. The study evaluates today's authentication methods to better understand their limitations, explore potential ways to strengthen them and help people make informed decisions about the online content they consume.
The authors conclude that no single solution can prevent digital deception on its own. Methods such as provenance, watermarking and digital fingerprinting can offer useful information like who created the content, what tools were used and whether it has been altered.
People can be deceived by media when they lack information about its origin and history, or when that information is low-quality or misleading. The goal of the report is to provide a roadmap for delivering more high-assurance provenance information the public can rely on, according to Jessica Young, director of science and technology policy in the Office of the Chief Scientific Officer at Microsoft.
Helping people recognize higher-quality content indicators is increasingly important as deepfakes become more disruptive and as provenance legislation in various countries, including the U.S., introduces even more ways to help people authenticate content later this year.
Media provenance has been evolving for years, with Microsoft pioneering the technology in 2019 and cofounding the Coalition for Content Provenance and Authenticity (C2PA) in 2021 to standardize media authenticity.
Young, co-chair of the study, explains more about what it all means:
What prompted the study?
"The motivation was two-fold," Young says. "The first is the recognition of the moment we're in right now. We know generative AI capabilities are becoming increasingly powerful. It's becoming more challenging to distinguish between authentic content - like content that was captured by a camera versus sophisticated deepfakes - and as a result, there's a huge uptick right now in interests and requirements to use those technologies that exist to disclose and verify if content was generated or manipulated by AI.
"The moment has been building, and we have a desire to help ensure that these technologies ultimately drive more benefit than harm, based on how they're used and understood."
Young adds that the paper is meant to inform the greater media integrity and authentication ecosystem, helping creators, technologists, policymakers and others understand what is and isn't possible currently and how we can build on it for the future.
What did the study accomplish, and what did you learn?
The report outlines a path to increase confidence in the authenticity of media. The authors propose a direction they refer to as "high-confidence authentication" to mitigate the weaknesses of various media integrity methods.
Linking C2PA provenance to an imperceptible watermark can bring relatively high confidence about media's provenance, she says.
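To illustrate why that pairing helps, here is a minimal sketch of the idea: an imperceptible watermark carries a short identifier that lets a verifier recover the signed provenance manifest and re-check the asset, even if visible metadata was stripped. The store, IDs and manifest fields below are hypothetical stand-ins, not the C2PA implementation.

```python
import hashlib

# Hypothetical manifest store: watermark ID -> provenance manifest.
# In a real C2PA deployment this would be a trusted, signed repository.
MANIFEST_STORE = {
    "wm-7f3a": {
        "claim_generator": "ExampleCamera/1.0",  # assumed tool name
        "asset_sha256": hashlib.sha256(b"original image bytes").hexdigest(),
    }
}

def recover_manifest(watermark_id: str):
    """Look up the provenance manifest for an embedded watermark ID."""
    return MANIFEST_STORE.get(watermark_id)

def verify_asset(asset_bytes: bytes, watermark_id: str) -> bool:
    """Check the asset's hash against the manifest recovered via watermark."""
    manifest = recover_manifest(watermark_id)
    if manifest is None:
        return False  # no provenance available for this watermark
    return hashlib.sha256(asset_bytes).hexdigest() == manifest["asset_sha256"]
```

In this sketch, the watermark survives in the pixels even when metadata is removed, so the manifest can still be recovered and checked - which is what makes the combination stronger than either signal alone.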
She notes the report has a lot of caveats too, such as how provenance from traditional offline devices like cameras, which often lack critical security features, can be less trustworthy because it's easier to alter.
It isn't possible to prevent every attack or stop certain platforms from stripping provenance signals, so the challenge, Young says, "is figuring out how to surface the most reliable indicators with strong security built in - and, when necessary, reinforce them with additional methods that allow recovery or support manual digital-forensics work."
How is this study different from others?
Young says their study investigated two "underexplored" lines of thought for the three methods of verification. They define the first as sociotechnical attacks, where provenance information or the media itself could be manipulated to make authentic content appear synthetic or fake content seem real during the validation process.
"Imagine you see an authentic image of a global sporting event with 80% of the crowd cheering for the home team," she says. "The away team engages in an online argument claiming, 'Hey, no, that's all a fake crowd.' Someone could make one small, insignificant edit to a person in the corner of the picture and current methods would deem it AI generated - even if the crowd size was real. These methods that are supposed to support authenticity are now reinforcing a fake narrative, instead of the real one.
"So, knowing how different validators work, even through really subtle modifications, you could manipulate the results the public would see to try to deceive them about content," she says. The second key topic builds on the C2PA's work to make content credentials more durable, while also addressing reliability. This is where the research is especially novel, Young says. "We looked at how provenance information can be added and maintained across different environments - from high-security systems to less secure, offline devices - and what that means for reliability."
Why is verifying digital media so difficult?
Authenticating media is complex because there's not a one-size-fits-all solution, Young says.
"You have different formats that have different limitations or trade-offs for the signals they can contain," she explains. "Whether it's images, audio, video - not to mention text, which has a whole different array of challenges - and how strong the solutions can be applied there."
Young says there are different requirements and opinions about what level of transparency is appropriate as well. In some cases, users might not want any of their personal information included in the digital provenance of a piece of media, while in others, creators or artists might want attribution and opt in to having their information included.
"So, you have different requirements or even considerations about what goes into that provenance information," she says. "And then, similar to the field of security, no solution is foolproof. So, all the methods are complementary, but each has inherent limitations."
Where do we go from here?
Young says that as AI-made or edited content becomes more commonplace, the use of secure provenance of authentic content is becoming increasingly important. Publishers, public figures, governments and businesses have good reason to certify the authenticity of the content they share. If a news outlet shoots photos of an event, for example, tying secure provenance information to those images can help show their audience the content is reliable.
"Government bodies also have an interest in the public knowing that their formal documents or media are reliable information about public interest matters," Young says.
She adds that as AI modifications to media become "increasingly common" for legitimate purposes, secure provenance can provide important context to help prevent an average reader or viewer from simply dismissing that content as fake or deceptive.
"For the industry and for regulators, we note how important continued user research in this area is to drive towards more consistent and helpful display of this information to the public - to make sure it's actually meaningful and useful in practice," Young says.
"We have a limited set of technologies that can assist us, and we don't want them to backfire from being misunderstood or improperly used."
Learn more on the Microsoft Research Blog.
Lead image: Mininyx Doodle/Getty Images
Samantha Kubota reports on everything AI and innovation for Microsoft Signal, with a recent focus on how AI agents are reshaping everyday work, Microsoft's research breakthroughs and the responsible use of emerging technologies. Prior to Microsoft, Kubota was a journalist at NBC News. Follow her on LinkedIn and X.