NAB - National Association of Broadcasters

09/20/2024 | News release

Why AI Rules for Broadcasters Miss the Target

Artificial intelligence (AI) is reshaping the political landscape, influencing not only how campaigns are conducted but also how voters access and process information about them. Its rise brings serious risks, including the spread of deepfakes: AI-generated images, audio or video that distort reality. These deceptive tactics threaten to undermine public trust in elections, and NAB supports government efforts to curtail them.

The Federal Communications Commission (FCC) is considering new rules that would require broadcasters to insert a disclaimer on political ads that use AI in any form. Unfortunately, because the FCC's regulatory authority extends only to broadcasters and not to digital platforms, this rule risks doing more harm than good. While the rule is intended to improve transparency, it instead risks confusing audiences and driving political ads away from trusted local stations and onto social media and other digital platforms, where misinformation runs rampant.

Deepfakes and misleading AI-generated ads are not prevalent on broadcast TV or radio. These deceptive practices thrive instead on digital platforms, where content can be shared quickly and with little recourse. The FCC's proposal places unnecessary burdens on broadcasters while the government ignores the platforms posing the most acute threat.

The Proposed Disclaimer Would Create Consumer Confusion

The FCC's proposed rule would require broadcasters to add a disclaimer to political ads that use AI, reading:

"[The following] or [This] message contains information generated in whole or in part by artificial intelligence."

This generic disclaimer provides no meaningful insight for audiences. AI is often used for routine tasks like improving sound or video quality, which have nothing to do with deception. Requiring a blanket disclaimer for every use of AI would likely lead the public to view all ads as suspect, making genuinely misleading content harder to identify.

Pushing Ads to Digital Platforms Makes Things Worse

Because the rule would apply only to broadcasters, political advertisers may move their ads to digital platforms where no such rules exist. This shift would make the problem worse, pushing more political ads into spaces where misinformation spreads like wildfire. In addition, viewers might see an ad on TV with the AI disclaimer, then switch to a streaming service and see the same ad without it, creating confusion rather than clarity.

Congress Should Take the Lead

To truly tackle deepfakes and AI-driven misinformation, we need a solution that covers all platforms, not just broadcast TV and radio. Congress is the right body to create consistent rules that hold those who create and share misleading content accountable across both digital and broadcast platforms. Rather than the FCC shoehorning rules that burden only broadcasters into a legal framework that doesn't support the effort, Congress can develop fair and effective standards that apply to everyone and benefit the American public.

For decades, broadcasters have gone to great lengths to deliver factual, vetted information to their viewers and listeners. The trust we have built with our audiences is paramount, and no station wants to jeopardize it. NAB supports efforts to prevent misleading AI-generated content in elections, but the FCC's current proposal falls short: it risks confusing viewers and driving more political ads into the unregulated digital space. Congress, not the FCC, should lead in creating comprehensive rules that address the root of the problem and ensure that misleading content is tackled across all platforms.