Chairwoman Jessica Rosenworcel made the official proposal Wednesday that the FCC investigate and seek comment on such a rule. (The agency already ruled AI-generated robocalls illegal, but that ruling turned on their failure to conform to automated-call rules rather than on the AI itself.)
“Consumers have a right to know when AI tools are being used in the political ads they see, and I hope [the commissioners] swiftly act on this issue,” she said in a statement accompanying the announcement.
Under the envisioned framework, both candidate and issue ads would be required to include an on-air and filed disclosure that AI-generated content was used. This would apply to “cable operators, satellite TV and radio providers,” but not to streamers or, say, YouTube, which the FCC lacks the statutory authority to regulate. Of course, there would first have to be an agreed-upon definition of AI-generated content.
The proposal is of the fact-finding type, the first public step in developing a new regulation. If adopted, the FCC would solicit comment on whether the rule is necessary in the first place, how AI-generated content should be defined and so on. Unlike a rulemaking document, a proposal like this can be voted on at any time, so it’s conceivable, though unlikely, that the other commissioners could give it the thumbs-up before close of business Wednesday.
The FCC document describes “a clear public interest obligation for Commission licensees, regulatees, and permittees to protect the public from false, misleading, or deceptive programming and to promote an informed public.”
Certainly it seems intuitively true that most people would want some kind of indication when imagery, audio or anything else in a campaign ad is AI-generated; such a regulation would also likely deter low-effort fakery and help build a basis for going after bad actors like the shady company behind the fake Biden robocalls.