
Public Citizen Comment: FCC Proposal on AI-Generated Content in Political Advertisements


September 19, 2024
Federal Communications Commission
Washington, D.C. 20554

Re: Comments In the Matter of Disclosure and Transparency of Artificial Intelligence-Generated Content in Political Advertisements, MB Docket No. 24-211

To the Federal Communications Commission:

Public Citizen is a national consumer rights and pro-democracy organization with more than 500,000 members and supporters. We submit these comments to express strong support for the Commission’s proposed rule on disclosure and transparency of artificial intelligence-generated content in political advertisements, and to suggest some refinements.

In these comments, we make the following key points:

  1. Misleading artificial intelligence-generated audio and video content is proliferating in the United States and around the world and poses a serious threat to democratic integrity.
  2. The harms from deceptive deepfakes can be substantially mitigated or avoided through disclosures that inform viewers and listeners that they are seeing or hearing AI-generated content. A Federal Communications Commission disclosure requirement for AI-generated content is thus strongly justified by 47 USC §303(r)’s public interest standard and easily satisfies a cost-benefit calculus.
  3. The threats from misleading artificial intelligence-generated audio and video content apply equally – or perhaps even more seriously – to cable operators, DBS providers and SDARS licensees, and the Commission’s rule should extend to these entities.
  4. Disclosure should be calibrated so as not to apply to all or virtually all political advertisements. Recognizing potential ambiguity in the Commission’s proposed definition of “artificially generated content,” we propose a modest revision to the definition to clarify that the disclosure requirement would apply to content that is completely or predominantly generated by AI or significantly edited by AI tools.
  5. The disclosure obligation proposed by the Commission is completely compatible with the First Amendment and can withstand even heightened levels of scrutiny.
  6. The disclosure obligation is compatible with the Communications Act Section 315(a)’s anti-censorship provision. It is important that the Commission make a formal interpretation to this effect, because, currently, the anti-censorship provision is working to preempt state action on AI and broadcasters, as well as to impede broadcasters from requiring common sense disclosures even if they know a political ad contains deceptive AI-generated content.

Deepfakes’ Threat to Broadcast and Cable Integrity and the FCC’s Authority to Act

Extraordinary advances in artificial intelligence now provide everyone — from individuals to political candidates to outside organizations to trillion-dollar companies — with the means to produce campaign ads and other communications with computer-generated fake images, audio or video of candidates that convincingly appear to be real. These ads can fraudulently misrepresent what candidates say or do and thereby influence the outcome of an election.  Already, deepfake audios can be almost impossible to detect; images are extremely convincing; and high-quality videos appear real to a casual viewer – and the technology is improving extremely rapidly.

The risks that AI-generated political ads pose to the information ecosystem are plain. First, deepfakes heighten the risk that false information or impressions will affect election outcomes. A late-breaking deepfake – for example, falsely showing a candidate making a racist statement, or slurring their words, or accepting a bribe – aired days before an election could easily sway the election’s outcome. Even if aired earlier in the election cycle, political deepfakes can leave lasting, fraudulent impressions. Deepfakes are extremely difficult to rebut, because they require a victim to persuade people not to believe what they saw or heard with their own eyes and ears. Stated differently, deepfakes appear to be “direct evidence” rather than “hearsay.” This is qualitatively different from campaign ads simply making false or misleading statements about a candidate, because the risk deepfakes pose is categorically different from the risk posed by untrue or misleading claims.

Second, in addition to the direct fraud they perpetrate, the proliferation of AI-generated content offers the prospect of a “liar’s dividend,” in which a candidate legitimately caught doing something reprehensible claims that authentic media is AI generated and fake.[1] It will also be possible for candidates to generate their own deepfakes to cover up their actions. Conflicting images and audios of what a candidate said or did could be used to lead a skeptical public to doubt the authenticity of genuine audio or video evidence.

Third, the stakes of an unregulated and undisclosed Wild West of AI-generated campaign communications are far more consequential than the impact on candidates; such an environment will erode the public’s confidence in the integrity of the broadcast ecosystem itself. Congress intended FCC-licensed stations to serve the public interest and, to that end, gave them special rights and responsibilities with respect to election communications. But if voters cannot discern reality from verisimilitude because the use of deepfakes is not disclosed, they will increasingly lose confidence in the ability of broadcast and cable TV and radio to deliver trustworthy election information to the public.

Political deepfakes are both growing in sophistication and becoming more common. Deepfakes have already influenced elections around the world. A Wired magazine database highlights examples across the globe.[2] Deepfakes are said to have impacted election results in Slovakia,[3] damaged election integrity in Pakistan[4] and spread disinformation in Argentina[5] and Mexico.[6]

In the United States, political deepfakes – of varying levels of sophistication – are also proliferating. Governor Ron DeSantis’s campaign published a deepfake image showing former President Donald Trump embracing and kissing Dr. Anthony Fauci,[7] a deepfake appeared to show Chicago mayoral candidate Paul Vallas condoning police brutality[8] and a Super PAC posted deepfake videos falsely depicting then-congressional candidate Mark Walker saying his opponent is more qualified than him.[9]

A political consultant used a deepfake version of President Biden’s voice on robocalls discouraging New Hampshire voters from turning out for the state’s primary[10] (and has now been indicted on felony charges).[11] In July 2024, Elon Musk posted a deepfake video of Vice President Harris,[12] in violation of X/Twitter’s own deepfake policy.[13] Pop star Taylor Swift has criticized deepfakes falsely depicting her as endorsing Donald Trump,[14] and deepfake celebrity endorsements of political candidates are mushrooming.[15]

Deepfakes pose special concern for communities of color and non-English speakers. There is a long and disturbing history of mass communications targeted at Black and Spanish-speaking voters that aim to discourage them from voting, using both disinformation and implicit threats. Generative AI gives these malicious actors a new tool for targeted, fraudulent communications: for example, advertisements containing misleading voting information, apparently spoken by credible figures, aired on Spanish-language TV and radio, where such tactics may draw less scrutiny than they would on English-language programming.[16]

In short, political deepfakes are a here-and-now problem, poised to become much more severe as quickly evolving generative AI technologies make deepfakes easier to produce at ever higher levels of quality. And, in the absence of regulation, deepfakes could become normalized in the public’s mind, making it much more difficult to address the problem later.

The good news is that these harms are easily curable. We agree with the Commission’s conclusion in ¶14 of the Notice of Proposed Rulemaking (NPRM): If viewers and listeners are given clear and prominent disclosures that they are viewing images or listening to audio that was generated using AI technologies, they can evaluate the content appropriately and not be tricked into believing they are viewing or hearing authentic content. Generative AI technologies offer many creative benefits and are becoming increasingly enmeshed in video and audio software. The specific harm from AI-generated imagery and audio is that people may believe they are looking at authentic content when in fact it is AI-generated. Robust disclosures can largely cure this problem.

Against this backdrop, it is imperative that the FCC act. In NPRM ¶27, the Commission asks if it has the authority to adopt the proposed on-air disclosure and political file requirements for AI-generated content in political ads. We agree with the Commission’s conclusion that 47 USC §303(r)’s public interest standard provides ample authority for the proposal. The proposal will go far to ameliorate the very likely harms that will occur in the absence of regulation and serve the Commission’s mission of protecting the integrity of the broadcast system and ensuring transparency in campaign advertisements.

Similarly, in ¶22, the Commission proposes that the obligation for on-air disclosure and political file requirements for AI-generated content in political ads be applied to cable operators, DBS providers and SDARS licensees. We agree with this conclusion, which follows logically from the existing application of political programming and sponsorship requirements to these entities. Indeed, because these entities often engage in more narrowcasting than TV and radio broadcasters, it is arguably more important that the disclosure and record-keeping requirements apply to them; narrowcasting raises the prospect of more targeted fraudulent and deceptive deepfakes, which may be more effective and more likely to evade detection, especially when targeting non-English-speaking audiences.

In ¶36, the Commission asks about costs and benefits. For the reasons stated by the Commission, we agree that the costs would be small. None of the elements of the proposed rule – a simple request requiring no additional data gathering by the advertiser, an entry into the political file and a straightforward, pre-established disclosure – should entail more than very modest costs. By contrast, for the reasons stated by the Commission and elaborated in this comment, we believe the benefits would be substantial and consequential.

In ¶37, the Commission asks about impacts on digital equity. For the reasons explained above about potential AI-generated advertising targeted at communities of color and especially non-English speaking audiences, we believe the proposal would meaningfully advance digital equity and inclusion.

Definition and Scope of Disclosure

The Commission defines artificial intelligence in NPRM ¶11, relying on the definition provided in President Biden’s AI Executive Order, which itself draws from a prior statutory definition. Recognizing that no precise definition of AI is possible, we believe this definition is appropriate.

In ¶12, the Commission proposes to define “AI-generated content” as “an image, audio, or video that has been generated using computational technology or other machine-based system that depicts an individual’s appearance, speech, or conduct, or an event, circumstance, or situation, including, in particular, AI-generated voices that sound like human voices, and AI-generated actors that appear to be human actors.”

The definition of “AI-generated content” is extremely important in the context of the rule, because it defines the scope of political advertisements for which a disclosure must be made. While it is vital that the Commission mandate disclosures of AI-generated content, the benefits of those disclosures will be lost if the disclosure requirement is overbroad. As AI editing tools become commonplace, we may soon reach a position where almost all ads could be said to contain AI-generated content.

While we believe the Commission’s proposed definition could fairly be read to avoid this result, we recommend a refinement to eliminate ambiguities that could lead to an overbroad reading: modifying the definition of “AI-generated content” to cover only content that has been primarily generated or significantly edited by AI technologies. We propose specific language further below.

Examining the policies of major social media platforms that have already been forced to address this issue provides some useful examples of well-developed, experience-based, generative AI labeling policies.

TikTok: TikTok requires the labeling of videos that are “completely generated or significantly edited by AI.”[17] TikTok elaborates on this standard as follows:

We consider content that’s significantly edited by AI as that which uses real images/video as source material, but has been modified by AI beyond minor corrections or enhancements, including synthetic images/video in which:

  • The primary subjects are portrayed doing something they didn’t do, e.g. dancing.
  • The primary subjects are portrayed saying something they didn’t say, e.g. by AI voice cloning; or
  • The appearance of the primary subject(s) has been substantially altered, such that the original subject(s) is no longer recognizable, e.g. with an AI face-swap.

Twitter/X: Twitter/X’s policy on generative AI is included in its overall policy on misleading media. X requires labeling or removal of material “that is significantly and deceptively altered, manipulated, or fabricated.”[18] X elaborates on these standards by listing relevant factors to determine if media meets these criteria:

  • Whether media have been substantially edited or post-processed in a manner that fundamentally alters their composition, sequence, timing, or framing and distorts their meaning;
  • Whether any visual or auditory information (such as new video frames, overdubbed audio, or modified subtitles) has been added, edited, or removed that fundamentally changes the understanding, meaning, or context of the media;
  • Whether media have been created, edited, or post-processed with enhancements or use of filters that fundamentally changes the understanding, meaning, or context of the content; and
  • Whether media depicting a real person have been fabricated or simulated, especially through use of artificial intelligence algorithms.

Meta/Facebook/Instagram: Meta requires labeling of AI-generated content.[19] It has recently updated its policy to distinguish between content that is generated with AI and content that was edited using AI tools. The change reflected the company’s experience that the use of modest AI retouching tools would lead content to be labeled as “Made with AI,” which created user confusion and dissatisfaction due to perceived over-labeling.

Meta’s recently revised policy requires prominent labeling only for content that was generated by AI. When content is modified or edited by AI tools, the information is still available for users, but not prominently displayed:

For content that we detect was only modified or edited by AI tools, we are moving the “AI info” label to the post’s menu. We will still display the “AI info” label for content we detect was generated by an AI tool and share whether the content is labeled because of industry-shared signals or because someone self-disclosed.

YouTube: YouTube requires labeling of content that was “meaningfully altered or synthetically generated when it seems realistic.”[20] This standard is further elaborated:

To help keep viewers informed about the content they’re viewing, we require creators to disclose content that is meaningfully altered or synthetically generated when it seems realistic.

Creators must disclose content that:

  • Makes a real person appear to say or do something they didn’t do
  • Alters footage of a real event or place
  • Generates a realistic-looking scene that didn’t actually occur.

This could include content that is fully or partially altered or created using audio, video or image creation or editing tools.

Altogether, the lessons from the platforms’ AI disclosure policies are:

  1. Avoid overbroad labeling from use of AI editing tools.
  2. Require labeling based on a standard built around concepts such as content “completely generated” by AI and content “significantly edited or altered” by AI.
  3. Elaborate the overarching principle with more specific sub-principles, such as showing a person saying or doing something they did not say or do.

Taking these lessons into account, we propose the following definition of “Artificial Intelligence-Generated Content” for purposes of the rule:

“Artificial Intelligence-Generated Content is defined for purposes of this section as an image, audio, or video that has been completely or primarily generated or significantly edited using computational technology or other machine-based systems that depict an individual’s appearance, speech, or conduct, or an event, circumstance, or situation, including an image, audio or video in which:

  • The primary subjects are portrayed doing something they didn’t do or in realistic-looking scenes that did not occur or which have been substantially altered;
  • The primary subjects are portrayed saying something they didn’t say;
  • The appearance of the primary subjects has been substantially altered in a realistic-looking manner (e.g., a person is realistically depicted wearing a T-shirt with a slogan that they did not wear).”

Implementation Issues

The Commission requests comment on a number of specific implementation issues.

In NPRM ¶16, the Commission asks about the timing of required disclosures. Disclosures should be required immediately prior to/at the start of a political ad. An upfront disclosure is necessary so that viewers and listeners have the appropriate context in which to assess AI-generated content. An after-the-fact disclosure does not empower viewers and listeners in real time to evaluate the content with knowledge that it was AI generated. An after-the-fact disclosure is also easier for viewers or listeners to miss or mentally tune out.

In ¶17, the Commission raises the scenario of a broadcaster being informed by a credible third party that an ad was generated with artificial intelligence where there was no previous disclosure by the advertiser. Where the credible third party’s information is compelling (for example, where a depicted person attests that they did not say or do the things the ad realistically shows them as saying or doing), the broadcaster should attach an AI disclosure. This disclosure should come at the beginning of the ad, for the same reasons that all such disclosures should be made at the start of ads. If the credible third party’s information raises legitimate questions but falls short of compelling evidence, the broadcaster should immediately follow up with the advertiser.

In ¶21, the Commission asks about syndicated programming. We recommend a multi-tiered response. Broadcasters should inform network and syndication partners of the broadcasters’ duty to include disclosures for AI-generated content in political ads. They should inquire of their partners whether any such ads are included in their programming, as they would inquire of direct advertisers. This inquiry should occur at the start of each television season or calendar year; but, more importantly, it should occur within a window near every primary and general election, say monthly during the 120 days before the election. Because each inquiry involves de minimis effort, frequent inquiries would not be burdensome.

Syndicated and network programming creates longer chains of responsibility, making it particularly important that broadcasters have robust systems in place to insert disclosures when they receive compelling information from credible third parties that an ad was generated with artificial intelligence without prior disclosure by the advertiser.

First Amendment Considerations

Public Citizen agrees with the Commission’s tentative conclusion that the First Amendment does not bar the imposition of disclosure requirements on broadcast and other licensed facilities to combat the improper use of AI in political advertising. See NPRM ¶ 29. As the NPRM explains, different levels of First Amendment scrutiny apply in different contexts, but all the various tests examine the nature of the governmental interests at stake and the means used to achieve them. We agree that the proposed rules would be consistent with the First Amendment even under heightened forms of scrutiny.

At the outset, the Supreme Court has recognized that the government has a legitimate interest in protecting the public from being misled and that disclosure requirements tailored to advance that interest can survive “exacting scrutiny.” Citizens United v. FEC, 558 U.S. 310, 366–67 (2010) (citing Buckley v. Valeo, 424 U.S. 1, 64 (1976), and McConnell v. FEC, 540 U.S. 93, 201 (2003)). In Citizens United, for instance, the Court upheld a requirement that televised electioneering communications “include a disclaimer” identifying the name and address of the person responsible for the communication and explaining that the advertisement was not funded by a candidate or candidate committee. Id. at 366. The Court explained that the requirement advanced the government’s interests in “provid[ing] the electorate with information,” “insur[ing] that the voters are fully informed about the person or group who is speaking,” and “making clear that the ads are not funded by a candidate or political party.” Id. at 368 (cleaned up). The Court rejected the argument that the disclosure requirement was underinclusive because it applied only to broadcast advertising and not to communications on other media. Id. at 368. And although the Court recognized that “[d]isclaimer and disclosure requirements may burden the ability to speak,” it observed that a disclosure requirement “impose[s] no ceiling on campaign-related activities” and “do[es] not prevent anyone from speaking.” Id. at 366.

The Court also applied “exacting scrutiny” to uphold a requirement that signatories on a referendum be disclosed. Doe v. Reed, 561 U.S. 186, 196 (2010). The Court explained that the state’s “interest in preserving the integrity of the electoral process is undoubtedly important,” id. at 197, and “not limited to combating fraud,” id. at 198. The Court also concluded that the disclosure requirement was a valid means of advancing that interest because it “can help cure inadequacies” in other methods used to protect against invalid signatures. Id. at 198. See also Nat’l Ass’n of Manuf. v. Taylor, 582 F.3d 1, 11 (D.C. Cir. 2009) (holding that law requiring disclosure of lobbying information survived strict scrutiny).

As the NPRM observes, the government has a vital interest in ensuring that broadcasters “assume responsibility for all material which is broadcast through their facilities” and “take reasonable measure to address any false, misleading, or deceptive matter.” NPRM ¶ 30. That is especially true for election-related advertising, where Congress has imposed important public-interest responsibilities on broadcast licensees. Id. In the context of political advertising, the government and the public have an especially strong interest in prohibiting the use of AI-generated content to disseminate “deceptive, misleading, or fraudulent information to voters.” Id. When a candidate makes a deceptive verbal representation about an opponent, it is possible to mitigate the impact of the misrepresentation by persuasively exposing it as untrue. See United States v. Alvarez, 567 U.S. 709, 727 (2012) (plurality opinion) (“The remedy for speech that is false is speech that is true.”). AI technology, however, allows advertising to be created that deceptively appears to the public to be authentic evidence of a claim for or against a candidate or an issue.

Absent a disclosure requirement, such advertising can manipulate public opinion illegitimately, precisely because it is far more difficult to expose through counter-speech. Cf. FTC v. Colgate-Palmolive Co., 380 U.S. 374, 386 (1965) (recognizing that a “representation to the public” that “a viewer is seeing” evidence of a claim “for himself” may be a “false” representation that influences consumer purchasing decisions). Informing the public that political advertising is AI-generated ensures that they have the information they need to correctly “evaluate the arguments to which they are being subjected.” First Nat’l Bank of Bos. v. Bellotti, 435 U.S. 765, 792 n.32 (1978). “[D]isclosure is a less restrictive alternative to more comprehensive regulations of speech,” Citizens United, 558 U.S. at 369, which in this case might otherwise be a ban on advertisements primarily generated or substantially edited by AI. The First Amendment does not prevent the Commission from requiring disclosures designed to avoid misleading political advertising on a medium charged with operating in the public interest. See FCC v. League of Women Voters of Cal., 468 U.S. 364, 378 (1984) (recognizing that broadcasters “bear the public trust”).

Content-Neutral Disclaimers and Section 315(a)

In NPRM footnote 54, the Commission tentatively concludes that content-neutral disclaimers of the sort proposed by the Commission are consistent with section 315(a) of the Communications Act and do not constitute censorship. The Bureau’s precedent and common sense both support this conclusion, and we agree with the Commission’s tentative conclusion. The proposed disclaimers do not prevent any ad from airing, and they do not discriminate based on viewpoint; they merely provide transparency to viewers and listeners and prevent fraud and deception that voters and negatively affected candidates are otherwise largely unable to counter.

The Section 315(a) issue is crucial because it underscores that existing FCC rules already shape how the use of artificial intelligence in political ads can be regulated. More than 20 state legislatures have passed legislation to regulate and require disclosure of political deepfakes.[21] Most states and commentators believe that Section 315(a) exercises a preemptive force against such state regulation, and most or all of the state laws exempt broadcasters from coverage. Broadcasters believe that Section 315(a) prevents them from requiring disclosures on deceptive AI-generated ads even if they are certain the ads were generated by AI.

Thus, clarifying that content-neutral disclaimer requirements are consistent with Section 315(a) is necessary and important independent of any other regulatory action the Commission takes related to the use of artificial intelligence in political ads. Such a clarification would free states to regulate the use of artificial intelligence in political ads consistently across all platforms, as they are now constrained from doing; and it would enable broadcasters to exercise common sense and require disclosures on content that they know is primarily AI-generated or meaningfully edited by AI.

If the Commission is not able to move expeditiously – before the 2024 election – to finalize this proposed rule, we encourage the Commission to issue a standalone interpretive finding that content-neutral disclaimers relating to the use of AI in political ads are consistent with section 315(a) of the Communications Act and do not constitute censorship.

Conclusion

We applaud the Commission for proactively taking up this issue and helping to protect democracy and reduce distrust of broadcast material. The challenges posed by AI-generated political ads are fast approaching for America and the world, and we urge the Commission to move as quickly as possible to refine and finalize its proposal.

Sincerely,

Robert Weissman
Co-president, Public Citizen
1600 20th St., NW
Washington, D.C.
(202) 588-1000

[1] Bryan McKenzie, “Is that real? Deepfakes could pose danger to free elections,” UVA Today, August 24, 2023, https://news.virginia.edu/content/real-deepfakes-could-pose-danger-free-elections. See also: Josh Goldstein and Andrew Lohn, “Deepfakes, Elections and Shrinking the Liar’s Dividend,” Brennan Center, January 23, 2024, https://www.brennancenter.org/our-work/research-reports/deepfakes-elections-and-shrinking-liars-dividend.

[2] The Wired AI Elections Project, May 30, 2024, https://www.wired.com/story/generative-ai-global-elections.

[3] Olivia Solon, “Trolls in Slovakian Election Tap AI Deepfakes to Spread Disinfo,” Bloomberg, September 29, 2023, https://www.bloomberg.com/news/articles/2023-09-29/trolls-in-slovakian-election-tap-ai-deepfakes-to-spread-disinfo; Morgan Meaker, “Slovakia’s Election Deepfakes Show AI is a Danger to Democracy,” Wired, October 3, 2023, https://www.wired.co.uk/article/slovakia-election-deepfakes.

[4] Nilofar Mughal, “Deepfakes, Internet Access Cuts Make Election Coverage Hard, Journalists Say,” VOA, February 22, 2024, https://www.voanews.com/a/deepfakes-internet-access-cuts-make-election-coverage-hard-journalists-say-/7498917.html.

[5] Jack Nicas and Lucía Cholakian Herrera, “Is Argentina the First AI Election?” New York Times, November 15, 2023, https://www.nytimes.com/2023/11/15/world/americas/argentina-election-ai-milei-massa.html.

[6] Emily Kohlman, “Deepfakes, Identity Politics, and Misinformation Campaigns Target the Mexican Election,” Blackbird.AI, https://blackbird.ai/blog/mexico-election-deepfakes-identity-politics-misinformation-disinformation.

[7] Nicholas Nehamas, “DeSantis campaign uses apparently fake images to attack Trump on Twitter,” New York Times, June 8, 2023, https://www.nytimes.com/2023/06/08/us/politics/desantis-deepfakes-trump-fauci.html.

[8] Megan Hickey, “Vallas campaign condemns deepfake posted to Twitter,” CBS News, February 27, 2023, https://www.cbsnews.com/chicago/news/vallas-campaign-deepfake-video.

[9] Danielle Battaglia, “‘Deepfake’ videos target Mark Walker in NC congressional campaign,” The News and Observer, February 29, 2024, https://www.newsobserver.com/news/politics-government/election/article286043906.html.

[10] Alex Seitz-Wald and Mike Memoli, “Fake Joe Biden robocall tells New Hampshire Democrats not to vote Tuesday,” NBC News, January 22, 2024, https://www.nbcnews.com/politics/2024-election/fake-joe-biden-robocall-tells-new-hampshire-democrats-not-vote-tuesday-rcna134984.

[11] Ross Ketschke, “Political consultant indicted for AI robocalls with fake Biden voice made to New Hampshire voters,” WMUR9, May 23, 2024, https://www.wmur.com/article/steve-kramer-ai-robocalls-biden-indictment-52224/60875422.

[12] Mirna Alsharif, Alexandra Marquez and Austin Mullen, “Elon Musk retweets altered Kamala Harris campaign ad,” NBC News, July 28, 2024, https://www.nbcnews.com/news/us-news/elon-musk-retweets-altered-kamala-harris-campaign-ad-rcna163985.

[13] “Letter to Elon Musk and X,” Public Citizen, July 29, 2024, https://www.citizen.org/article/letter-to-elon-musk.

[14] Taylor Swift, Instagram post, https://www.instagram.com/taylorswift/p/C_wtAOKOW1z/?hl=en.

[15] “Fake celebrity endorsements become latest weapon in misinformation wars, sowing confusion ahead of 2024 election,” CNN, August 22, 2024, https://www.cnn.com/2024/08/22/media/fake-celebrity-endorsements-social-media-2024-election-misinformation/.

[16] Astrid Galván, “First AI election renews battle against Spanish-language misinformation,” Axios, February 22, 2024, https://www.axios.com/2024/02/22/misinformation-generative-ai-deepfakes-spanish-language; “Comments on Public Citizen’s Petition,” UnidosUS, October 16, 2023, https://unidosus.org/wp-content/uploads/2023/10/unidosus_commentsonpubliccitizenspetitionforrulemakingonartificialintelligence_incampaignads_docket2023%E2%80%9313.pdf.

[17] “About AI-Generated Content,” TikTok, https://support.tiktok.com/en/using-tiktok/creating-videos/ai-generated-content.

[18] “Synthetic and Manipulated Media Policy,” X, https://help.x.com/en/rules-and-policies/manipulated-media.

[19] Monika Bickert, “Our Approach to Labeling AI-Generated Content and Manipulated Media,” Meta, https://about.fb.com/news/2024/04/metas-approach-to-labeling-ai-generated-content-and-manipulated-media.

[20] “Disclosing Use of Synthetic or Altered Content,” YouTube, https://support.google.com/youtube/answer/14328491?hl=en.

[21] “Tracker: State Legislation on Deepfakes in Elections,” Public Citizen, https://www.citizen.org/article/tracker-legislation-on-deepfakes-in-elections.