Federal Testimony: The Integral Role of AI Tools in Modern Political Discourse

Testimony Before the United States Senate Committee on Rules and Administration Hearing: AI and the Future of Our Elections
September 27, 2023

Thank you, Chairwoman Klobuchar, Ranking Member Fischer, and committee members for the chance to speak today on artificial intelligence’s influence on elections. I’m Neil Chilson, a senior fellow at the Center for Growth and Opportunity at Utah State University (CGO), a former Chief Technologist at the Federal Trade Commission, and a past advisor to acting FTC Chair Maureen K. Ohlhausen.

At CGO I study and promote the conditions that best enable technology to create widespread human prosperity and abundance.

Now is a crucial time for this mission, as the many technologies of artificial intelligence launch us toward prosperity and abundance.

Intelligence is our most vital resource. It is the tool with which humanity has overcome countless challenges. Every breakthrough technology, scientific discovery, business success, artwork, and literary work stems from human intellect. Often, this involves massive collaboration and coordination, generating intelligent outcomes beyond any one person’s capabilities.

Tools amplifying human intelligence, therefore, have vast potential. AI promises to help humanity create a healthier, more productive, more artistic, more interesting, and more enjoyable world.

While powerful tools can be misused, generative AI tools seem unlikely to materially affect election results because political speech already uses AI tools, and has for years. Generative AI will lower the cost, in time and money, of generating high-quality creative content. When costs decrease but demand persists, the result is abundance. We’ll see more speech, including political speech. But we shouldn’t expect a shift in the truth-to-lies ratio.

In fact, if a lie is halfway around the world before the truth gets its shoes on, generative AI is a rocket-powered running shoe. AI tools will enable real-time fact-checking, cheaper voter education, and messages tailored to voter needs. These tools can strengthen democracy.

I recommend this committee focus on the following as it explores how to respond to the use of computation and artificial intelligence in the creation of political speech.

  • First, defining “AI” is hard. It encompasses many commonly used technologies, and most political ads contain content “generated in whole or part by the use of artificial intelligence.” [1: REAL Political Advertisements Act, S.1596, 118th Cong. §4(a) (adding 52 U.S.C. 30120(e)(1)).]
  • Second, existing technology-neutral laws and emerging private norms can tackle potential election manipulation.
  • Third, generative AI’s threat to elections may be smaller than the threat posed by other technologies.

AI can improve political discourse. There are risks, but they are not novel. AI should not be used as a pretext for limiting political speech. Congress should build on existing expertise and capabilities, not create new regulatory regimes.

AI-generated political speech is already here—and has been, for years

Defining AI is hard. Experts have debated the term “artificial intelligence” for decades. The widely used AI textbook by Peter Norvig and Stuart Russell dedicates a whole section to defining AI. [2: Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 4th ed. (London: Pearson, 2021).] They describe how the scope of AI is fluid, and has included different types of software and algorithms over the decades. Indeed, John McCarthy, who coined the term, remarked that once an AI algorithm works, “we stop calling it AI.” [3: Moshe Vardi, “Artificial Intelligence: Past and Future,” Communications of the ACM 55, no. 1 (Jan. 2012): 5.]

Norvig and Russell identify four historical approaches to defining AI: acting humanly, thinking humanly, thinking rationally, and acting rationally. They emphasize “acting rationally” as the prevailing model, defining AI as the study and construction of agents that “do the right thing.” [4: Russell and Norvig, Artificial Intelligence, 3–4. The authors further note that this “right action” should align with human benefits; Vardi, “Artificial Intelligence: Past and Future,” 5.] In other words, after much discussion, Norvig and Russell advocate a functional approach: categorize something as AI or not based, not on its design or nature, but on its uses and actions.

So what counts as AI today? There is no definitive answer, but it is an expansive category. Norvig and Russell provide an incomplete catalog of example AI applications, including such varied technologies as robotic vehicles, machine translation, speech recognition, recommendation algorithms, image understanding, game playing, and medical diagnosis. [5: Russell and Norvig, Artificial Intelligence, 28–30.]

This hearing is focused on AI and the future of our elections. Let’s first walk through the present-day production and delivery of a digital ad for a political campaign, and explore where artificial intelligence intersects with ad creation.

Sarah, an ad account manager, has an idea for a new political ad. She fires up ChatGPT, asks it for ten variations on that idea, and further iterates on phrases for ad copy and script. [6: See ChatGPT, http://chat.openai.com; see also “ChatGPT-4 – Brainstorming,” W3 Schools, https://www.w3schools.com/gen_ai/chatgpt-4/chatgpt-4_brainstorming.php; “AI Advertising Script Generator,” Taskade, https://www.taskade.com/generate/marketing/advertising-script.] As she edits the script in Microsoft Word, text predictions speed her work. Once the script is finalized, she translates it into Spanish and Hmong using Google Translate. [7: “Translation AI,” Google Cloud | Cloud Translation, https://cloud.google.com/translate.]
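
To make this brainstorming step concrete, here is a minimal sketch of how ad-copy variations might be generated programmatically. It is illustrative only: it assumes the openai Python package (v1 or later) and an OPENAI_API_KEY environment variable, and the model name and prompt are placeholders rather than anything referenced in this testimony.

```python
# Illustrative only: asking a large language model for ad-copy variations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

idea = "a 30-second ad about lowering prescription drug costs"  # placeholder concept
response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any chat-capable model would do
    messages=[
        {"role": "system", "content": "You are a political ad copywriter."},
        {"role": "user", "content": f"Give me ten short variations on this ad concept: {idea}"},
    ],
)
print(response.choices[0].message.content)  # ten candidate directions to iterate on
```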

Her media team starts gathering assets. One team member easily searches a large database of stock photos, thanks to automatic tagging using a computer vision algorithm. [8: iStock Staff, “How iStock Search Helps You Find the Best Possible Image,” iStock, February 19, 2020, https://marketing.istockphoto.com/blog/how-istock-search-helps-you-find-the-best-possible-image/; “Visual Search Powered by Shutterstock.AI,” Shutterstock, https://www.shutterstock.com/developers/solutions/computer-vision.] Another heads out to get some establishing shots in still and video. Her camera’s physical light sensor has sophisticated embedded algorithms that adjust how the picture is turned into data. [9: “Understanding Color Interpolation,” Teledyne Flir, May 11, 2017, https://www.flir.com/support-center/iis/machine-vision/application-note/understanding-color-interpolation/; Ron Lowman, “How Cameras Use AI and Neural Network Image Processing,” Synopsys, June 29, 2022, https://www.synopsys.com/blogs/chip-design/how-cameras-use-ai-neural-network-image-processing.html.] Such algorithms even adapt depending on which lens she uses. [10: “In-camera Lens Corrections,” Canon, https://www.canon-europe.com/pro/infobank/in-camera-lens-corrections/.] This helps the camera capture the most information from the scene and correct for known lens defects. Facial detection and eye detection keep the photo subjects in focus and properly exposed. [11: Matthew Saville, “Autofocus Technology is Changing, Here’s Why It’s Not Just Bells & Whistles Anymore,” SLR Lounge, https://www.slrlounge.com/autofocus-technology-is-changing-heres-why-its-not-bells-whistles-anymore/. Some recent cameras even have animal and bird tracking; see “Everything You Wanted to Know about Autofocus (AF),” Canon, https://www.canon-europe.com/pro/infobank/autofocus/.] Electronic image stabilization algorithms constantly revise the raw data from the sensor to eliminate shaky video without expensive rigs. [12: TDK, “Electronic Image Stabilization,” https://invensense.tdk.com/solutions/electronic-image-stabilization/.] Apple’s recent iPhone 15 announcement takes computational photography to a whole new level: the iPhone 15 uses multiple custom neural nets to process images. It’s not an exaggeration to say that every photo taken by an iPhone 15 will be AI generated in part. [Jaron Schneider, “Apple Explains What the iPhone 15 Camera Can and Can’t Do – and Why,” PetaPixel, https://petapixel.com/2023/09/18/apple-explains-what-the-iphone-can-and-cant-do-and-why/.] Similarly, as she records audio, algorithms suppress background noise. [13: Christina Gwira, “10 Best AI Audio Tools in 2023 (For Podcasts, Music & More),” August 23, 2023, https://www.elegantthemes.com/blog/business/best-ai-audio-tools.]
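
For readers curious how “automatic tagging using a computer vision algorithm” works in principle, the sketch below runs an off-the-shelf image classifier and treats its top predictions as search tags. Stock-photo services use their own proprietary models; this is only a generic illustration, and the filename is hypothetical.

```python
# A generic illustration of auto-tagging: classify an image and keep the
# top predictions as searchable tags. Requires the transformers and
# Pillow packages; downloads a default pretrained vision model on first run.
from transformers import pipeline

tagger = pipeline("image-classification")

for pred in tagger("campaign_photo.jpg"):  # hypothetical file
    # Each label/score pair becomes a candidate tag for the photo database.
    print(f"{pred['label']}: {pred['score']:.2f}")
```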

The production team uses editing tools to refine and reassemble the content. Speech recognition tools sync transcripts to videos, enabling the editors to make text-based video edits. [14: “Text-Based Editing in Premiere Pro,” Adobe, September 13, 2023, https://helpx.adobe.com/premiere-pro/using/text-based-editing.html; Probhas Pokharel, “Video Text Editing: Easily and Accurately Edit Videos by Editing the Text,” Reduct, July 2021, https://reduct.video/blog/video-text-editing.] One editor uploads multiple raw video and audio feeds into software that can identify who is speaking at any point and produce a final cut with appropriate closeups and wide-angle shots. [15: NapSaga, “Revolutionize Your Video Podcast Editing with AutoPod AI,” Artificial Intelligence in Plain English, Medium, April 27, 2023, https://ai.plainenglish.io/revolutionize-your-video-podcast-editing-with-autopod-ai-d961c71df602.] A producer removes background noise using algorithms that isolate and strip out specific sounds, such as filler words, honking traffic, or cheering crowds, and then adds an instant voiceover to one of the ads using text-to-speech. [16: “Remove Background Noise from Video,” VEED.IO, https://www.veed.io/tools/remove-background-noise-from-video.] The photos and videos are sharpened, adjusted for contrast, cropped, and cleaned up. The editors automatically replace overhead power lines in one shot with blue sky and erase unplanned bystanders from a video. [17: “Remove Objects from Your Photos with Content-Aware Fill,” Adobe, July 25, 2023, https://helpx.adobe.com/photoshop/using/content-aware-fill.html.] They use similar tools to enhance actor appearances and to change the lighting in one scene. [18: Joe Fedewa, “How to Adjust Photo Lighting with Google Photos on Pixel,” How To Geek, October 22, 2023, https://www.howtogeek.com/696698/how-to-adjust-photo-lighting-with-google-photos/.] These types of tools generate new content to replace the original content. Some long predate the current wave of generative AI.
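
The text-based editing described above depends on time-stamped transcripts. As a rough sketch of that underlying step, the open-source openai-whisper package can transcribe footage into segments with start and end times, which is what lets an editor cut video by deleting words. Commercial editors use their own engines, and the filename here is hypothetical.

```python
# Rough sketch: transcribe footage into time-stamped segments, the raw
# material for text-based video editing. Requires openai-whisper and ffmpeg.
import whisper

model = whisper.load_model("base")
result = model.transcribe("raw_interview.mp4")  # hypothetical file

for seg in result["segments"]:
    # Each segment ties text to a time range, so deleting a sentence in
    # the transcript can be mapped back to a cut in the video.
    print(f"[{seg['start']:7.2f}s - {seg['end']:7.2f}s] {seg['text']}")
```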

In post-production, the team realizes that Hindi speakers could be a key audience. Rather than film an entirely new version of the ad, they upload the finished ad to HeyGen, which translates the audio seamlessly with one click and modifies the video so that the actors appear to be speaking Hindi in “an authentic speaking style.” [19: “Video Translate,” HeyGen Labs, https://labs.heygen.com/video-translate.]

But the ad’s creation is only part of the journey. The team has sophisticated plans for delivering the ad to viewers. Personalized advertising algorithms help deliver the ads to the desired audience. As viewers interact with the ad, these algorithms adjust the delivery and even the content of the ads, dynamically testing options and keeping whichever version drives the most email sign-ups. As the ad is uploaded to different platforms, the team uses filters and other automated tools to easily tweak the content for each platform.
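
That “dynamically testing options” step is, at bottom, a bandit-style optimization. The toy sketch below shows one simple version: an epsilon-greedy rule that mostly serves the best-performing variant while occasionally exploring the others. Real ad platforms use far more elaborate systems; all names here are hypothetical.

```python
# Toy epsilon-greedy ad optimizer: mostly serve the variant with the best
# sign-up rate so far, occasionally explore the alternatives.
import random

variants = ["ad_A", "ad_B", "ad_C"]   # hypothetical ad versions
shows = {v: 0 for v in variants}      # times each variant was served
signups = {v: 0 for v in variants}    # sign-ups credited to each variant

def choose_variant(epsilon: float = 0.1) -> str:
    """Serve the current best performer, exploring with probability epsilon."""
    if random.random() < epsilon or all(n == 0 for n in shows.values()):
        return random.choice(variants)
    return max(variants, key=lambda v: signups[v] / max(shows[v], 1))

def record_result(variant: str, signed_up: bool) -> None:
    """Update counts after a viewer sees the ad and does or doesn't sign up."""
    shows[variant] += 1
    if signed_up:
        signups[variant] += 1
```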

To sum up: AI is everywhere. Today’s content creation, editing, and distribution rely heavily on AI technologies. Likely, every senator has employed one or more of these technologies in their ad campaigns.

In other words, AI-generated ads already exist. There is no principled distinction between traditional technologies and newer “generative AI.” The boundary is so indistinct that lawyers advising clients will struggle to say definitively that an ad contains no AI-generated content.

Given this ambiguity, any law mandating AI-content disclosures could mean that all ads carry such notices, regardless of the tools used to create them and with no connection to their truthfulness. Such ubiquitous disclosures would be useless to viewers. Worse, they would displace speakers’ political speech, particularly in online ads, which are brief and small. Notably, because of similar challenges, the FEC took twelve years to adapt its existing disclaimer rules to paid online public communications, finishing only in December 2022. [20: “Commission adopts final rule on internet communications disclaimers and the definition of public communication,” Federal Election Commission | FEC Record Regulations, December 19, 2022, https://www.fec.gov/updates/commission-adopts-final-rule-internet-communications-disclaimers-and-definition-public-communication/.]

Technology-neutral laws and targeted private norms already exist

AI-generated political advertising is already here. So too are governance mechanisms. Existing technology-neutral laws and emerging norms in the private sector can help address concerns.

Any attempts to limit deception in political advertising should learn from the Federal Trade Commission’s experience defending consumers from fraudulent business practices, including deceptive commercial advertising. [21: “About the FTC,” Federal Trade Commission, https://www.ftc.gov/about-ftc.] The FTC applies a straightforward standard for deceptive commercial advertising: an ad is deceptive if it contains a “representation, omission or practice that is likely to mislead the consumer acting reasonably in the circumstances, to the consumer’s detriment.” [22: “Policy Statement on Deception,” Federal Trade Commission, October 14, 1983, 2, https://www.ftc.gov/legal-library/browse/ftc-policy-statement-deception.]

This standard is technology neutral, remaining consistent regardless of the tools used to create the deceptive content. The FTC evaluates outcomes rather than methods, keeping the standard relevant even as technologies evolve. This consistent standard has allowed the FTC to bring deception cases spanning from the internet’s infancy to a recent 2023 case about misleading AI claims. [23: Compare “Internet Site Agrees to Settle FTC Charges of Deceptively Collecting Personal Information in Agency’s First Internet Privacy Case,” Federal Trade Commission, Press Release, August 13, 1998, https://www.ftc.gov/news-events/news/press-releases/1998/08/internet-site-agrees-settle-ftc-charges-deceptively-collecting-personal-information-agencys-first, with “FTC Action Stops Business Opportunity Scheme That Promised Its AI-Boosted Tools Would Power High Earnings Through Online Stores,” Federal Trade Commission, Press Release, August 22, 2023, https://www.ftc.gov/news-events/news/press-releases/2023/08/ftc-action-stops-business-opportunity-scheme-promised-its-ai-boosted-tools-would-power-high-earnings.]

Of course, the FTC polices commercial speech, which receives less First Amendment protection than political speech. The FTC’s deception standard would be unconstitutional if applied to political advertising. However, it does have the virtue of being technology neutral. Similarly, speech that receives full First Amendment protection deserves that protection regardless of the technology used to create it.

Within the constraints of the Constitution, defamation law can help check deceptive political speech. [24: David Bauder, Randall Chase, and Geoff Mulvihill, “Fox, Dominion Reach $787M Settlement Over Election Claims,” AP News, April 18, 2023, https://apnews.com/article/fox-news-dominion-lawsuit-trial-trump-2020-0ac71f75acfacc52ea80b3e747fb0afe.] Malicious lies about others can incur severe financial consequences, irrespective of the creation or distribution method. [25: Tom Hals and Jonathan Stempel, “Alex Jones Files for Bankruptcy Following $1.5 Billion Sandy Hook Verdicts,” Reuters, December 2, 2022, https://www.reuters.com/world/us/alex-jones-files-bankruptcy-following-sandy-hook-verdict-court-filing-2022-12-02/.] Like the FTC’s approach, defamation law is technology neutral.

Private firms distributing ads are setting norms for generative AI in political advertising. For example, Google now requires advertisers to “prominently disclose when their ads contain synthetic content that inauthentically depicts real or realistic-looking people or events.” [26: “Updates to Political Content Policy,” Google | Advertising Policies Help, September 2023, https://support.google.com/adspolicy/answer/13755910.] The rule targets deceptive uses, providing exemptions for content additions or manipulations that are inconsequential to the claims made in the ads.

The threat from generative AI tools to elections may be small relative to other threats

Assessing a new technology’s potential impact on elections requires more than viewing it in isolation. One must consider the alternatives that malicious actors could exploit for the same goals, as they always aim for the most impact at the least cost. The real question is whether the technology offers unique cost or efficiency advantages over other methods. [27: James R. Ostrowski, “Shallowfakes: The Danger of Exaggerating the AI Disinfo Threat,” The New Atlantis, Spring 2023, https://www.thenewatlantis.com/publications/shallowfakes. (“Chroniclers of disinformation often assume that because a tactic is hypothetically available to an attacker, the attacker is using it. But state-backed actors assigned to carry out influence operations face budgetary and time constraints like everyone else, and must maximize the influence they get for every dollar spent.”)]

People worry that generative AI will be used to manipulate elections. Yet voice imitation and media editing aren’t new. Past misinformation and voter-intimidation campaigns have relied on simpler, more cost-effective means, and they typically have not involved candidate advertising at all. One doesn’t need AI to create a deceptive text message or email. The real challenge often lies in distribution rather than content creation, and generative AI doesn’t significantly alter that cost dynamic.

Brief thoughts on specific legislative proposals

S.1596 – REAL Political Advertisements Act. This bill isn’t primarily about AI. Its main shift is amending the Federal Election Campaign Act of 1971 to increase regulation of internet communications. It wraps the Honest Ads Act in an AI cloak. The operative language would newly apply FEC regulation to online ads whether or not they contain AI-generated content, expanding the scope of FEC regulation. [28: Compare REAL Political Advertisements Act, S.1596, 118th Cong. §3 (“Expansion of Definition of Electioneering Communications”), with Honest Ads Act, S.486, 118th Cong. §6 (“Expansion of Definition of Electioneering Communications”).] This approach seems provoked by Russian-sponsored online ads during the 2016 presidential campaign, even though recent evidence suggests those ads did not sway the election. [29: Gregory Eady, et al., “Exposure to Russian Internet Research Agency Foreign Influence Campaign on Twitter in the 2016 Election and Its Relationship to Attitudes and Voting Behavior,” Nature Communications 14, no. 62 (2023), https://www.nature.com/articles/s41467-022-35576-9.]

In addition, the bill would require disclaimers on ads containing “image or video footage that was generated in whole or in part with the use of artificial intelligence (generative AI).” [30: S.1596, 118th Cong. §4(a) (adding 52 U.S.C. 30120(e)(1)).] Given AI’s widespread use in content creation, this could sweep in every political advertisement.

S.2770 – Protect Elections from Deceptive AI Act. This proposed bill could chill every U.S. individual’s creation and sharing of AI-generated content on political issues. In one problematic provision, it labels as “deceptive” any AI-modified ad content that gives a reasonable person “a fundamentally different understanding or impression of the appearance, speech, or expressive conduct” as compared to the original content, even if the final product is truthful. [31: S.2770, 118th Cong. §2(a) (adding to 52 U.S.C. 30101 et seq. a new Sect. 325(a)(2)).]

Let me say that again: if a truthful ad gives a “different understanding or impression” than the ad’s source content, this bill labels it defamatory.

Given AI’s prevalent use, most campaign ads could be subject to this bill. Even simply using modern video editing tools to quote an opponent might risk a defamation lawsuit under this bill.

There is also a jurisdictional issue. Defamation law is state law. Using a federal law to impose per se liability under state defamation law is a mechanism unknown to the American legal system.

As for the effect on the use of newer generative AI tools, this isn’t an election protection act; it’s an incumbent protection act. New generative AI tools allow smaller teams to produce high-quality content affordably. There’s no proof such tools are more misleading than other content creation tools. This bill restricts affordable content creation, disadvantaging new candidates without significant funds. Costly traditional hand-editing would still be allowed.

If the bill’s backers truly worry about altered political communications about candidates, why limit this to AI tools?

What Congress can do

Given the rapid evolution and broad scope of AI technologies and their uses, Congress needs to increase its ability to understand and respond. Hearings like today’s are a first step in building the institutional knowledge necessary to create effective legislation. In addition, this committee should work to prepare the Federal Election Commission and other relevant agencies to monitor the use of new technologies throughout the election cycle and to assess the relevant effects.

More broadly, Congress should establish a permanent central source of advisory technical expertise on AI and algorithmic issues. The National Institute of Standards and Technology is likely a good fit. This expert body could provide technical education and advice to the many federal agencies grappling with applications of AI technology within their specific sectors. [32: See Adam Thierer, “Is AI Really an Unregulated Wild West?” Technology Liberation Front, https://techliberation.com/2023/06/22/is-ai-really-an-unregulated-wild-west/.] While some have called for creating a new overarching AI regulator, supporting the emerging sector-specific approach to AI governance by offering a shared hub of technical expertise has many practical and legal benefits. [33: See Matthew Mittelsteadt and Brent Skorup, “Comments Urging a Sectoral Approach to AI Accountability,” Mercatus Center at George Mason University, https://www.mercatus.org/research/public-interest-comments/comments-urging-sectoral-approach-ai-accountability; Alex Engler, “A Comprehensive and Distributed Approach to AI Regulation,” Brookings Institution, https://www.brookings.edu/articles/a-comprehensive-and-distributed-approach-to-ai-regulation/; Neil Chilson, “Does Big Tech Need Its Own Regulator?” The Global Antitrust Institute Report on the Digital Economy 21, SSRN, November 19, 2020, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3733726.]

Conclusion

AI is hard to define. But AI technologies are already ubiquitous in digital advertising, not to mention daily life. Every industry relies on algorithms and AI. Its use seldom poses new problems. To the extent there are issues, they are generally handled by existing laws and norms.

We should be vigilant for novel issues raised by AI. But the problem of deceptive and misleading political ads is not novel, nor is it particularly connected to AI. Repackaging past proposals to control and censor political speech as “AI regulation” will not solve misinformation in ads and will chill political speech. [34: Unfortunately, using the AI moment to advance unrelated agendas is a growing trend. Neil Chilson and Adam Thierer, “The Coming Onslaught of ‘Algorithmic Fairness’ Regulations,” The Regulatory Transparency Project of the Federalist Society, SSRN, November 2, 2022, https://ssrn.com/abstract=4267101.]

AI technology can enhance human intellectual endeavors, including speech about important topics such as politics. If we merely use this new technology as an excuse to intervene, we will squander a substantial opportunity to strengthen democratic values, expand human prosperity, and move toward a world of abundance.

Thank you again for allowing me to share my views. I look forward to your questions.
