Can AI steal the 2024 election? Not if America uses this weapon to combat misinformation.

Can democracy survive artificial intelligence? As the 2024 election approaches, there are mounting fears of an explosion in false and manipulative AI-generated content, especially by hostile foreign governments.

The New York Times has warned of deepfakes that could “swing elections.” The Biden administration has already urged caution around the use of AI, and the possibility of electoral manipulation puts even greater pressure on the White House and Congress to slow or even stop AI’s development.

Yet, the real solution is to speed up AI’s rollout. The most powerful defense against electorally weaponized artificial intelligence is the defensive application of artificial intelligence.

Any discussion of AI and elections must start with an uncomfortable fact: There’s no way to stop the creation of AI-generated misinformation and disinformation. Powerful AI tools already exist, and so long as the United States has foreign enemies, they will use AI to try to influence elections.

While the genie can’t be put back in the bottle, its power can be nullified with even more powerful technology.

The closest analogy is missile technology. Enemies make and use missiles, but we can intercept them with missiles of our own. The most obvious application of this approach is Israel’s Iron Dome, which uses interceptor missiles to destroy incoming rockets.

That’s what America needs: an AI-powered information iron dome. Such a system would leverage AI’s unprecedented pattern-recognition ability to identify coordinated attacks using false AI-generated content, whether articles, images or videos.

AI is more than powerful enough to fulfill this role. Consider efforts over the past several years to train an AI model to detect lip-sync deepfakes.

Humans struggle to say words beginning with M, B or P without closing their mouths (try saying “peanut butter brittle”). But deepfakes frequently ignore this reality. The casual observer might not notice, but researchers have found that a trained model can identify these slips with surprising accuracy. One such model, trained on videos of former President Barack Obama, spotted deepfakes more than 90% of the time.
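To make the idea concrete, here is a minimal sketch of that heuristic in Python: flag any moment where a bilabial sound (M, B or P) is spoken while the mouth stays open. The function name, thresholds and sample data are hypothetical illustrations; the researchers' actual system works on real audio and video with a trained model, not this toy rule.

```python
BILABIALS = {"m", "b", "p"}  # sounds that require a closed mouth

def flag_mismatches(frames, closed_threshold=0.2):
    """Return indices of frames where a bilabial phoneme coincides
    with a mouth-openness score above `closed_threshold`.

    `frames` is a list of (phoneme, mouth_openness) pairs, with
    mouth_openness ranging from 0.0 (fully closed) to 1.0 (wide open).
    """
    return [
        i for i, (phoneme, openness) in enumerate(frames)
        if phoneme in BILABIALS and openness > closed_threshold
    ]

# "peanut butter brittle": a genuine speaker closes the mouth on p/b,
# while a sloppy deepfake may leave it open (invented example values).
genuine  = [("p", 0.05), ("ea", 0.6), ("b", 0.10), ("r", 0.4)]
deepfake = [("p", 0.55), ("ea", 0.6), ("b", 0.70), ("r", 0.4)]

print(flag_mismatches(genuine))   # → []
print(flag_mismatches(deepfake))  # → [0, 2]
```

A real detector replaces the hand-set threshold with patterns learned from thousands of authentic videos, which is what lets it catch inconsistencies a casual viewer would miss.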

AI is better than humans at finding deepfakes

Given the potential volume of such content, identifying AI-driven attempts to influence elections would take an army of human reviewers, consuming enormous sums of money and time while still remaining prone to human error.

By contrast, AI tools could scan the internet far more quickly, efficiently and effectively, labeling deepfakes almost as soon as they enter the public discourse.

To use the rocket analogy, it’s the difference between a high-tech, integrated missile-defense system and a swarm of soldiers pointing shoulder-mounted rockets at the sky. There’s a reason Israel chose the Iron Dome.

Private companies should drive the creation of this cutting-edge technology. Putting election-focused AI in the hands of politicized agencies would invite abuse and undermine the very democracy we seek to protect.

Nor is it wise to give government agencies vast regulatory power over AI and elections. The resulting burdens would likely stifle innovation, leaving the technology’s development to a handful of larger companies.

That would also undermine Americans’ trust in democracy, given widespread concerns over bias within particular models.

AI development should be encouraged at the widest and most diverse array of companies possible. Fierce competition will encourage companies to avoid ideology and provide the most effective products.

Tech firms can develop the tools needed to safeguard elections

American companies are the world leaders in AI, so they’re well-suited to develop this technology. The startup DeepMedia is already helping the Pentagon detect deepfakes that threaten national security, like the deepfake of Ukrainian President Volodymyr Zelenskyy telling his forces to surrender.

This same technology could be further developed to protect elections by helping voters spot deepfakes as they encounter them.

Similar technology determined that a famous painting whose creator was previously unknown is actually a Raphael masterpiece, while another AI program concluded that part of a different Raphael work was likely painted by someone else, based on microscopic clues across the artist’s known paintings.

If AI can decipher the secrets of Renaissance masters, it can ferret out deepfakes from Russia.

Even rudimentary AI programs could quickly identify the still-widespread use of comparatively low-tech election interference efforts: think troll farms in North Macedonia and the Philippines.

As my colleague Neil Chilson, a former chief technologist at the Federal Trade Commission, explained in his recent testimony before the Senate, “Malicious actors don’t use cutting-edge tech. Cheap fakes, selective editing, overseas content farms, and plain old Photoshop are inexpensive and effective enough.”

An AI iron dome would help identify these threats even faster, much as the actual Iron Dome protects against artillery shells as well as rockets.

Uncertainty caused by the threat of regulation, along with ongoing and loud calls to “pause” AI development, has held back innovation in election protection. If policymakers make clear that companies can act within the bounds of existing law, the benefits will almost certainly be felt before the November election.

Foreign countries like Russia, China and Iran are acutely aware of AI’s power and are certain to apply it to their election-manipulation efforts, if they aren’t doing so already.

If America does anything other than fully unleash AI to protect our democracy, it will be a particularly foolish form of unilateral disarmament.

CGO scholars and fellows frequently comment on a variety of topics for the popular press. The views expressed therein are those of the authors and do not necessarily reflect the views of the Center for Growth and Opportunity or the views of Utah State University.