New AI poll reveals elites are way out of step with the rest of us

Artificial intelligence (AI) is shaping up to be yet another public policy issue where elites are out of step with the public.

In Silicon Valley, Congress and the Biden administration, leaders are buzzing about AI. For those in the Valley, killer robots with lasers for eyes dominate the conversation. The Beltway, in contrast, is spending a lot of political capital to deal with bias in algorithms.

And yet, the general public isn’t primarily worried about machines gaining control or about biased algorithms. What concerns them about AI are the national security implications and the potential for job losses.

Last month, our organization asked 1,000 U.S. voters to rank their top benefits and concerns related to AI. The concerns they ranked included killer computers, transparency, bias, job loss and national security. The results of our poll, which was conducted with YouGov, show a gap between the concerns of the public and the concerns of those in power.

Over one-third of people rank job loss as their No. 1 worry when it comes to AI. This matches findings from a poll the CGO conducted earlier this summer, which found that four out of five Americans worry that AI will displace jobs “generally.” (Notably, only two out of five Americans were worried AI would displace their own job.) Following job loss, about a quarter rank national security as their top concern. At the bottom of the rankings are bias and killer computers, at around 10% each.

In other words, people are less worried about esoteric harms like killer robots and bias than about concrete harms like job loss and losing the AI race to China.

It is a commonly held notion that the government has done little on AI, but the Biden administration has been quite active, especially when it comes to countering bias. To name just a few recent efforts, it has taken “New Steps to Advance Responsible Artificial Intelligence Research, Development, and Deployment,” tackled “Racial and Ethnic Bias in Home Valuations,” and laid out a “Blueprint for an AI Bill of Rights.” The common theme among the executive branch’s initiatives is the goal of reducing algorithmic bias.

Of course, the government does have a role to play in policing bad actors in housing, finance and education. But there are already rules on the books that would prevent biased hiring, for instance, and crafting new rules for AI is much easier said than done because there is no agreed-upon definition of AI. Poorly crafted rules meant to regulate AI could easily sweep in all software.

Similar definitional issues are at play when analyzing bias, and different experts come to different conclusions. What isn’t often admitted in these high-level discussions is that the same data can lead to radically different conclusions depending on who analyzes it. In a number of studies, independent research teams given the same data to analyze have arrived at conflicting, sometimes opposite, results. Teams working to reduce unfairness might not converge on the same model or parameters.
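To see how two careful analysts can disagree, consider a minimal, hypothetical sketch. The numbers below are invented purely for illustration, and the two fairness definitions compared (equal selection rates versus equal true positive rates among qualified applicants) are just two common choices an analyst might make; neither is drawn from the poll or from any study cited here.

```python
# Hypothetical toy example: the same hiring data judged under two
# different fairness definitions gives opposite verdicts.

def rates(qualified, selected_qualified, group_size, selected_total):
    """Return (selection rate, true positive rate) for one group."""
    selection_rate = selected_total / group_size
    tpr = selected_qualified / qualified
    return selection_rate, tpr

# Group A: 100 applicants, 60 qualified. Group B: 100 applicants, 30 qualified.

# Model 1: selects exactly the qualified applicants in each group.
a1 = rates(qualified=60, selected_qualified=60, group_size=100, selected_total=60)
b1 = rates(qualified=30, selected_qualified=30, group_size=100, selected_total=30)
print("Model 1  selection rates:", a1[0], b1[0], " TPRs:", a1[1], b1[1])
# Equal TPRs (fair by one definition), unequal selection rates
# (unfair by the other).

# Model 2: selects 45 applicants per group (45 of A's 60 qualified,
# all 30 of B's qualified plus 15 others).
a2 = rates(qualified=60, selected_qualified=45, group_size=100, selected_total=45)
b2 = rates(qualified=30, selected_qualified=30, group_size=100, selected_total=45)
print("Model 2  selection rates:", a2[0], b2[0], " TPRs:", a2[1], b2[1])
# Equal selection rates (fair by one definition), unequal TPRs
# (unfair by the other).
```

In this sketch, whichever model a team picks, it looks biased under the definition it did not optimize for, which is one reason independent teams analyzing the same data can reach opposite conclusions.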

However, the government needs to be mindful of the potential unintended consequences of its actions. Overly restrictive regulations could limit innovation and slow the adoption of new technologies, and efforts to address algorithmic bias may be perceived by the public as an attempt to regulate data privacy.

Bias and physical safety shouldn’t be ignored. But, like the American public, policymakers would do well to prioritize potential job loss and national security, and to ensure the public understands the benefits and risks of AI systems in those areas.

Aligning solutions with the clearest, most pressing problems is an approach that works well in AI policy, and in every other area of public policy.

CGO scholars and fellows frequently comment on a variety of topics for the popular press. The views expressed therein are those of the authors and do not necessarily reflect the views of the Center for Growth and Opportunity or the views of Utah State University.