A Closer Look at Federal Policy on AI

Just over a year ago, OpenAI released ChatGPT, surprising analysts and users around the globe with its capabilities. Within two months it surpassed 100 million users, becoming the fastest-adopted internet application in history. Because its capabilities were so surprising and its adoption so rapid, the once-fringe debate over AI existential risk and safety concerns became mainstream. These discussions then made their way into the public policy context.

This is the primary reason why, on October 30, 2023, the White House released an “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” Below are the six things you should know about it.

First, and fundamentally, it’s important to know what this document is and is not. An “Executive Order,” or EO, is a tool unique to the Executive Branch that cannot create any new laws. EOs are a President’s means of directing cabinet-level federal agencies (as opposed to independent agencies like the Federal Communications Commission) to tackle new initiatives or priorities that fall within their existing jurisdiction. This EO is therefore not a new law or regulation; rather, it is the President applying existing law, for example by directing federal agencies to conduct new studies.

Second, it’s helpful to know the timing and context. This EO had been expected for a few months before its publication. It was released just days before the United Kingdom hosted its “AI Safety Summit” on November 1 and 2. The US and other major countries attended this high-profile summit and needed to arrive with a concrete policy position to discuss; this EO filled that need. A year after ChatGPT’s release, calls for Congress to regulate AI have not produced any new legislation, and there is a degree of international competition to lead on AI regulation. Both domestically and internationally, there is political pressure for the US to “take action.”

Third, the follow-up to this EO will take place over the next two years. In total, the EO contains 91 directives that agencies are meant to complete, with deadlines ranging from 30 days after publication to April 2025. Tech companies involved in AI development, civil society organizations, and the agencies themselves should track all of these directives because they could affect the industry.

Fourth, as my colleague Neil Chilson noted in his recent C-SPAN appearance discussing this EO, there is one major requirement in the EO, and that is for “Companies developing or demonstrating an intent to develop potential dual-use foundation models to provide the Federal Government, on an ongoing basis, with information, reports, or records…” These required reports must disclose any “ongoing or planned” training and development, “ownership and possession” of model weights, and the results of tests, among other items. This requirement is enabled by an older national security law, the Defense Production Act of 1950 (DPA). Of the whole order, this section may be the most troubling, for a few reasons. First, according to some analysts, the application of the DPA here may stretch legal boundaries. Second, as Steven Sinofsky notes, this section of the EO “[I]s simply premature. People are racing ahead to regulate away potential problems and in doing so will succeed in stifling innovation before there is any real-world experience with the technology.”

Fifth, and related to the point above, this EO uses definitions and conceptions of AI that are current today but could swiftly become dated. My colleague Will Rinehart notes that government language that locks in current iterations of a technology can have bad consequences for that technology’s future. For example, Will writes that a law passed in 1986 to protect email privacy didn’t account for the fact that users would store emails for more than six months at a time, because in the 1980s computer storage of the kind and scale we have today was far more expensive. This caution is especially important given the rapid changes in AI development and use cases.

Sixth, the tone set by this EO is one of government control, not market leadership. Although tone carries no legal or regulatory weight, Presidential tone matters for technological development, as technology policy analyst Adam Thierer notes in his analysis of the AI EO. Indeed, tone and policy were a key reason the commercial internet became the global driver of economic growth it is today: as Adam notes, President Bill Clinton’s proactive policies in the 1990s let the market lead in the development of the internet.

Finally, some useful studies and inventories, especially of the federal government’s own use of AI systems, will be helpful outputs of this EO. Nevertheless, the tone, scope, and potential misuse of existing law make for an overall poor set of policies coming out of the White House. Policymakers at the federal and state levels should resist the “race to regulate” and instead focus on how current laws already protect against risk while ensuring rules enable innovation, which, as history shows, produces maximum benefit for all people.

CGO scholars and fellows frequently comment on a variety of topics for the popular press. The views expressed therein are those of the authors and do not necessarily reflect the views of the Center for Growth and Opportunity or the views of Utah State University.