
Good Policymaking in the Age of AI


Kevin Frazier is the AI Innovation and Law Fellow at the University of Texas School of Law, co-host of the Scaling Laws podcast, and author of the Appleseed AI Substack.


“We believe that the growth of (even well-intentioned) governmental regulation and institutional sclerosis more broadly is artificially creating scarcity, raising costs and slowing progress.”

That sounds like a quote pulled directly from the GOP’s platform or an advocacy organization backed by big business — perhaps the Chamber of Commerce. You’d be forgiven if you guessed that Milton Friedman or one of his disciples included that language in a book. And, it’d surely come as no surprise if a pithier version — “don’t regulate my progress” — was on the bumper sticker of a Cybertruck.

This frank admission of the limits of even the most thoughtful regulation is instead lifted from a recent announcement from Open Philanthropy (“Open Phil”), an organization rarely associated with conservative or libertarian thought. Open Phil operates out of San Francisco, was partly inspired by the thinking of Bill Gates and Warren Buffett, and earned public recognition, in part, through its support of vaccines. Its general focus on global health and wellbeing as well as global catastrophic risks aligns with values and policies espoused by the left.

Nevertheless, the group plans to spend more than one hundred million dollars to drive economic growth and accelerate scientific and technological progress — and it plans to exhaust those funds over the course of just three years. It’s an admirable and necessary project. It’s also an opportunity for a larger set of stakeholders to embrace a pro-growth mentality without feeling as though they’re abandoning core ideological and moral principles.

The fact that this pedal-to-the-progress mentality appears in funding announcements, campaign documents, and research reports issued by groups across the political spectrum signals an incredible opportunity. At a time when so many issues drive people into entrenched corners, this consensus deserves attention and investment. For those of us who believe that artificial intelligence carries the potential to unleash incredible advances that lead to a drastically better quality of life for many people around the world, it’s essential not to let this moment pass.

Whether healthy skepticism of excess proceduralism — as embraced in one way or another by Open Phil and many other individuals and institutions, such as the Niskanen Center, the Arnold Foundation, and the Abundance Institute — translates into progress is very much an open question. Current AI policies have few checks in place to ensure that regulations do not simply become barriers to progress.

That’s the bad news. The good news is we’re still very much in the first stages of shaping the AI regulatory ecosystem that will dictate AI development, diffusion, and adoption. Creative regulatory tools can diminish the odds of AI rules and standards leading to the outcomes feared by Open Philanthropy and others — “creating scarcity, raising costs and slowing progress.”

SUNSET AND SUNRISE CLAUSES

Applying old laws to new tech is a recipe for slowed progress. The argument that old laws reflect the wisdom of prior generations only goes so far; the primary test of any law should be whether it is fulfilling its intended purpose, not whether it pleases its sponsors or beneficiaries.

An example makes this clear. Prominent tech writer Timothy B. Lee recently pointed out that transportation regulations require long-haul truckers to place triangle warning signs at set distances behind their stopped rig. It’s easy to see why this made sense back in the day. Problem: cars might miss a large object parked on the side of the road. Solution: task drivers with making their rig more visible. Fast forward to a world in which we no longer have human drivers, and this solution suddenly looks out of date. Compliance may mean merely putting a bored teenager in the cab of a long-haul truck and paying them to put out some triangles every 12 hours or so. Though nonsensical, this is the state of affairs some unions are insisting on by demanding that these regulations remain in place. A quick update to the rule — requiring the illumination of warning lights on the side of trucks — has yet to be approved.

I suspect that Lee and others will find more laws just like this. Yet, we can avoid this sort of regulatory headache with sunset clauses. This regulatory tool is a simple yet wildly underused way to stop misguided regulations from calcifying into laws resilient to change.

Sunset clauses can take a number of different forms, but all center on the same simple idea: rather than defaulting to permanence, they specify an end date for the law, absent some affirmative act by the relevant lawmaking body. The simplest form is an unqualified sunset — dictating that a law will expire, no questions asked. This approach is often warranted in emergency situations that result in bespoke laws tailored precisely to the resolution of that crisis. Pandemics and natural disasters are ripe for this kind of sunset clause. A clear end date for laws allowing extensive uses of power, funds, or both may also make legislators and the public more willing to support such bold proposals.

Additional nuance comes into play when sunset clauses allow legislators to extend the legislation after an initial test period. For example, a clause may set an evaluation period of several months or years before the legislature can vote on its continuation. This introduces a slew of adjustable terms: the length of the initial evaluation period, the criteria that legislators will use in that evaluation, the voting threshold to extend the law, and, if the law is extended, whether that extension initiates another review period or puts the law on the books in perpetuity.
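To make that design space concrete, here is a minimal sketch of those adjustable terms. It is written in Python, and every name, criterion, and threshold below is hypothetical rather than drawn from any actual statute:

```python
from dataclasses import dataclass

@dataclass
class SunsetClause:
    """Illustrative model of a sunset clause's adjustable terms."""
    evaluation_period_years: int         # how long the law runs before review
    evaluation_criteria: list[str]       # what evaluators must measure
    extension_vote_threshold: float      # share of votes needed to extend
    extension_triggers_new_review: bool  # re-review on extension, or permanence

def law_extended(clause: SunsetClause, yes_votes: int, total_votes: int,
                 criteria_met: bool) -> bool:
    """The law survives only if the evaluation criteria are satisfied and
    the extension vote clears the clause's threshold."""
    return criteria_met and yes_votes / total_votes >= clause.extension_vote_threshold

# A hypothetical AI statute: three-year trial, supermajority to extend,
# and every extension kicks off another review cycle.
clause = SunsetClause(
    evaluation_period_years=3,
    evaluation_criteria=["compliance costs", "enforcement actions", "measured harms"],
    extension_vote_threshold=2 / 3,
    extension_triggers_new_review=True,
)
print(law_extended(clause, yes_votes=70, total_votes=100, criteria_met=True))  # True
```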

Practice has often failed to realize the full theoretical benefits of sunset clauses. Legislators may lure their opposition into supporting a bill with the promise of a faithful review at a later date. Political winds may shift, an evaluation may omit critical information, and suddenly that temporary legislation becomes permanent. Poorly constructed sunset clauses, ones that fail to set out clear criteria or to adequately support the institutions charged with evaluating the legislation, introduce another concern: regulatory uncertainty. A hazy sunset clause may leave regulated entities guessing as to whether the law will remain on the books and how much to adjust their operations if the legislation is likely to be short-lived.

These are solvable problems. The already vast and growing AI research community, if properly directed, could aid state and federal lawmakers in developing objective criteria by which to assess legislation. Researchers could also serve as independent evaluators of those criteria — providing legislators (and their constituents) with thorough and transparent assessments in advance of an evaluation period. Imagine, for example, if the burgeoning number of AI law centers at leading law schools, such as the University of Texas, Vanderbilt Law School, and the University of Miami School of Law, directed their students and scholars toward studying the impacts of legislation. Knowing that a deep and trusted bench of evaluators exists could make those skeptical of state AI regulations more willing to let the states act as laboratories of democracy. States would actually be conducting experiments rather than simply forging ahead with laws that may have unintended consequences in a highly complex field.

Doubts about the merits of AI regulation at the early stages of development and diffusion could also be addressed by sunrise clauses, which mandate that the regulator verify its capacity to enforce a law before that law comes into effect. Many AI regulatory proposals involve standing up new bodies tasked with promulgating and enforcing broad and highly impactful rules and frameworks. In related contexts, such as privacy laws, states have struggled to recruit the necessary personnel and obtain the resources required to do that work in a robust fashion — leading to arbitrary enforcement. A few case studies warrant mentioning. Whereas Fortune 500 companies spent around $16 million to comply with the EU’s General Data Protection Regulation (GDPR), medium-sized businesses dished out $3 million — a vastly heavier burden relative to their respective budgets. Similarly, the California Privacy Protection Agency quickly fell behind on statutory deadlines for setting rules to enforce the California Privacy Rights Act (CPRA), leading to a court-ordered delay before those rules could go into effect.

Sunrise clauses might have prevented, or at least mitigated, these poor outcomes. The EU could have delayed enforcement of the GDPR until it had prepared a clearer compliance process for small- and medium-sized businesses. California legislators might have avoided the flawed introduction of the CPRA by specifying that obligations would not go into effect until the Agency had recruited a certain number of technical and operational staff. These and other provisions reduce the odds of legislatures painting the walls before laying the foundation. In short, sunrise clauses force legislatures to think more deeply about whether the regulatory infrastructure is in place to ensure the intended outcomes.
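A sunrise clause can be modeled as a simple readiness gate. Here is a minimal sketch in the same spirit as the sunset example above, with every readiness condition and number below invented purely for illustration:

```python
def obligations_in_effect(staff_hired: int, required_staff: int,
                          rules_finalized: bool, guidance_published: bool) -> bool:
    """Illustrative sunrise gate: a statute's obligations activate only once
    the enforcing agency can demonstrate that it is ready to enforce them."""
    return staff_hired >= required_staff and rules_finalized and guidance_published

# Under a hypothetical CPRA-style clause, enforcement would wait until,
# say, 50 technical and operational staff were in place.
print(obligations_in_effect(staff_hired=35, required_staff=50,
                            rules_finalized=True, guidance_published=False))
# False: the statute is on the books, but its obligations are not yet live
```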

RETROSPECTIVE REVIEW

If a sunset clause is off the table, legislators should at a minimum set up transparent, rigorous, and frequent assessments of AI legislation and regulations. This sort of retrospective review is commonsensical. In many cases, it is an obligation imposed on federal agencies following the promulgation of a rule. During the George W. Bush administration, from 2001 to 2006, for example, nine federal agencies analyzed more than 1,300 regulations in line with retrospective review obligations or at their own discretion. Thousands of reviews, however, led to very few changes. By 2014, in the middle of the Obama administration, agencies seemed to be taking the effort more seriously. A study of hundreds of retrospective reviews completed by 22 executive agencies found that agencies revised, clarified, or eliminated regulatory text 90 percent of the time. Still, external observers urged reform. Joseph Aldy at the Harvard Kennedy School suggested that “[f]or a given select, economically significant rule, agencies should present in the rule’s preamble a framework for reassessing the regulation at a later date.” Aldy’s advice underscores that quality retrospective review hinges on better policymaking — specifically, more deliberate and outcome-oriented policymaking.

Two key lessons emerge from the recent history of retrospective reviews. First, mandatory reviews can lead to wildly different outcomes from administration to administration. In some contexts, regulatory reviews amount to paperwork creation acts — generating reports with findings destined for the dustbin. In others, however, administrators can drastically improve a regulatory ecosystem by paying close attention to the findings of such reviews. Second, if rules promulgated after extensive public engagement and expert input require reform as much as 90 percent of the time, the odds seem good that legislators will miss the mark on statutes, too. Yet retrospective review of legislation is far less common. That’s a mistake. If we’re concerned about regulations and statutes “creating scarcity, raising costs and slowing progress,” then this sort of review should be table stakes. Aldy lists a few case studies that make this point clear:

- The Department of Labor modified its chemical hazard labeling requirements so that they would conform to the international standard, thereby reducing costs to U.S. manufacturers — especially those looking to export to foreign markets — by about $2.5 billion over five years.
- The Department of Health and Human Services streamlined reporting requirements and burdensome regulatory obligations on hospitals, delivering $5 billion in cost savings over five years.
- The Environmental Protection Agency, recognizing regulatory overlap with the Department of Agriculture, removed requirements on the dairy industry, delivering about $650 million in cost savings over five years.

Given the unexpected shifts in the nature and pace of AI legislation, regular assessments of statutes and regulations are a worthy investment. My hunch is that states and the federal government will come across similar opportunities to save money, streamline compliance, and adopt superior standards. What’s more, publication of such reports can help interested parties propose amendments, lobby officials, and otherwise ensure laws operate as intended.

SOCIAL IMPACT BONDS

A final suggestion for all those interested in AI policymaking that furthers human flourishing: now’s the time to lean into social impact bonds (“SIBs”) as a means to accelerate the use of AI to solve pervasive and wicked problems ranging from congestion to climate change.

SIBs invite public, private, and philanthropic actors to identify and prioritize known issues and direct innovation toward solving those issues. This financing mechanism can take several forms. For now, here’s a simplified hypothetical of how a SIB might work: a political community — let’s say Austin, Texas — identifies, through the ballot box or community forums, traffic as its paramount concern over the next several years. The city then collaborates with local and regional philanthropies to pool funds to reward any small- or medium-sized AI lab that develops a traffic planning tool capable of informing real-time redirection of traffic and delivering a specified outcome, such as a 25 percent reduction in total time spent in congestion. Any qualifying lab that survives an initial threshold assessment can receive a grant to assist with the requisite research. Labs that develop tools proven to achieve the target outcome earn an award, and investors are repaid by the city at a predetermined rate of return.
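To make the mechanics concrete, here is a minimal sketch of the outcome-contingent payouts in that hypothetical. It is written in Python, every number is invented for illustration, and the initial grant stage is omitted for simplicity:

```python
def sib_payouts(invested: float, rate_of_return: float,
                congestion_reduction: float, target: float = 0.25,
                lab_award: float = 2_000_000.0) -> dict:
    """Toy model of the Austin hypothetical: pooled funds back an award for
    the lab, and the city repays investors only if the tool delivers the
    verified outcome (a cut in total time spent in congestion)."""
    success = congestion_reduction >= target
    return {
        "lab_award": lab_award if success else 0.0,
        "investor_repayment": invested * (1 + rate_of_return) if success else 0.0,
        "public_pays": success,  # taxpayers pay only for verified results
    }

# A lab's tool cuts total congestion time by 30 percent, beating the
# 25 percent target: the lab earns its award and investors are repaid.
print(sib_payouts(invested=5_000_000, rate_of_return=0.05,
                  congestion_reduction=0.30))
```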

It’s a classic win-win-win-win. Yes, four wins. The public gets a problem solved. The investors get a return (and a reputational boost). The labs get their costs covered and develop a product ripe for replication in other markets. And, everyone benefits from directing AI toward positive use cases.

SIBs have real potential to address concerns that AI development is already veering off course. Just as social networking sites with lofty pro-social visions turned into social media platforms fine-tuned to encourage anti-social behavior, AI labs with an initial eye toward advancing human flourishing now seem more interested in generating extreme profits with little consideration of negative externalities. By leveraging philanthropic resources and the stability of local, state, and federal government coffers, SIBs create an alternative pathway that keeps AI labs oriented in a pro-social direction.

CONCLUSION

The convergence of pro-growth sentiment across the political spectrum — from Open Philanthropy’s commitment to reducing regulatory barriers to conservative advocacy for economic dynamism — creates an unprecedented opportunity to establish AI governance frameworks that accelerate beneficial innovation while maintaining essential safeguards. This bipartisan recognition that excessive proceduralism is “artificially creating scarcity, raising costs and slowing progress” provides the foundation for implementing creative regulatory approaches that avoid the pitfalls of rigid, permanent oversight structures.

The three mechanisms outlined — sunset and sunrise clauses, retrospective review processes, and social impact bonds — offer complementary pathways to ensure AI regulation remains adaptive, outcome-focused, and innovation-enabling. Sunset clauses prevent the regulatory equivalent of requiring triangle warning signs on autonomous vehicles, while sunrise clauses ensure enforcement infrastructure develops alongside regulatory obligations. Retrospective review mechanisms, drawing from federal agencies’ experience with regulatory assessment, provide systematic opportunities to course-correct as AI capabilities evolve. SIBs create market incentives for directing AI development toward solving pressing public challenges rather than optimizing solely for commercial returns.

State and federal policymakers should prioritize establishing these mechanisms during the current formative period of AI governance. The regulatory ecosystem taking shape today will determine whether AI development accelerates human flourishing or becomes constrained by outdated frameworks designed for previous technological paradigms. Early adoption of these adaptive governance tools positions jurisdictions to serve as effective laboratories of democracy, generating evidence-based insights that inform broader policy development.

Artificial scarcity imposed through regulatory inflexibility harms innovation and economic growth; avoiding that barrier to human flourishing requires AI governance strategies that evolve with technological capabilities. By embedding flexibility, accountability, and outcome measurement into AI regulatory frameworks from the outset, policymakers can harness the current pro-growth consensus to establish oversight mechanisms that enhance rather than inhibit AI’s potential to address humanity’s most pressing challenges.

The moment for implementing these approaches is now, while regulatory frameworks remain malleable and the cross-partisan commitment to progress-oriented governance provides political momentum for creative policy solutions.