
AI, Bioweapons, and Corporate Charters: The Case for Delaware Revoking OpenAI’s Charter

Commentary

Delaware has the power to revoke OpenAI's corporate charter, and the risks posed by AI, including the potential creation of bioweapons, justify exercising it. This measure, grounded in long-standing state authority, could serve as a crucial check on corporate actions that endanger public safety.


Kevin Frazier is an Assistant Professor at St. Thomas University College of Law. He also serves as a Director of the Center for Law and AI Risk.


Americans are not powerless before the awesome power of AI labs. A forgotten check on corporate power remains available in every state and to the public writ large. The real threats posed by AI justify the prompt use of this neglected form of oversight: revocation of a corporate charter. In particular, the possibility that popular AI models will ease the creation and deployment of bioweapons justifies reviving the power of every state to revoke any previously granted charter.

Many keys have been clacked and podcasts recorded analyzing the sorts of risks posed by AI. A common framework has emerged: short-term and long-term risks. Short-term risks include everything from algorithmic bias in sentencing decisions to the spread of misinformation. Such risks have already manifested or will manifest in the near future, are readily observable, and are, to an extent, quantifiable. Long-term risks tend to include existential risks: risks that pose “an unrecoverable harm to humanity's potential.”

Increased use of bioweapons due to the spread and adoption of AI models spans this spectrum. As Jeff Alstott of RAND points out, a lack of technical knowledge has historically prevented bad actors from using bioweapons. But AI labs are closing and will continue to close that gap, as recognized by OpenAI and other researchers. The result will be greatly increased risk to the public, a risk that no state should condone or facilitate with a corporate charter. This Commentary makes the case for Delaware to revoke OpenAI’s charter, given the company’s immoral decision to deploy AI models with unknown, but non-zero and likely quite significant, short- and long-term risks.

Evaluating Whether Existing AI Models Increase the Odds of a Bioweapon Attack

The relationship between advances in AI and increased risk of bioweapon deployment has been the subject of much debate. Some have concluded that AI has yet to change the threat landscape. Researchers at RAND, for instance, "found no statistically significant difference in the viability of plans [to deploy bioweapons] generated with or without LLM [large language model] assistance." This led them to conclude that current models have not increased the risk of such attacks in the short run. Nevertheless, they acknowledged that models may produce "unfortunate outputs" that merit concern. Those outputs include discussion of the best means to cause a significant number of casualties with bioweapons, analysis of how to develop bioweapons, evaluation of the pros and cons of various deployment strategies, and an assessment of tactics to dodge laws that may hinder the successful use of bioweapons.

Others have concluded that AI models carry a very real, imminent threat of aiding the use of bioweapons. One of the most well-known skeptics of the unchecked proliferation of AI, Gary Marcus, lands in this camp. Marcus contests the methodologies employed in studies that downplay the risks posed by AI—specifically, he alleges that an OpenAI study was littered with such faults.

OpenAI's study assigned students and experts five tasks corresponding to the five stages of biological threat creation. “These tasks,” explained the company, “were designed to assess the end-to-end critical knowledge needed to successfully complete each stage in the biological threat creation process.” Review of the outputs led the company to dismiss any meaningful change in the risk of such an attack brought about by AI. The researchers also emphasized “that information access alone is insufficient to create a biological threat[.]” Still, OpenAI did admit GPT-4 “may increase experts’ ability to access information about biological threats, particularly for accuracy and completeness of tasks.”

That study determined that the company's most powerful models provided “at most a mild uplift” to actors seeking to launch a bioweapons attack. Marcus's review of that study revealed some troubling sleights of hand used by the OpenAI researchers. For one thing, he thinks they may have “underreported a result that should be viewed as significant” by relying on a statistical test that is appropriate in only a few, narrow cases. Marcus's own use of a more standard test suggests that OpenAI’s results were indeed significant. On the whole, he regards the study as an indication that existing LLMs could give a meaningful boost to ill-intentioned experts seeking to initiate a bioweapon attack. Marcus summarizes that "if even one malicious expert gets over the hump [of developing a bioweapon] and creates a deadly pathogen, that's huge."
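The stakes of that methodological dispute are easier to see with numbers. The sketch below is a minimal illustration in Python using entirely hypothetical scores, and it assumes, for the sake of example, that the disputed adjustment was a Bonferroni-style multiple-comparisons correction; it is not a reconstruction of OpenAI's or Marcus's actual analysis. The narrow point is that the same data can clear the conventional significance threshold under a standard two-sample t-test yet fail once that threshold is divided across many comparisons.

```python
# Toy illustration with hypothetical scores; not OpenAI's data or analysis.
# Shows how a Bonferroni-style correction can flip a significance finding.
from scipy import stats

# Hypothetical accuracy scores (0-10 scale) for experts completing a
# threat-creation task with and without access to a model.
with_model = [7.9, 7.1, 8.4, 7.3, 8.1, 6.8, 8.6, 7.4]
without_model = [6.9, 6.4, 7.5, 6.8, 7.1, 6.0, 7.6, 6.7]

# Standard two-sample t-test on this single comparison.
t_stat, p_value = stats.ttest_ind(with_model, without_model)
print(f"uncorrected p-value: {p_value:.3f}")  # about 0.015 for these scores

# Suppose the study ran 10 comparisons in total (say, 5 stages of threat
# creation times 2 metrics). Bonferroni divides the threshold by that count.
alpha = 0.05
n_comparisons = 10
corrected_alpha = alpha / n_comparisons  # 0.005

print(f"significant at alpha = 0.05?  {p_value < alpha}")            # True
print(f"significant at alpha = 0.005? {p_value < corrected_alpha}")  # False
```

On this toy data the uplift is significant at the conventional 0.05 level but vanishes under the corrected 0.005 threshold. Whether such a correction is warranted, given how few settings it suits, is exactly the kind of question Marcus raises.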

If it is agreed that AI models can ease the process of developing and deploying bioweapons, then do public authorities—namely, the states in which AI labs are incorporated—have an obligation to lessen these odds?

If the answer is “yes” (for the sake of transparency, I think that “yes” is the only answer because of the obligation of states to further public well-being), then what means are available to the state to mitigate these AI-enabled odds?

A Forgotten Check on Corporate Power: States’ Reservation of the Power to Revoke Corporate Charters

There are no corporations in the state of nature. Every corporation is a creature of law—specifically, state law. States bring these corporate creatures to life via charters. Daniel Hanley provides an excellent summary of why states insisted on corporations being granted such charters:

Primarily because of the still nascent administrative capacities of the various state governments at the time, the charter was deployed when needed as a regulatory weapon to ensure corporations adhered to the rule of law and served the public interest: Any deviation of the terms of its charter, could result in the charter being revoked and end the corporation’s existence.

As implied by this summary, states not only had and still have the power to create corporations, but also to destroy them by revoking previously granted charters. To this day, every state has the authority to revoke a corporation’s charter. Delaware, for instance, imposes a mandate on its General Assembly to revoke the charter of any corporation for the "abuse, misuse, or non-use[]" of its powers. The Attorney General, acting on behalf of the legislature, may then commence charter revocation proceedings in the appropriate court. Though the Attorney General’s authority to initiate such proceedings has been lessened in recent decades, the Delaware court decisions affording the state broad powers to revoke charters have remained good law.

The wide range of behavior that may justify revocation of a charter was demonstrated in Young v. National Association for the Advancement of White People, Inc., 109 A.2d 29 (Del. Ch. 1954). In that case, the Delaware Court of Chancery explained that a charter revocation inquiry should turn on evidence of “a sustained course of fraud, immorality or violations of statutory law[.]” The court added that such an inquiry should also examine whether “prevent[ion] of irreparable injury to the State” justified action by the court. In Young, which dealt with allegations of inciting white parents to riot against integration, the court concluded that no such injury existed because the claimed threat to school attendance had since subsided. Nevertheless, the court made clear that “[t]here is no question” that clear evidence of abuse of a corporate charter justifies its revocation by the state.

Courts in other states likewise have broadly interpreted a state’s authority to either revoke a corporate charter or dissolve the corporation. Charlie Cray and Lee Drutman offer a long list of liberal uses of this dissolution power. The duo note that corporations have been dissolved for

failing to lay railroad tracks by a date promised, joining other companies to monopolize sugar, conducting fraudulent real estate practices, putting out false advertising, serving polluted water to customers, running baseball games on Sundays, paying members of the [corporate] President’s family excessive salaries, self-dealing, and for the apparent complicity of failing to remove the Corporate President after four convictions in one year for illegally selling alcohol.

Though modern popular efforts to compel state officials to revoke a corporation’s charter have not been as successful, constitutional and statutory authority to revoke the charter of a corporation that imperils the general welfare remains on the books across states, including in Delaware.

Potential Justification for Revocation of OpenAI’s Charter

OpenAI, incorporated in Delaware, is arguably on a sustained course of immoral behavior by virtue of knowingly putting the public at increased risk of a bioweapon attack. The company acknowledges that its products may provide a “mild uplift” to those seeking to launch deadly attacks. What’s more, it admits it is unsure of the accuracy and durability of its findings. “Going forward,” OpenAI flagged in the discussion section of the aforementioned study, “it will be vital to develop a greater body of knowledge in which to contextualize and analyze results of this and future evaluations” related to the use of AI models to develop and deploy bioweapons. Such “contextualization” and “analysis” should not occur in the wild or, to be more precise, through public experimentation with ChatGPT. Yet OpenAI is actively preparing the public release of an even more advanced model.

The willingness of OpenAI to further develop and deploy products with uncertain but, at a minimum, negative effects on public safety demonstrates the company’s abuse of the powers afforded to it by the state of Delaware. Whether by facilitating a bioweapons attack or eliminating millions of jobs, OpenAI continues to expose the people of Delaware (and the American public writ large) to irreparable injury. Note that this Commentary has covered only a few of the potential AI-induced causes of such injury. The substantial number of uncontrolled risks introduced by AI bolsters the case for any state to consider revoking the charters of AI labs. In particular, states have clear evidence that AI will play a role in disrupting elections later this year.

It is true that states can and should have more safeguards in place to limit the odds of worst-case scenarios arising from AI. It is also true that states cannot permit OpenAI and others to profit from that lack of planning. Unless and until the risks of AI are properly understood and controlled, states have reason to consider revoking the charters of AI labs.

The alternative is to authorize AI labs to experiment on the public while crossing our fingers that their models do not facilitate unrest, exacerbate inequality, or accelerate deadly attacks. That approach directly conflicts with the idea that our constitutional order empowers the people, via their representatives, to exercise absolute control over civic affairs.

If and When States Shirk Their Duties

In all likelihood, neither Delaware nor any other state will exercise its revocation power to pull the charters of the AI labs building the riskiest models. Moreover, even if one state did take such a step, the other forty-nine would likely welcome OpenAI and other leading labs with open arms.

The race to the bottom among states to offer the most permissive charters (and corporate laws more generally) is so entrenched in our legal history that the practice has earned a moniker: chartermongering. Before Delaware became the incorporation capital of the world thanks to its permissive laws, New Jersey held that title.

Back in the late 1800s, New Jersey opted to break ranks with the other states and trade meaningful control over corporations for incorporation fees and franchise taxes. When New Jersey officials realized they had received the bad end of that bargain and reinstated more stringent corporate laws, Delaware happily lowered its own legal barriers and lured corporations into its jurisdiction. Delaware has refused to surrender its place at the top (bottom?) of the corporate law rankings ever since.

This brief history goes to show that any effort by a single state to meaningfully regulate AI via its reserved powers is akin to popping balloons in a windstorm: the labs will eventually land somewhere else. The somewhat obvious, yet neglected, solution is to mandate that leading AI labs receive a charter from the federal government. Hanley makes a compelling case for this approach:

So, while corporations have many options from which to obtain their legal rights, the public has a limited number of governing entities capable of regulating their operations. Unless the states were to come together and act collectively, which they have little incentive to do, only federal action will solve the problem of corporate abuse in America.

Though Hanley aims to require that all corporations receive a federal charter, my proposal is to have Congress start with AI labs. This requirement would give Congress (and, by extension, the American people) a chance to direct AI labs toward the public interest.

A federal charter requirement for the largest AI labs may have some legs on the Hill. Senator Elizabeth Warren introduced a bill back in 2018 that would require large corporations to receive a federal charter, and she championed the idea while considering a run for President. A bevy of bipartisan bills to regulate AI indicates that Warren’s colleagues may be willing to consider an updated version of her initial legislation.

Conclusion

Our constitutional order is premised on the idea that the people are sovereigns—not the states, not the federal government, and certainly not corporations. The regulatory checks that once maintained the people’s control over corporations have been forgotten. States, specifically Delaware, ought to revive their power to revoke corporate charters and demand that AI labs do more to mitigate potentially devastating risks posed by their models. In the alternative, Congress should step up and protect the American people before a small, homogeneous group of AI experts causes irreversible damage to our economy, political system, and way of life.