
Seizing Global AI Regulation: Chips for Peace as America’s Last Call to Lead
By Natasza Gadowska - Edited by Isabel Yin and Pantho Sayed
Natasza is an LL.M. Candidate at Harvard Law School (2025). She holds her primary law degree from Jagiellonian University in Kraków, Poland, where she is also currently pursuing a Ph.D. in AI and data privacy. Natasza’s academic journey has taken her across five universities on three continents, including the University of Melbourne, the University of California San Diego, Ludwig Maximilian University of Munich, LUMSA University in Rome, and the University of Basel. Before joining Harvard, Natasza gained professional experience at the OECD, the United Nations, the Centre for AI and Digital Ethics at the University of Melbourne, and various law firms, contributing to projects on AI regulation, data privacy, digital trade, and competition law.
Cullen O’Keefe, Chips for Peace: How the U.S. and Its Allies Can Lead on Safe and Beneficial AI, Lawfare (July 10, 2024, 9:38 AM), https://www.lawfaremedia.org/article/chips-for-peace--how-the-u.s.-and-its-allies-can-lead-on-safe-and-beneficial-ai/.
The Global Push for AI Regulation
Artificial intelligence (AI) has become one of the most discussed technologies of recent years. As AI development accelerates, new systems are being introduced that transform industries and reshape societies. While global efforts to regulate AI are advancing alongside these technological developments, no clear global leadership has emerged that is equipped to enforce ethical, social, and security standards effectively.
Current State of the Art
China was the first country to regulate generative AI, with its Interim Measures for the Management of Generative Artificial Intelligence Services taking effect in August 2023. A year later, the European Union took a leading role when its ambitious AI Act entered into force, establishing a comprehensive framework that categorizes AI systems by risk and imposes strict standards on high-risk applications.
The United States currently lacks comprehensive federal legislation regulating AI. Several federal initiatives were launched to guide AI policy, such as President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence and the Blueprint for an AI Bill of Rights, which offered guidance on equitable access to and use of AI systems. However, these initiatives were recently withdrawn.
In their absence, the U.S. relies heavily on a self-regulatory approach. Leading AI companies such as Google, Meta, Microsoft, and OpenAI have voluntarily committed to developing safe and transparent AI technologies and to investing in safety measures. However, reliance on self-regulation is inherently problematic. Private companies are driven by values and incentives that differ from those of consumers. The 2008 financial crisis revealed the perils of entrusting risk management and consumer protection to profit-driven private actors in the absence of adequate regulatory oversight. In that case, the U.S. government (like many other governments) was eventually able to intervene and, to some extent, mitigate the fallout for both institutions and their consumers. In a catastrophic AI scenario, however, it is unlikely that any state would have the capacity to reverse the consequences.
Challenges in International Governance
Some progress has been made at the international level. The OECD AI Principles, the first intergovernmental standards on AI, were adopted in 2019 and updated in May 2024. Similarly, in March 2024, the UN General Assembly adopted its first standalone resolution on AI, aimed at establishing a global consensus approach to AI governance. Titled “Seizing the Opportunities of Safe, Secure, and Trustworthy Artificial Intelligence Systems for Sustainable Development,” the resolution encourages countries to safeguard human rights, protect personal data, and monitor AI risks. In September 2024, the UN’s High-level Advisory Body on Artificial Intelligence released its report, “Governing AI for Humanity.” Developed through extensive consultations, the report outlines several key recommendations for global AI governance.
While I support the idea of international AI regulation, the duplication of efforts among international organizations points to a significant problem: resource inefficiency. Having multiple bodies such as the OECD and the UN independently address similar issues can yield diverse perspectives, but it also produces overlapping initiatives and wasted resources. Although these frameworks may complement one another in some cases, the net effect is often confusion and inefficiency.
The Need for Leadership and Unity Beyond Divisions
In light of the issues outlined above, it is evident that while much is happening in the field of AI, there is a notable lack of clear leadership and global consensus on addressing the catastrophic risks posed by AI technology.
Given the leading position of the United States and its allies in AI supply chains, Cullen O’Keefe’s proposed Chips for Peace framework presents a unique opportunity to promote international collaboration by leveraging technological dependencies. Inspired by President Eisenhower’s Atoms for Peace initiative, the framework seeks to ensure the safe development of AI by establishing robust safety standards for AI development and deployment, advocating for the equitable distribution of AI advancements, and coordinating efforts to prevent the uncontrolled spread of AI capabilities.
A central pillar of Chips for Peace is the strategic use of semiconductor supply chains. The U.S. and its allies enjoy exceptional access to and influence over the global semiconductor ecosystem, stemming from dominance in chip design, control over critical manufacturing equipment and software, strategic alliances with key producers such as Taiwan and South Korea, and leadership in cloud infrastructure and AI hardware. They could leverage this position to enforce compliance with the framework’s principles by conditioning access to critical AI infrastructure on adherence to rigorous safety standards. Regulating access in this manner would not only enhance global safety but also promote equitable growth in AI technologies.
I believe the Chips for Peace framework could pave the way for an international body to oversee AI development and ensure it aligns with ethical and safety considerations. Such an initiative would be a crucial step toward mitigating the risks posed by AI while fostering collaboration on its responsible advancement.
However, implementing Chips for Peace in practice faces significant challenges. One critical concern is what leading countries, particularly the U.S., would gain from sharing their most advanced AI technologies with other nations. There is a real risk that less-developed states could exploit these technologies without reciprocal contributions or, in the worst case, use them against the U.S. or its allies. On the other hand, moving first would be a strategic decision for the U.S.: it would preempt other countries, such as China, from offering similar technological advantages and building a rival alliance of their own.
Another challenge lies in determining which countries should be invited to participate in such an agreement. Including all nations raises serious political concerns, such as the risk of sharing sensitive knowledge with adversaries. On the other hand, excluding certain states might backfire if those countries achieve independence in chip production and AI development, leading to rival coalitions and a new “cold war” dynamic. To choose wisely, I believe such an agreement should include countries that may not be fully like-minded but nonetheless have the potential to become technologically independent.
A Call to Action
Despite these challenges, I believe the U.S. is uniquely positioned to lead this initiative, and the Chips for Peace framework has a strong chance of success. The most advanced frontier AI developers, such as OpenAI, Anthropic, Google DeepMind, and Meta, are U.S.-owned companies. The same applies to major cloud providers like Amazon, Microsoft, and Google, and to chip design, where NVIDIA sets the global standard for AI hardware. While China is advancing rapidly and has significant resources, much of the world still relies predominantly on U.S. technologies. This reliance provides a strong foundation for political and diplomatic efforts to test the Chips for Peace concept.
Ultimately, pursuing such a framework is far better than taking no action and allowing other actors to set the agenda, as China did with its Belt and Road Initiative. As a key negotiation principle suggests, making the first offer establishes a baseline that shapes the direction of further discussions. Now is the time for the United States to lead, and to act before others do.