On September 8, 2023, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO), leaders of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, proposed a new framework for congressional regulation of AI. The Bipartisan Framework for U.S. AI Act defines five key policy considerations for future legislation, building on the SAFE Innovation Framework released by Senate Majority Leader Chuck Schumer (D-NY) in June 2023.
The first consideration outlined by the framework is to “Establish a Licensing Regime Administered by an Independent Oversight Body.” The framework proposes the creation of a new licensing body that businesses developing sophisticated general-purpose AI technologies would be required to register with. Licenses would be predicated on various safety standards, including risk management, testing, and reporting, as determined by mandatory audits. The framework also raises the idea of conflict of interest standards to combat growing concerns about regulatory capture in AI.
The next step under the framework is to “Ensure Legal Accountability for Harms.” The focus is on empowering citizens and regulators to pursue legal action against companies for harms caused by AI. Broadly, this means creating private causes of action for breaches of privacy, civil rights, and other potential harms, while implementing legislation to address existing problems with the use of AI, such as election interference. The framework also urges Congress to declare that Section 230 of the Communications Decency Act, which protects online content hosting sites from liability over user-created content, does not apply to AI and AI-created content.
The third proposal is to “Defend National Security and International Competition.” The authors advocate for sanctions and export controls related to AI against adversary nations, specifically citing Russia and China, as well as foreign actors that are engaged in human rights violations. The proposal falls in line with recent actions from the Senate increasing sanctions against Russian and Chinese companies for their involvement in manufacturing weapons for the war in Ukraine. Concerns over the potential consequences of AI weaponry have begun to emerge as major powers ramp up investments in automated warfare systems.
The fourth step in the framework is to “Promote Transparency.” The framework breaks up the concept of AI transparency into two primary categories: insight into the nature of the models, and notice that outcomes are AI-generated. In promoting the first goal, it calls for disclosure requirements regarding the “training data, limitations, accuracy, and safety of [AI] models,” alongside the release of necessary data to independent researchers for verification. For the second goal, there are recommendations regarding affirmative notice when a user is interacting with an AI system, watermarks on “deepfake” content, and a public database of AI systems and related incidents.
Rounding out the policy considerations is a general call to “Protect Consumers and Kids.” The senators advocate for legislation requiring AI decision makers to implement safety brakes, notice requirements, and human review options. In addition, the framework declares that consumers should have control over how AI developers use their data, and states that “strict limits should be imposed on generative A.I. involving kids.”
The new framework comes in the midst of calls for regulation from industry leaders and experts. While it does not contain actual legislation, the authors note that it represents an important milestone on the path toward a cohesive AI policy. As Senator Blumenthal argued, lawmakers must address future AI policy concerns before they spiral out of control, rather than repeat the regulatory failures that surrounded social media. According to Blumenthal, defining agreed-upon policy goals from the outset should help Congress act more decisively on some of the largest AI-related policy issues of the 21st century.
Despite broad support for better regulation and oversight, disagreements have emerged over how to structure a governmental oversight body. Companies such as IBM and Google, two of the largest players in the AI space, have advocated against the creation of a new AI oversight body. Instead, they claim that existing agencies are better suited to handle the challenges AI presents, and that spreading oversight across multiple bodies would provide greater flexibility than a single regulatory authority. Google, IBM, and other industry leaders have been highly involved in government efforts toward AI regulation and governance, raising questions about their influence in a field in which they all hold competitive interests.
The recent push from Congress for stronger AI regulation has drawn criticism from two directions: digital rights groups such as the Electronic Frontier Foundation warn of the potential for regulatory capture, while libertarian groups such as Americans for Prosperity claim that regulation could stifle innovation and reduce the competitiveness of American companies in the global AI market. Against the backdrop of these concerns, seven new pieces of AI-related legislation were introduced in Congress in July alone, with calls for broader and more comprehensive policies growing.
While the next steps that Congress will take remain unclear, large-scale AI regulation based on the new framework may be just around the corner, with the potential to shape numerous facets of life today.