Recent Federal Trade Commission (“FTC”) publications and pronouncements signal the FTC’s willingness to regulate AI-generated products. According to the FTC, a lack of transparency about the use of copyrighted materials in AI-generated works and products may amount to consumer deception. The FTC has indicated this stance on at least three occasions in the past four months.
First, in an August 16, 2023 Business Blog post titled “Can’t Lose What You Never Had: Claims About Digital Ownership and Creation in the Age of Generative AI,” the FTC warned that while intellectual property rights are “generally beyond the FTC’s consumer protection jurisdiction,” the FTC “take[s] note–and can take action–if companies aren’t upfront about what consumers are buying, who made it, how it was made, or what rights people have in their own creations.” The FTC argued that information on the extent to which a generative AI tool makes use of copyrighted materials may affect people’s decision to use one tool or another. The agency further argued that this information affects business decisions to use AI tools for commercial purposes, given that businesses could be held liable if their use of the AI output infringes protected works. The FTC analogized AI tools to environmentally friendly products, an area where it is “not unusual for the FTC to sue when sellers deceive consumers about how products were made.” The blog post concluded with a rather unclear guideline for players in the generative AI field: “when offering a generative AI product, [companies] may need to tell customers whether and the extent to which the training data includes copyrighted or otherwise protected material” (emphasis added).
Second, in an October 3, 2023 Business Blog post titled “Consumers Are Voicing Concerns About AI,” the FTC repeated the warning that the use of copyrighted work to train generative AI products may implicate consumer protection issues. In this article, the FTC corroborated the claim that consumers are in fact concerned about, among other issues, the use of copyrighted material in generative AI by referencing the FTC’s Consumer Sentinel Network, a tool that aggregates consumer complaints from data contributors, including the FTC’s own fraud reporting website.
Finally, in its recent comment submitted to the U.S. Copyright Office, the FTC claimed that “AI technology raises significant competition and consumer protection issues beyond questions about the scope of rights and the extent of liability under the copyright laws.” The Commission noted that “conduct that may be consistent with the copyright laws nevertheless may violate Section 5 [of the FTC Act].” For example, while mimicry of an artist’s creative style may not be illegal under copyright law, which provides only limited protection for pure style, the FTC may intervene if the same conduct implicates unfair or deceptive acts under Section 5. The FTC appended to the comment a transcript of its October 4, 2023 “Creative Economy and Generative AI” roundtable, identifying several themes that emerged from the roundtable discussion. For example, the FTC noted that panelists preferred an opt-in framework, rather than the opt-out framework that is more common today and “puts the burden on creators to police an ever-changing marketplace.” The panelists “expressed a desire for opt-in frameworks, where AI developers seek authorial consent and clearly explain how they intend to use their work, ideally with appropriate credit and compensation.” The Commission further noted that the ability of generative AI tools to replicate the style of creators “can not only create consumer confusion but it also can cause serious harm to both fans and artists.”
This recent line of FTC pronouncements shows that the agency is willing to regulate the generative AI space, especially where consumer protection and intellectual property rights intersect. That willingness adds another layer of complexity to the unsettled intellectual property issues surrounding the use of generative AI.