
FCC Cracks Down on AI-Powered Robocalls

On February 8, 2024, the Federal Communications Commission (“FCC”) unanimously issued a Declaratory Ruling classifying artificial intelligence (“AI”)-generated voices in robocalls as “artificial or prerecorded voice[s]” under the Telephone Consumer Protection Act (“TCPA”). The ruling, which took effect immediately, prohibits the voice cloning technology used in common robocall scams targeting consumers. It also expands the causes of action state attorneys general can pursue against perpetrators. Supporters applaud the ruling as an unprecedented step against AI-generated robocalls, but critics protest that it is overly broad and infeasible.

The TCPA prohibits calls using “an artificial or prerecorded voice to deliver a message without the prior express consent of the called party,” with limited exceptions (such as for emergency purposes). However, the meaning of “artificial” remained ambiguous prior to this ruling, leaving a loophole in the Act that allowed AI-powered voices to be used for nefarious ends.

Such immoral ends include political campaign interference. In fact, the FCC ruling came on the heels of reports that some New Hampshire residents had received phone calls discouraging them from voting in their state’s January 23 primary. The calls told those residents that if they voted in the primary, they would not be able to vote in the November general elections. The voice speaking on the calls had been generated by AI to sound like President Joe Biden, and even employed his signature phrase, “[w]hat a bunch of malarkey.” New Hampshire Attorney General John Formella stated that the calls have since been traced back to Texas companies Life Corporation and Lingo Telecom, and the FCC has issued a cease-and-desist letter to both companies.

Beyond disrupting American democracy, robocallers have put AI voices to other nefarious uses, including cloning children’s and grandchildren’s voices to extort money from parents and grandparents – going so far as to fabricate hostage situations with these AI-generated voices. Robocallers also seek personal information that can be used to commit identity theft. FTC data shows that consumers lost $2.7 billion to imposter scams in 2023 alone.

The FCC ruling closes this loophole by clarifying that current AI technologies that replicate or create human speech fall under the TCPA’s definition of “artificial” voices. This means that AI-powered robocalls, similar to traditional robocalls, now require prior express consent from the recipient for most purposes.

“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters,” said FCC Chairwoman Jessica Rosenworcel. “We’re putting the fraudsters behind these robocalls on notice.”

The FCC’s ruling improves government enforcement efforts against AI-powered robocalls in four important ways. First, it enhances the FCC’s civil enforcement authority by empowering the agency to fine robocallers using AI-generated voices up to $23,000 per violation. Second, it allows the Commission to take steps to block calls from carriers facilitating illegal robocalls, extending the enforcement authority it previously applied only to prerecorded robocalls. Third, the TCPA allows individual consumers or organizations to sue AI-generated robocallers and recover up to $1,500 in damages for each unwanted call. Finally, state attorneys general now have additional enforcement tools to employ against AI-generated robocalls, such as suing under a federal cause of action created by the TCPA.

While many commenters applaud the FCC’s decision, others claim there is still more work to be done to combat AI-generated deception techniques in communications.

First, Nick Penniman, founder and CEO of the nonpartisan political reform group Issue One, argues the FCC ruling does not do enough to protect the electorate from other sources of AI-generated fraud. Political advertising powered by AI-generated images and videos is a major source of fraud and, in his view, an “existential threat to democracy.” The new ruling, however, does not expand the Commission’s power to regulate this full range of communications, since the decision is limited to robocalls and does not reach images or videos. This limited scope leaves AI-generated images and videos unregulated even as political campaigns and their supporters increasingly use such materials ahead of the election.

Second, while some commenters argue the holding does not do enough to combat AI fraud in political advertising, Aaron Tifft, an attorney at Hall Estill, protests that the decision is too broad. Tifft argues that the ruling’s ambiguity increases the burden on innocent telemarketers by requiring them to obtain consent before every AI-generated communication with a potential client, shareholder, or employee. For example, the ruling leaves open a gray area surrounding whether text messages also fall within the scope of TCPA sanctions. Commenters like Tifft worry this uncertainty could lead to frivolous and costly lawsuits for businesses.

Finally, Eric Burger, research director of the Commonwealth Cyber Initiative at Virginia Tech and chief technology officer at the FCC from 2017 to 2019, questions the enforceability of the unanimous decision. According to Burger, the current method of so-called “one-step traceback” has been effective. This method traces a robocall backward through the complex network of telecom infrastructure, one carrier at a time, to identify the call’s originating point. However, one-step traceback is cumbersome and expends an inordinate amount of law enforcement resources, so the FCC has traditionally prosecuted only the largest robocall operations. Now, AI has lowered the barriers to entry for robocallers, allowing them to deploy thousands of calls a day. Even with its new ruling, the FCC’s protocols have not yet adapted to this rapid pace, and responding with existing protocols would burden state investigatory efforts without achieving the deterrent effect the FCC envisions.

The FCC’s decision represents a significant step in addressing AI-powered robocalls, but it does not mark the end of this story. The evolving nature of technology necessitates a collaborative and adaptable approach to ensure consumer protection without hindering innovation. Ongoing discussions, clear regulations, and effective enforcement will be crucial in keeping our communication channels free from unwanted and potentially harmful AI-powered robocalls.