Supreme Court to Grapple with the Responsibility of Social Media Giants in February Cases

This month, the Supreme Court will hear two cases that could shape how technology companies behave for years to come. Both concern technology companies’ liability for the content that they host, recommend, and moderate on their websites. The Court will hear oral arguments in Gonzalez v. Google on February 21, 2023, and in Twitter v. Taamneh the following day. The cases probe slightly different areas of responsibility for the nation’s most powerful technology companies, but the same question will guide both: what is the proper relationship between users and the platforms that host their speech?

Discussions of free speech on the internet invariably begin with Section 230 of the Communications Decency Act, a provision nicknamed the “Twenty-Six Words that Created the Internet.” Passed in 1996, the statute shields digital platforms from liability for the content that users post on their sites. It establishes that protection in two related provisions. First, under subsection (c)(1), digital platforms are not “treated as the publisher or speaker of any information” provided by a user of their website. Second, digital platforms are not liable for “any action voluntarily taken in good faith to restrict access to” information posted to their website. The result is that these platforms may moderate the content appearing on their sites without facing the legal actions that a traditional publisher, such as a newspaper, might face. Congress created the liability shield to promote the development of digital platforms: companies could host user content without being held responsible for users’ potentially harmful messages, and they could moderate in good faith without stepping outside of that legal protection.

Today, social media platforms and search engines rely on algorithms to connect users with information, analyzing online behavior and prioritizing the content expected to keep each user engaged. The first case before the Court, Gonzalez v. Google, will explore whether this modern paradigm has changed companies’ relationship with Section 230.

Reynaldo Gonzalez sued the search giant under the Anti-Terrorism Act (“ATA”) for Google’s alleged support of ISIS recruitment efforts preceding an attack in Paris that claimed the life of Gonzalez’s daughter. The complaint asserts that Google recommended ISIS videos to users and thereby assisted the group’s recruitment and terrorist activities in violation of the ATA. Google successfully moved to dismiss the complaint on the grounds that Section 230(c)(1) protects it from liability.

Briefs in support of Gonzalez argue that Section 230 has been construed too broadly and now operates far beyond its purpose. The Cyber Civil Rights Initiative points out that the statute was intended to encourage companies to restrict access to harmful content, not to create sweeping protections for their behavior. Briefs submitted by other platforms, including Reddit, caution against the practical consequences of limiting Section 230 protections. They warn that algorithms are necessary for content moderation and that changing the existing paradigm risks “devastating the Internet.” Some academics, including Hany Farid of UC Berkeley, respond that technology companies have made a habit of predicting the Internet’s doom when faced with regulation. Other briefs emphasize the thin line between presentation and promotion of information: any modern publisher is expected to employ an algorithm to give structure to the mass of content hosted on its platform, and doing so falls within the behavior Section 230 has contemplated since its inception.

The case may turn on the Court’s view of how algorithms and content interact. If algorithms are mere tools for delivering data to users, then Section 230 could reasonably be understood to cover companies’ use of them. If, however, algorithms are better understood as recommending and serving targeted content, they may fall outside of Section 230’s protections. This distinction has roots in the language of Section 230 itself. The statute describes “interactive computer services,” which provide access to data and receive the full benefit of the liability shield. Separately, it identifies “information content providers,” which are responsible for the creation and development of content and receive no protection under Section 230. Thus, the outcome of the case may depend on whether using algorithms to recommend and promote content transforms Google from a “service” into a “provider.”

Section 230 has become a hot-button political target in recent years as social media companies face increasing scrutiny. Former President Trump battled technology companies over his position that conservative voices were being silenced; he threatened to veto a spending bill that did not include a repeal of Section 230, and the Department of Justice (“DOJ”) ultimately proposed a revision that would have limited the statute’s scope. President Biden has adopted a similar stance, calling for Congress to narrow Section 230’s liability shield in order to incentivize more aggressive content moderation. Notably, Justice Clarence Thomas has provided clues to his position: referring to Section 230 in an unrelated 2020 matter, he wrote that interpreting its protections “beyond the natural reading of the text can have serious consequences.”

Twitter v. Taamneh, the second case before the Court, addresses the culpability of social media companies in a slightly different manner. Relatives of Nawras Alassaf, who was killed in a 2017 ISIS terrorist attack, have sued several companies for their failure to moderate and remove harmful content. Their claim similarly alleges that social media companies, including Twitter, Google, and Facebook, aided and abetted the terrorist group by refusing to exercise editorial supervision over posts promoting violent activity.

The complaint alleges that the social media companies were “well aware” of the terrorist activity taking place on their platforms; indeed, they had been in ongoing conversations with US officials about the extent of users’ engagement with terrorist organizations. Taamneh alleges that the algorithmic distribution of ISIS content was instrumental to the group’s efforts to recruit members and carry out attacks, and seeks damages under the Justice Against Sponsors of Terrorism Act (“JASTA”), which authorizes lawsuits against any organization that “aids and abets” international terrorism. The Chamber of Commerce, in a brief supporting the companies, argues that the lawsuit attempts to expand the aiding-and-abetting standard of the anti-terrorism statutes. It characterizes the laws as imposing liability only for the “knowing” assistance of terrorism; the companies at issue cannot possibly review all content posted to their sites, and holding them liable for failing to remove every post that leads to harm would set an impossible standard.

Both cases will have wide-ranging consequences for the technology industry. Halimah DeLaine Prado, General Counsel for Google, wrote in a blog post that a decision eroding Section 230 would force websites to choose between moderating all controversial content and abdicating responsibility entirely by refusing to acknowledge dangerous posts. In her words, consumers would be left to choose “between overly curated mainstream sites or fringe sites flooded with objectionable content.” Smaller companies, like Pinterest and Patreon, have expressed similar concerns. If algorithmically delivering content to users or failing to proactively remove harmful posts creates liability, small and mid-sized companies may face an onslaught of lawsuits and be unable to absorb the legal costs that a larger company could endure. To avoid legal action, these companies would have to take on the herculean task of removing any and all questionable content.

Congress’s silence looms large in the background of these Supreme Court cases. Strong public response to the rulings appears inevitable, regardless of who prevails in each matter. Despite calls from both political parties for reform, there has been little federal legislation governing social media during its meteoric rise over the last decade. The Supreme Court’s decisions will send shockwaves through the industry and the halls of Congress alike. Gonzalez v. Google and Twitter v. Taamneh may signal the start of a renewed national focus on how powerful digital platform companies should treat their users.