Once upon a time, people believed that the internet would be a free medium controlled by users from the bottom up. Instead, the internet created new gatekeepers that offer platforms for users to create their own content. These intermediaries are not mere middlemen,[1] but rather governors of speech.[2] They act as centers for disseminating information and play an essential role in directing users’ attention. They influence what is viewed, what is valued, and what is disseminated and re-disseminated by users. They can promote or withhold ideas, organize the flow of information, and influence social dynamics. This moderation can influence users’ emotions,[3] behaviors, and decision-making processes, and can even influence democracy.[4] With greater influence comes the potential for greater harm.
In general, intermediaries influence users to share content without fact-checking. They are motivated to do so because the dissemination of content enhances users’ active participation and increases revenue from advertisers.[5] The most prominent example of influencing users to share content is Facebook’s social network. On this network, everyone has a personal profile and relationships are framed as “friendships.” Consequently, members of the network are more likely to accept and spread the information they encounter there, even though not all of their Facebook connections are really their friends.[6]
Another example is social mirroring: Facebook reflects user behavior back by showing users what their friends share. It collects information on users and prioritizes content created by their close friends and family on their newsfeeds, thereby reinforcing existing biases.[7] As a result, users are more likely to believe the information and forward it along because they know its source.
A third example is the “share” button, which simplifies sharing content.[8] It leads users to share content intuitively, without engaging their reflective thinking and without considering the consequences.
Intermediaries can also abandon neutrality and encourage users to share specific types of content, such as defamation and fake stories. Illicit intermediaries such as “TheDirty.com” and similar websites push users to “submit dirt” on others and harm third parties.[9] Such encouragement leads users to share more defamatory content and increases the scope and rate of negative speech. In addition, encouraging the dissemination of specific types of content affects social dynamics, precipitates polarization and extremism, and amplifies the influence of offensive content.
In some cases, intermediaries republish users’ content themselves. They may link to posts, spread them, emphasize specific items, or add headlines to users’ content. Consequently, they affect how ideas are interpreted and how credibility is ascribed.
An individual can write a libelous article on his blog, which has a limited readership; but the owner of a public Facebook page, acting as an intermediary, might pick it up and spread it to millions of readers. The dissemination of user-generated content may harm users and third parties because more people are exposed to the information. Moreover, dissemination repeats the information, and the more times people see it (especially from different venues), the more likely they are to believe it.[10]
Today, truth is no longer as important as what seems or feels to be true. In this environment, intermediaries are particularly important. They are central hubs of power and play an essential part in shaping networks. The power that intermediaries possess over users’ content is one of the major issues policy-makers must address, particularly because intermediaries allow and even encourage the dissemination of harmful content such as defamation and fake stories. Yet, pursuant to Section 230 of the Communications Decency Act (CDA), courts refrain from holding intermediaries responsible for users’ content, except in extreme cases.[11] The overall immunity scheme, however, was constructed when the web was in its infancy. As technologies advance and the web grows more prevalent, it is time to challenge and refine the immunity regime.[12]
There are many ways to explore the responsibility that intermediaries should bear. The article Taking out of Context focuses on one particular aspect: the dissemination of users’ defamatory content by intermediaries.[13] Online libel has attracted a great deal of attention from courts and regulators, yet it remains under-conceptualized. The article strives to fill this gap. It asks a simple question: should websites receive immunity for disseminating libelous content created by third parties?
When an intermediary promotes content, the number of recipients increases exponentially as users of the network “like” and “share” it. How will the victim of the content find redress? When should immunity be narrowed in such cases? Even though these questions come before courts on a regular basis, they have no simple solutions. Judges often write conflicting decisions because they lack a theoretical framework or guidelines upon which to base their opinions. The article endeavors to change that.
Drawing on network theory, psychology, marketing, and information systems, the article maps common strategies of content dissemination and explains how they influence the severity of harm. Subsequently, it examines case law and normative considerations that should be taken into account when deciding whether an intermediary is liable for disseminating users' content.
The article proposes to view liability through the prism of context. It offers a method for imposing liability on intermediaries based on two axes: how far content was taken out of its original context, and what method of dissemination the intermediary used. The determining factor is the causal link between the “breach of context” and the intermediary’s conduct. This framework differentiates between intermediaries who disseminate content in ways that are consistent with its original context and intermediaries who take it out of context. By tying intermediaries’ liability to a breach of context, the article proposes a nuanced guideline for deciding the scope of their liability. It does so while accounting for basic principles of tort law, freedom of speech, fairness, efficiency, and prospective effects on innovation.
* Ph.D.; Chesin Postdoctoral Fellow, Hebrew University of Jerusalem, Faculty of Law; Research Fellow, HUJI Cyber Security Research Center (H-SCRC), Cyber Law Program.
[1] See Olivier Sylvain, Intermediary Design Duties, 50 Conn. L. Rev. 1 (2017) [https://perma.cc/G2RL-AYQ4] (arguing that because intermediaries structure, sort, and sometimes sell users’ data, they are not passive conduits).
[2] See Jack M. Balkin, Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation, 51 U.C. Davis L. Rev. 1149 (2018).
[3] Adam D.I. Kramer, Jamie E. Guillory & Jeffrey T. Hancock, Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks, 111 PNAS 8788, 8788 (2014) (when Facebook displayed more negative posts written by users’ friends, negative posts were created and shared at higher rates than other types of content).
[4] See Jonathan Zittrain, Engineering an Election, 127 Harv. L. Rev. F. 335 (2014) (the “election experiment” serves as a good example: some Facebook users were shown a button to click if they had voted, and their newsfeeds indicated which of their friends had done so, while others were not shown the graphic; researchers cross-referenced users’ names with actual voting records and found that people who saw that their friends had voted were more likely to vote themselves). Moreover, intermediaries and other stakeholders can influence the dissemination of fake stories about candidates. See Zeynep Tufekci, Twitter and Tear Gas: The Power and Fragility of Networked Protest 264-65 (2017); Carole Cadwalladr & Emma Graham-Harrison, 50 Million Facebook Profiles Harvested for Cambridge Analytica in Major Data Breach, The Guardian (Mar. 17, 2018), https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election.
[5] See Balkin, supra note 2 (“Search engines and social media sites have an interest in getting people to express themselves as much as possible publicly so that they will produce content and data that can be indexed or analyzed, even though people may regret their choices later on.”).
[6] James Grimmelmann, Saving Facebook, 94 Iowa L. Rev. 1137, 1162 (2009).
[7] See Cass R. Sunstein, #Republic: Divided Democracy in the Age of Social Media 14 (2017) (explaining that Facebook prioritizes posts of family and close friends on the newsfeed); Julie E. Cohen, Law for the Platform Economy, 51 U.C. Davis L. Rev. (forthcoming 2017), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2991261 [https://perma.cc/LJ5A-JKK6] (explaining that algorithmic mediation of information flows may intensify group polarization and reinforce existing biases).
[8] See Sunstein, supra note 7, at 108.
[9] See Jones v. Dirty World Entertainment Recordings LLC, 2014 WL 2694184 (6th Cir. June 16, 2014).
[10] See Gordon Pennycook, Tyrone D. Cannon & David G. Rand, Prior Exposure Increases Perceived Accuracy of Fake News (Apr. 25, 2017), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2958246 [https://perma.cc/YDX8-XQFG].
[11] 47 U.S.C. § 230.
[12] Many scholars have already suggested narrowing the immunity. See Sylvain, supra note 1 (arguing that courts should rethink the scope of immunity in a way that accounts for the outsized influence that online intermediaries have on users’ conduct).
[13] See Michal Lavi, Taking out of Context, 31 Harv. J.L. & Tech. 145 (2017).