Who enforces the content guidelines promulgated by mega-platforms that host user-generated content, such as Facebook, Twitter, and YouTube? While the specifics remain intentionally obscured, content moderation is carried out by tens of thousands of online content moderators, most of them employed by subcontractors in India and the Philippines and paid wages well below those of the average Silicon Valley tech employee. These legions of moderators spend their days making split-second decisions about whether to take down questionable content, applying appropriateness criteria that are often ambiguous and culturally specific.
Meanwhile, the biggest platforms are under increasing pressure to strengthen their moderation tools and to make their moderation practices more transparent. One underappreciated part of this debate is the human impact that moderation has on the moderators themselves. Many of these individuals spend their days racing to meet numerical quotas as they sift through beheading videos, violent images, and pornographic content. Some journalists, scholars, and analysts have noted PTSD-like symptoms and other mental health issues arising among moderators. Many have called attention to this problem, but few have proposed solutions. In this comment, we briefly describe the human cost of online content moderation and then identify some potential solutions.
The Problem
Websites like Facebook, Twitter, and YouTube serve as platforms for user-generated content uploaded by a global community of contributors. The uploaded content is just as diverse as the user base. That means a significant amount of it is of the sort that most users, and by extension the platforms, would find objectionable. Users routinely upload (or attempt to upload) content such as child pornography, gratuitous violence, and disturbing, hate-filled messages.
Platforms reserve the right to police user-generated content through a clause in their Terms of Service, usually by incorporating their Community Guidelines by reference. For example, YouTube’s Community Guidelines prohibit “nudity or sexual content,” “harmful or dangerous content,” “hateful content,” “violent or graphic content,” “harassment and cyberbullying,” “spam, misleading metadata,” “scams,” “threats,” videos that would violate someone else’s copyright, “impersonation,” and “child endangerment.”
There is little doubt that content moderation serves a useful function; it helps prevent popular online platforms from becoming breeding grounds for the worst types of online behavior. Unfortunately, the task of wading through that content and keeping it off users’ browsers falls mostly on overseas workers who are rarely mentioned in conversations about online safety. Platform companies already employ a large number of moderators, and that number is set to grow even further. Social media companies have said they will hire new staff in response to concerns about Russian interference in the 2016 U.S. presidential election. Facebook alone has committed to hiring at least 1,000 new moderators.
Why is the push to hire more moderators concerning? Because there is a growing body of evidence that content moderation, as currently practiced, carries considerable psychological risks for the employees who perform it. Before we can address the inadequacies in the treatment of existing moderators, we have to acknowledge one of the biggest obstacles to evaluating the existing systems: we have very little information about them. The companies are intentionally opaque and resist outside attempts to investigate their procedures. There is no independent third-party monitor, and employees are typically barred by non-disclosure agreements from discussing their work.
In spite of all this, thanks to excellent work by journalists and scholars, we do have access to some source material. For example, a new documentary titled “The Cleaners” features interviews with several former moderators who worked for a subcontractor in the Philippines. The interviewees discuss their experiences filtering the worst images and videos the internet has to offer. Several express a sense that the job caused them fatigue and distress, and one interviewee stated that she quit because the work was making her depressed. One previously unreported aspect of the job was the numerical quotas imposed by the subcontractors: each moderator was required to screen thousands of images or videos per day in order to keep their job.
This anecdotal evidence is consistent with the allegations of a former Microsoft content moderator in his ongoing court case against the company. The plaintiff’s complaint states that Microsoft failed to adequately apprise him of the psychological risks inherent in his employment and subsequently failed to provide him with adequate treatment. He claims to have developed post-traumatic stress disorder as a direct result of his content moderation duties.
Current Approaches
While this problem remains underappreciated, there are some current efforts to improve the work environment, better inform prospective employees, and seek compensation for past harm. Unfortunately, many of these efforts risk falling short because they address the problem only for a company’s direct employees rather than its contractors, or only for employees and contractors within the United States.
Industry groups represent one potential avenue for addressing the problem. A group called The Technology Coalition works to coordinate companies’ efforts to combat online child sexual exploitation. According to the Coalition’s website, its current members are Adobe, Apple, Dropbox, Facebook, GoDaddy, Google, Kik, Microsoft, Oath, PayPal, Snapchat, and Twitter. In January 2015, it published an “Employee Resilience Guidebook for Handling Child Sex Abuse Images” (hereinafter “the Guidebook”).
The Guidebook advises that “[l]imiting the amount of time employees are exposed to [child pornography] is key.” Yet the interviews in the aforementioned documentary and other news reports suggest that many subcontractors permit, or even require, their moderators to filter child pornography all day long. The Guidebook also recommends that companies obtain their employees’ “informed consent,” which “includes providing an appropriate level of information so the employee understands what the role entails.” Again, former employees of overseas subcontractors report being surprised to encounter child pornography and beheadings as part of the job. Lastly, the Guidebook recommends that companies “have a robust, formal ‘resilience’ program in place to support an employee’s well-being and mitigate any effects of exposure.” Former employees of a number of subcontractors in the Philippines make no mention of any such program. Even the plaintiff in the Microsoft case, who worked in Seattle, alleges that such programs were entirely lacking. A further limitation is that the Guidebook addresses only exposure to child sexual imagery, not the other forms of objectionable material frequently posted on sites that depend on user-generated content.
The lawsuit by a former Microsoft employee is another attempt to bring the human costs of content moderation to light and to compensate at least one of the victims for the lack of safeguards. Alleging negligent infliction of emotional distress, the plaintiff claims that employees on the Online Safety team were forced to view the types of horrific images described above, that they were not prepared for the task, and that adequate counseling and support services were not provided. While the case showcases an avenue available to U.S. employees of technology companies, it is atypical and does not represent a generalizable solution. The plaintiff alleges a number of specific facts not representative of the industry as a whole, including an involuntary internal transfer into Microsoft’s content moderation division. Moreover, as an in-house employee, he is not representative of the typical content moderator, who is usually an overseas contractor.
Going Forward
Clearly, traditional employment law doctrines have not induced platform companies to spend much time or money caring for their content moderators. We do not think the ongoing Microsoft lawsuit will radically change that state of affairs, for the reasons already mentioned. For now, the best way to improve the working conditions of content moderators may be to draw more public attention to the issue. It is for that reason that documentaries and other public media like “The Cleaners” are so important.
The companies are obviously in a better position than outside observers to design and implement best practices in this area. That said, it is not difficult to sketch a few preliminary recommendations. First, platform companies should require that the subcontractors they hire clearly communicate to the line employees doing the moderation the risks inherent in regularly viewing certain types of troubling content. Second, platform companies should fund an expanded learning initiative or training program comparable to The Technology Coalition’s, covering not just sexual images of children but all the content a moderator is likely to encounter on the job. Employees and contractors should fully understand the requirements and potential side effects of the job before they begin. Third, platform companies should incorporate minimum workplace counseling standards into their contracts for moderation services and monitor compliance with those standards. Subcontractors have no choice but to respond to the demands of the platform companies, their major (and potentially only) customers, so the companies are ideally positioned to drive substantial improvements in working conditions.
Technology companies do not exactly enjoy the unconditional trust of the public at the moment, so they have an incentive to address this issue before it turns into a public relations nightmare. One concrete step they can take is to band together, potentially through an industry association such as the Internet Association or the Software Alliance, and commission a study on the working conditions of content moderators that includes recommendations for improvement. By engaging with such a study and following its recommendations, the companies could go far toward building trust and improving working conditions for the people who help keep their platforms safe. At the same time, nonprofits, government agencies, and scholars can pressure U.S. and foreign companies to allow outside monitoring. If these outside groups are able to pierce the veil and expose the impact this work has on moderators, they may be able to mount a classic name-and-shame campaign, pushing the platform companies to hold subcontractors to higher standards.