Social Media’s Silent Filter


Under-the-radar workers have scrubbed objectionable material from Facebook and other sites since well before the fake-news controversy.

A few months ago, in the wake of the fake-news debacle surrounding the election, Facebook announced partnerships with four independent fact-checking organizations to stamp out the spread of misinformation on its site. If investigators from at least two of these organizations—Snopes, PolitiFact, ABC News and FactCheck.org, all members of the Poynter International Fact-Checking Network—flag an article as bogus, that article now shows up in people’s News Feeds with a banner marking it as disputed.

Facebook has said its employees have a hand in this process by separating personal posts from links that present themselves as news, but maintains it plays no role in judging the actual content of the flagged articles themselves.

“We believe in giving people a voice and that we cannot become arbiters of truth ourselves,” wrote Adam Mosseri, the vice president of Facebook’s News Feed team, in introducing the change.

The announcement was an early step in Facebook’s ongoing revision of how it defines its role as a platform on which people consume news. Through the tumult of the election and under heavy public pressure, the company has gone from firmly denying any status as a media company to now acknowledging (albeit vaguely) some degree of responsibility for the information people take in.

“The things that are happening in our world now are all about the social world not being what people need,” Mark Zuckerberg told Recode after he published a sweeping, 6,000-word manifesto on the company’s future last month. “I felt like I had to address that.”   

Missing from this evolving self-portrayal, however, has been significant mention of a distinct kind of editorial practice Facebook and most other prominent social-media platforms are involved in. Thus far, much of the post-election discussion of social-media companies has focused on algorithms and automated mechanisms often assumed to undergird most content-dissemination processes online. But algorithms are not the whole story. In fact, there is a profound human aspect to this work. I call it commercial content moderation, or CCM.

* * *

CCM is the large-scale screening by humans of content uploaded to social-media sites—Facebook, Instagram, Twitter, YouTube and others. As a researcher, I have studied this process in detail: In a matter of seconds, following pre-determined company policy, CCM workers make decisions about the appropriateness of images, video or postings that appear on a given site—material already posted and live on the site, then flagged as inappropriate in some way by members of the user community. CCM workers engage in this vetting over and over again, sometimes thousands of times a day.

While some low-level tasks can be automated (imperfectly) by processes such as matching against known databases of unwanted content, facial recognition and “skin filters,” which screen photos or videos for flesh tones and then flag them as pornography, much content (particularly user-generated video) is too complex for the field of “computer vision”—the ability of machines to recognize and identify images.

Such sense-making processes are better left to the high-powered computer of the human mind, and the processes are less expensive to platforms when undertaken by humans, although not without other costs.
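
To make the crudeness of such automation concrete, here is a minimal sketch of the flesh-tone heuristic described above. The pixel thresholds, function names and use of the Pillow imaging library are illustrative assumptions of mine, not any platform’s actual system.

```python
# A minimal, hypothetical sketch of a "skin filter": flag an image for human
# review when a large share of its pixels falls in a crude skin-tone range.
# The thresholds below are illustrative guesses, not any platform's real values.
from PIL import Image


def looks_like_skin(r: int, g: int, b: int) -> bool:
    # Very rough RGB heuristic for flesh tones; real systems are more nuanced.
    return r > 95 and g > 40 and b > 20 and r > g and r > b and (r - min(g, b)) > 15


def flag_for_review(path: str, threshold: float = 0.35) -> bool:
    # Returns True when the proportion of skin-toned pixels exceeds the
    # threshold, i.e. the image would be queued for a human moderator.
    img = Image.open(path).convert("RGB").resize((128, 128))
    pixels = list(img.getdata())
    skin = sum(1 for r, g, b in pixels if looks_like_skin(r, g, b))
    return skin / len(pixels) >= threshold
```

Even this toy version suggests why such filters misfire: a close-up portrait or a beach scene can trip it as easily as pornography, which is precisely why the final judgment so often falls to a human reviewer.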

Increasingly, CCM work is done globally, in places such as the Philippines, India and elsewhere, although CCM workers can also be found in call centers in rural Iowa, online on Amazon Mechanical Turk, or at the headquarters of major Silicon Valley firms—typically without full-time employment and all it entails, such as access to quality health care. CCM workers are almost always contractors, in many cases hired for limited terms because of their high rate of burnout.

Over the past six years, I have spoken with and interviewed dozens of CCM workers who have labored in locales as diverse as Mountain View, Scotland and the Philippines. Despite cultural, ethnic and linguistic differences, they share similar work lives and working parameters. They labor under the cloak of NDAs, or nondisclosure agreements, which bar them from speaking about their work to friends, family, the press or academics, despite often needing to:

As a precondition of their work, they are exposed to heinous examples of abuse, violence and material that may sicken others, and such images are difficult for most to ingest and digest. One example of this difficulty is a recent, first-of-its-kind lawsuit filed by two Microsoft CCM workers who are now on permanent disability, they claim, because of their exposure to disturbing content as a central part of their work.

CCM workers are often insightful about the key role their work plays in protecting social-media platforms from risks that run the gamut from bad PR to legal liability. The workers take pride in their efforts to help law enforcement in cases of child abuse. They have intervened when people have posted suicidal ideation or threats, saving lives in some cases—and doing it all anonymously.

CCM workers tend to be acutely aware, too, of the outsized role social-media platforms have in determining public sentiment. One CCM worker, a contractor for a global social-media firm headquartered in Silicon Valley, described his discomfort with the way he had to deal with war-zone videos. He brought up the case of Syria, which he and others cited to me as the source of the worst material they had to see on the job in terms of its violence and horror. He explained that much of the material being uploaded to his company’s video-distribution platform seemed to come from the cell phones of civilians who wanted, it was supposed, to disseminate footage of the civil war’s nightmare for advocacy purposes.

This CCM worker, whose identity I am ethically bound to protect as a condition of my research, pointed out that such content violated the company’s own community codes of conduct by showing egregious violence, violence against children, blood and gore, and so on. Yet a decision came down from the group above him—a policy-setting team made up of the firm’s full-timers—to allow the videos to stand. It was important to show the world, they decided, what was going on in Syria and to raise awareness about the situation there.

Meanwhile, the employee explained to me, other videos flooded the platform on a daily basis from other parts of the world where people were engaged in bloody battle. Juárez was one pointed example he gave me. Although the motives of the uploaders in those cases were not always clear, no leeway was given for videos that showcased violence toward civilians—beheadings, hangings and other murders.

Whether or not the policy group realized it, the worker told me, its decisions were in line with U.S. foreign policy: to support various factions in Syria, and to disavow any connection to or responsibility for the drug wars of northern Mexico. These complex, politically charged decisions to keep or remove content happened without the public ever being able to know. Some videos appeared on the platform as if they were always supposed to be there; others disappeared without a trace.

* * *

Is an editorial practice by any other name, such as CCM, one that the public ought to know more about? Social-media firms treat their CCM practices as business or trade secrets and typically refuse to divulge their internal mechanisms for decision-making or to provide access to the workers who undertake CCM.

There is no public editorial board to speak of, no letters to the editor published to take issue with its practices. For this story, I contacted four social-media platforms for comment: Facebook, Instagram, Snapchat and YouTube. Instagram declined to speak on the record; the others did not respond.

Meanwhile, platforms have frequently released new media-generation tools without a demonstrated understanding of their potential social impact. Such has been the case with Facebook Live and Periscope, both of which were introduced as fun ways to livestream aspects of one’s life to others but have served key roles in serious and violent situations; the cases of Philando Castile and Keith Lamont Scott are two recent examples.

At the behest of law enforcement, Facebook turned off the Facebook Live feed of Korryn Gaines as she was using the platform to document an attempted arrest. After her feed went dark, she was shot and killed and her son was shot and wounded by police.

Such cases starkly illustrate the incentive for social-media companies like Facebook to shy away from the “media company” label. Media companies face public criticism and are held to account when they violate ethics and norms. It’s a great position for a firm to be able to reap profits from circulating information, and yet take no responsibility for its veracity or consumption.

Social-media companies often appear eager to completely and cost-effectively mechanize CCM processes, turning to computer vision and AI or machine learning to eliminate the human element in the production chain. While greater reliance upon algorithmic decision-making is typically touted as a leap forward, one that could potentially streamline CCM’s decision-making, it would also eliminate the human reflection that leads to pushback, questioning and dissent. Machines do not need to sign NDAs, and they are not able to violate them in order to talk to academics, the press, or anyone else.

What key information and critiques are now missing from view, and to whose benefit? In the absence of openness about these firms’ internal policies and practices, the consequences for democracy are unclear. CCM is a factor in the current environment of fake-news proliferation and its harmful results, particularly when social-media platforms are relied upon as frontline, credible information sources. The public debate around fake news is a good start to a critical conversation, but the absence of a full understanding of social-media firms’ agendas and processes, including CCM, makes the conversation incomplete.