Facebook Moderation and the Unforeseen Consequences of Scale

Parable of the Radium Girls

In 1917, a factory owned by United States Radium in Orange, New Jersey hired workers to paint watchfaces with self-luminous radium paint for military-issue, glow-in-the-dark watches. Two other factories soon followed in Ottawa, Illinois and Waterbury, Connecticut. The workers, mostly women, were instructed to point the tips of their paintbrushes by licking them. They were paid by the watchface and told by their supervisors that the paint was safe.

Evidence suggested otherwise. As employees began falling ill and dying, US Radium initially rejected claims that radium exposure was more damaging than they’d first led workers to believe. A decade-long legal battle ensued, and US Radium eventually paid damages to former employees and their families.

The Radium Girls’ story offers us a glimpse into a scenario where a technological innovation promised significant economic return, but its effects on the people who came into daily contact with it were unknown. In the course of pursuing the economic opportunity at hand, the humans doing the line work to produce value wound up doubling as lab rats in an unplanned experiment.

Today, regulations would prohibit a workplace that exposed workers to these hazards.

The unforeseen consequences of unplanned experiments

This week, the Verge’s Casey Newton published an article examining the lives of Facebook moderators, highlighting the toll taken on people whose job it is to handle disturbing content rapid-fire, on a daily basis. The employees at Cognizant, a company contracted by Facebook to scale the giant social network’s moderation workforce, make $15 an hour and are expected to make decisions on 400 posts a day at 95% accuracy. A drop in those numbers calls a moderator’s job into question. They get nine minutes of carefully monitored break time per day. The pay is even lower for Arabic-speaking moderators in other countries, who make less than $6 a day.

Facebook has 2.3 billion users worldwide. By the sheer size of the net being cast, moderators will inevitably encounter graphic violence, hate speech, and conspiracy theories. Cognizant knows this, and early training involves efforts to harden employees to what the job entails. After training, they’re off to the races.

Over time, the exposure is reported to cause a distorted sense of reality. Moderators begin developing PTSD-like symptoms. They describe trouble context-switching between the social norms of the workplace and the rest of their lives. They are legally barred from discussing the nature of their work with friends or loved ones. Some employees begin espousing the viewpoints of the conspiracy theories they’ve been hired to moderate. Coping mechanisms take the shape of dark humor (including jokes about suicide and racism), drugs, and risky sex with coworkers. Mental health counselors are available on-site, but their input boils down to making sure employees can keep doing the job, rather than concern for their well-being beyond the scope of the bottom line.

“Works as intended”

When Facebook first started building, they weren’t thinking about these problems. Today, the effects of global connectivity through a single, centralized platform, populated with billions of users and governed by an algorithm that dictates what those users see, are something we have no precedent for understanding. But as we begin the work of contending with the effects of technology-mediated communication at unprecedented scale, it’s important to identify a key factor in Facebook’s stewardship of their own platform: the system is working as intended. I’ve long noted that if scale is a priority, having garbage to clean up in an online network is a sign of success, because it means there are enough people to make garbage in the first place.

The very reality that human moderators need to do this work at such magnitude means Facebook is working extraordinarily well, for Facebook.

Let’s explore this for a moment. The platform’s primary mode has long been to assemble as many people as possible in one place and keep them there as long as possible. The company makes money by selling ads, so the number of users and the amount of time they spend on the site are their true north. The more people there are on the site, and the longer they spend there, the more opportunities for ad impressions, and the more money. They are incentivized to pursue this as thoroughly as possible, and under these strict parameters, any measure that results in more users and more engagement is a good one.

Strong emotional reactions tend to increase engagement. The case study of the network’s role in spreading the rumors that led to mob violence in Sri Lanka offers a potent look at how the company’s algorithms can exacerbate existing tensions. “The germs are ours, but Facebook is the wind,” said one person interviewed. So Facebook is incentivized to sign up as many users as possible and get them as riled up as possible, because that drives engagement, and thus profit. Some of the time, that produces exactly the kind of content moderators at Cognizant have to clean up. To keep this machine running, human minds need to be used as filters for psychologically toxic sludge.

Facebook could make structural shifts to the platform that would reduce the likelihood of disturbing content showing up in the first place. They could create distinct corners of the site where users go specifically to engage in certain activities (share their latest accomplishment, post cooking photos), rather than everyone swimming in the same amorphous soup. They could go back to affiliations with offline institutions, like universities, and make those smaller communities the default experience of the site. Or they could get more selective about whom they accept money from, and whom they allow to be targeted with ads. But I’m sure any one of these moves would cut their revenue by amounts that would boggle our minds. Facebook’s ambition for scale, and their need to maintain it now that they have it, are working against creating healthier experiences.

Like the Radium Girls, Facebook moderators are coming into daily contact with a barely understood new technology so that others may profit. As we begin to see the second-order effects and human costs of these practices and incentive systems, now is a good time to question scale as an inherent good in the business of the internet.