On Humane Trust & Safety and Support Teams

Recently I sat down with a leader at a tech company building a community platform to offer input on running Support and Trust & Safety teams. Afterward, I sent a follow-up email, then realized its contents might be useful to other folks in tech wrestling with the ethical quandary of managing teams tasked with cleaning up the internet. My advice has been posted here with their permission:

When we spoke I did a lot of rambling, because I have an excess of war stories rolling around in my head, and I realize that may have left you swimming. Below, I’ve distilled my thoughts into a handful of things worth acting on if you want to create a team doing Support and/or Trust & Safety which is designed to bypass many of the unhealthy dynamics usually baked into this work:
1) Pay well for work that doesn’t scale.
It sounded like you may be pushing to have Support be paid on the same scale as engineering, and if you’re already working on that, that’s amazing! Typically tech companies hate paying for labor that doesn’t produce scalable output; mushy human stuff is a necessary evil, and they pay as little as they can manage for it. This is a really short-sighted view, though. Software which has users is going to take work to maintain no matter what, and if you recognize the operational costs upfront, you’ll be helping avoid surprises and burnout for your entire company.
2) Prepare to increase the size of your Trust & Safety and Support teams proportionally with the scale of your userbase.
This follows from the same dynamics as #1, and is typically a hard pill to swallow. I’m not suggesting that Trust & Safety and Support will never be able to automate away parts of their work, or that folks on those teams shouldn’t be encouraged to do so as part of their jobs. But you should expect the load and complexity of issues to scale with the number of people using your software, especially as you add more social features. Keeping an ever-growing number of smart, capable people on hand, with enough spare bandwidth to think strategically and devise solutions for what they’re seeing on the frontlines, will ensure you’re generally not blindsided by your platform unleashing messes on the world.
3) Create a separate time off scheme for Trust & Safety and Support.
I recommend giving everyone doing this work 3 paid months off. This work is going to take a psychological toll, and people’s productivity is going to be affected. Usually this results in staff feeling terrible about themselves and papering it over, which is the first step on the road to burnout. But if acknowledgement and affordances are created for this outcome, and it’s treated as a normal thing that doesn’t make individuals failures and that the company has their backs on, it will increase people’s resilience by ensuring they don’t waste more bandwidth on shame.
4) Have Support and Trust & Safety drive platform-level changes.
Another enormous driver of burnout in this work is facing upsetting scenarios you have no opportunity to resolve. This creates lots of second-order trauma. Conversely, if Trust & Safety folks find themselves fielding the output of a huge platform abuse vector and devise a novel way to solve it, actually seeing it implemented is an incredible morale boost. It guards against learned helplessness, and helps your people stay in the game with you over the long haul.

After the Flood: Finding the Path to Healthy Communications Technology

The cost of information transmission has collapsed. As a result of the profit opportunity this presented, human interaction has been centralized in platforms of truly enormous scale. Centralization makes it possible for these platforms to monetize our clicks and eyeballs to the tune of billions of dollars, through billions of users. We are only beginning to see the effect of this on human brains.

So YouTube […] set a company-wide objective to reach one billion hours of viewing a day, and rewrote its recommendation engine to maximize for that goal.

[…]

Three days after Donald Trump was elected, Wojcicki convened her entire staff for their weekly meeting. One employee fretted aloud about the site’s election-related videos that were watched the most. They were dominated by publishers like Breitbart News and Infowars, which were known for their outrage and provocation.

YouTube Executives Ignored Warnings, Letting Toxic Videos Run Rampant (via Bloomberg)

This raises some inevitable concerns about how to handle information responsibly when it moves at 21st century speeds. If maximizing reach and reducing friction yields profit while compromising individual and societal health, then when is reach a liability and who bears the costs of its downsides? At what scale does reach become unsafe?

Developing a framework for safety under 21st century communication paradigms opens up several questions.

Throughput is the measure of data transfer in an information system. Is it possible to devise a means of quantifying the throughput moving through human culture, both today and historically? If so, what change over time would we observe?

If it were possible to quantify the throughput filtered collectively through human brains, would we find we’ve already exceeded the measure for safety, and how could we tell? Can a heuristic be developed for maximum safe reach?
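
One way to make that question concrete is to treat a person as a receiving channel and do back-of-envelope arithmetic. The sketch below is a toy model, not a measurement: every constant in it is a hypothetical placeholder I’ve invented for illustration, and the only output worth taking seriously is the ratio between eras, not the absolute numbers.

```python
# Toy model: a person as a receiving channel. Every constant here is a
# hypothetical placeholder chosen for illustration, not a measurement.

WORDS_PER_MESSAGE = 20   # assumed average length of one message
BITS_PER_WORD = 12       # rough Shannon-style estimate for English text

def daily_throughput_bits(messages_per_day: int) -> int:
    """Bits of message content one person filters in a day."""
    return messages_per_day * WORDS_PER_MESSAGE * BITS_PER_WORD

# Two hypothetical eras; what matters is the ratio, not the absolutes.
letters_era = daily_throughput_bits(2)    # a couple of letters a day
feed_era = daily_throughput_bits(500)     # posts, texts, headlines, replies

print(feed_era / letters_era)             # 250.0x under these assumptions
```

Even a crude model like this makes the shape of the question visible: whatever the real constants turn out to be, the multiplier over two centuries is enormous.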

Physical information systems have limits on what they can process, and when those limits are surpassed, items are dropped or queued. A webserver can only handle so many requests per second, and when that threshold is maliciously exceeded, we call it a denial of service attack. If we too are physical beings with physical limitations, why should we expect to be exempt from the limits seen in other information systems?
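
For a feel of what “dropped or queued” means mechanically, here’s a minimal sketch (the class and names are invented for illustration) of a server with a bounded queue: under the threshold, work waits its turn; past it, work is simply lost.

```python
# Minimal sketch of drop-or-queue behavior in a bounded system.
from collections import deque

class BoundedServer:
    def __init__(self, capacity: int):
        self.queue = deque()      # requests waiting to be processed
        self.capacity = capacity  # the system's physical limit
        self.dropped = 0          # work lost past the threshold

    def receive(self, request: str) -> bool:
        """Queue the request if there's room; otherwise drop it."""
        if len(self.queue) >= self.capacity:
            self.dropped += 1
            return False
        self.queue.append(request)
        return True

server = BoundedServer(capacity=100)
for i in range(150):              # a burst 50% beyond capacity
    server.receive(f"request-{i}")

print(server.dropped)             # 50 -- the excess doesn't wait, it vanishes
```

A denial-of-service attack is just this mechanism pushed past its threshold on purpose. The open question above is whether human attention behaves the same way.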

Computer networks are deliberately partitioned for safety and security. Should human networks be deliberately partitioned for the safety of our brains? Are there lessons we can learn from the domain of network administration?

Does decentralization inherently create safety? Are there specific design patterns that should be adopted or avoided to prevent replicating the problems seen in current social products?

Mark Zuckerberg, trying to get ahead of the inevitable, recently put out an article proposing a regulatory framework for entities like Facebook. In the US, previous regulation of media companies tried to limit consolidation, to keep monopolies from holding overwhelming sway over public opinion. The originators of that regulation could never have dreamed of the reach of a platform like Facebook, which is largely uncontested. Are there historical precedents for regulating media companies that are applicable to 21st century problems, even though those problems have a very different shape?

None of these questions has a clear, immediate, or straightforward answer. Still, as we grapple with the impacts of centralized communication platforms, they represent only the beginning of the hard problems we must confront to ensure we build communications technology responsibly.

From Point A to Chaos: The Inversion of Information Economics

If it’s really a revolution, it doesn’t take us from point A to point B, it takes us from point A to chaos.

Clay Shirky, 2005

In 2019, we reel from a series of improbable outcomes. Whether it’s the 2016 US election, Brexit, the resurgence of the theory that the earth is flat, or the decline of vaccination, turns which once seemed unthinkable have arrived in force. Culture war blossoms around developments which some see as progress, and others find threatening and absurd. This coincides with the rise of centralized communication platforms that reward compulsive engagement, indiscriminately amplifying the reach of compelling messages—without regard for accuracy or impact.

Historically, composition and distribution of information took significant effort on the part of the message’s sender. Today, the cost of information transfer has collapsed. As a result, the burden of communication has shifted off the sender and onto the receiver.

Interconnectedness via communications technology is helping to change social norms before our eyes, at a frame rate we can’t adjust to. If we want to understand why we’ve gone from point A to chaos, we need to start by examining what happens when the cost of communicating with each other, in Shirky’s words, “falls through the floor”.

A brief history of one-to-one communication

The year 1787 offers us another moment in time when the communications technology of the day stood to influence the direction of history. The Constitutional Convention was underway, and the former British colonies were voting on whether to adopt the hotly contested new form of government. Keenly aware that news of how each state fell would influence the behavior of those who had yet to cast their votes, Alexander Hamilton assured James Madison, his primary collaborator at the time, that he would pay the cost of fast riders to move letters between New York and Virginia, should either one ratify the Constitution.

The constraints of communications technology back then meant an event could occur in one location without people elsewhere hearing of it for days or weeks. For this single piece of information to be worth the cost of transit between Hamilton and Madison, nothing less than the future of the new republic had to be at stake.

From a logistical standpoint, moving information from one place to another required paper, ink, wax, a rider, and a horse. Latency was measured in days. Hamilton and Madison’s communications likely benefited from the postal system, which had emerged in 1775 to provide convenient, affordable courier service. Latency was still measured in days, but by then it was possible to batch efforts and share labor costs with other citizens.

Around 1830, messages grew faster, if not cheaper, with the advent of the electric telegraph. The telegraph allowed transmission of information across cities and eventually continents, with early transmission rates on the order of two minutes per character. The sender of a message was charged by the letter, and an operator was needed at each end to transcribe, transmit, receive, and deliver the message.

With the telephone, in 1876, it became possible to hold an object to your ear and hear a human voice transmitted in real time. The telephone required an operator to initiate the circuit for each conversation, and once they did, the back-and-forth could unfold without intermediaries. This dramatic acceleration from letters carried on horseback to the telephone took place in the space of 89 years. By the early 20th century, phone switching was automated, further reducing the cost of information exchange.

By the 2000’s, mobile phones and the internet enabled email and texting, and instantaneous communication was within reach for anyone in the world lucky enough to have access to these technologies in their early days on the consumer market. With these technologies, no additional labor is needed beyond the sender’s composition of the message and the receiver’s consideration of its contents. Automation handles encoding, transmission, relaying, delivery, and storage of the whole thing. The time between a message being written and a message being received has been reduced to mere seconds.

Constraints on communication at this phase come to be dictated by access to technology, rather than access to labor.

A brief history of one-to-many communication

While Alexander Hamilton wrote copious personal letters, he also leveraged the mass communications medium of his day, the press, to shape political dialogue. He was responsible for 51 of the 85 essays published in the Federalist Papers. By 1787, publishing had already benefited from the invention of the printing press. The production of the written word was no longer rate-limited by the capacity of scribes or clergy, or restricted by the church. Those who were educated enough to write, and connected enough to publish, could do so. Of course, only a select handful of people in the early United States met those criteria.

Samuel Adams was the son of a church deacon, a successful merchant, and a driving force in 1750s Boston politics. Benjamin Franklin apprenticed under his brother, a printer, and eventually went on to take up the trade, running multiple newspapers over his lifetime. In 1765, Samuel Adams falsely painted Thomas Hutchinson as a supporter of the Stamp Act in the press, leading a mob of arsonists to burn down Hutchinson’s house. Meanwhile, Franklin created a counterfeit newspaper claiming the British paid Native Americans to scalp colonists, which he then circulated in Europe to further the American cause. It’s not that fake news is a recent phenomenon; it’s that you used to need special access to distribute it.

Soon, other media emerged to broadcast ideas. By the 1930’s, radio was a powerful conduit for culture and news, carrying both current events and unique entertainment designed for the specific constraints of an audio-only format. Radio could move over vast distances, and it did so at the speed of light. At the same time, radio required specialist engineers to operate and maintain the expensive equipment needed to transmit its payload. It required more specialists to select and play the content people wanted to hear.

Television emerged using similar technology, with even more overhead. In addition to all the work needed for transmission, television required further specialists and elaborate equipment to capture a moving image.

Radio and television both operated over electromagnetic spectrum, which is prone to interference if not carefully managed. By necessity, spectrum is regulated, which creates scarcity, making the owners of broadcast companies powerful arbiters of the collective narrative.

So between print, radio and television, a handful of corporations determined what was true, what would be shared with the masses, and who was allowed to be part of the process.

Force multipliers in communication

Eventually, innovations in the technologies above began to cannibalize and build off of one another, helping the already declining cost of information transfer fall even faster.

By the late 1800’s, typewriters allowed faster composition of the written word and clearer interpretation for the recipient. By the late 1970’s, the electronic word processor used integrated circuits and electromechanical components to provide a digital display, allowing on-the-fly editing of text before it was printed.

Then, the 1980’s saw the rise of the personal computer, which absorbed the single-use word processor, folding it in and making it just another software application. For the first time, the written word was a stream of instantly visible, digital text, making the storage and transmission of thoughts and ideas easier than ever.

Alongside the PC, the emergence of packet-switched networks opened the door to fully-automated computer communications. This formed the backbone of the early internet, and services ranging from chat to newsgroups to the web.

The arrival of the open source software revolution around the year 2000 enabled unprecedented productivity for software teams. By making the building blocks of web applications free and modifiable for anyone, small teams could move quickly from concept to execution without having to sink time into the basic infrastructure common to any site. For example, in 2004, Facebook was built in a dorm room using Linux as the server operating system, Apache as its web server, MySQL for its database, and PHP as its server-side programming language. Facebook helped usher in the current era of centralized, corporate-controlled social software, and it was built on the back of open source.

The pattern seen in the evolution from printing press to home PC is repeated and supercharged when we encounter the smartphone. By 2010, smartphones paired the ability to record audio and video with a constant internet connection. Thanks to the combination of smartphones and social software, everyday consumers were granted the ability to capture, edit and distribute media with the same global reach as CNN circa 1990. This had meaningful impact during the protests against police violence in Ferguson, Missouri, in 2014. Local residents and citizen journalists streamed real-time, uncut video of events as they unfolded—without having to consult any television executives.

In the end, this is a story of labor savings. Today, benefits from compounding automation and non-scarce information technology resources, like open source code, have collapsed the amount of human labor needed to reach mass audiences. An individual can compose and transmit content to an audience of millions in an instant.

This leverage for communication does not have a historical precedent.

Dissolving norms

As the cost of information transfer keeps falling, structures and dynamics which once seemed solid have become vertiginously fluid.

In the pre-internet age, you had producers and you had consumers. Today, large-scale social platforms are simultaneously media channel and watering hole, and power users may shift between producer and consumer in a single session. The distinction between one-to-one and one-to-many communication has also become far less clear. A broadcast-style message may draw a public response from a passerby, catalyzing an interaction between the passerby and the original poster, with lurkers silently watching the exchange unfold. Later, the conversation may be resurfaced and re-broadcast by a third party.

The intent of our communications isn’t always fully known to us when we enact them, and the results can be disorienting. We’ve become increasingly accustomed to mumbling into a megaphone, and people may face lasting consequences for things they say online. Ease of distribution has also blurred the lines between public and private communication. In the past, even writing a letter to a single individual involved significant cost and planning. Today, the effort required to write a letter and to write an essay seen by millions is functionally identical, and basically free.

Meanwhile, professional broadcast networks are no longer the final arbiters of our collective narrative. Journalism used to be the answer to the question “How will society be informed?” In a world of television, radio, and newspaper, those who controlled the exclusive organs of media decided what the audience would see, and therefore what it meant to be informed. Defining our shared narratives is now a collaborative process, and the question of what is relevant has billions of judges able to weigh in. Today we have shifted, in An Xiao Mina’s words, from “broadcast consensus to digital dissensus”.

Uncharted waters

In 2019, we face an inversion of the economics of information. When the ability to send a message is a scarce resource, as it was in 1787, you’re less likely to use those mechanisms to transmit trivial updates. Today, the extreme ease of information transfer invites casualness which begets the inconsequential. Swimming in these waters is leaving us open to far more noise masquerading as signal than in eras past.

Many of us can attest that the time between considering what we want to say and getting to say it has shrunk to minutes or seconds, and the messages we send are increasingly frequent and bite-sized, thought out on-the-fly. When this dynamic compounds over time and spreads across the human culture, with both individuals and institutions taking part, we find ourselves experiencing the cognitive equivalent of a distributed denial-of-service attack through an endless torrent of “news,” opinion, analysis and comment. Just ask the Macedonian teenagers making bank churning out fake news articles.

To make sense of this, we’ll need design patterns, technologies, narratives, and maybe even whole disciplines which don’t exist yet. The decline of broadcast consensus brings an enormous, painful loss of clarity about our world, but it simultaneously creates opportunities for voices who were missing in eras past. We’ve sailed off the side of the map, into waters not yet charted. Now, we’re called on to relearn how to navigate, even as our instruments are rendered useless. And we need all the help we can get.


Facebook Moderation and the Unforeseen Consequences of Scale

Parable of the Radium Girls

In 1917, a factory owned by United States Radium in Orange, New Jersey hired workers to coat watchfaces with self-luminescent radium paint for military-issue, glow-in-the-dark watches. Two other factories soon followed in Ottawa, Illinois and Waterbury, Connecticut. The workers, mostly women, were instructed to point the tips of their paintbrushes by licking them. They were paid by the watchface, and told by their supervisors the paint was safe.

Evidence suggested otherwise. As employees began facing illness and death, US Radium initially rejected claims that radium exposure might have been more damaging than they’d first led workers to believe. A decade-long legal battle ensued, and US Radium eventually paid damages to their former employees and their families.

The Radium Girls’ story offers us a glimpse into a scenario where a technological innovation promised significant economic return, but its effects on the people who came into daily contact with it were unknown. In the course of pursuing the economic opportunity at hand, the humans doing the line work to produce value wound up doubling as lab rats in an unplanned experiment.

Today, regulations would prohibit a workplace that exposed workers to these hazards.

The unforeseen consequences of unplanned experiments

This week, The Verge’s Casey Newton published an article examining the lives of Facebook moderators, highlighting the toll taken on people whose job it is to handle disturbing content rapid-fire, on a daily basis. The employees at Cognizant, a company contracted by Facebook to scale the giant social network’s moderation workforce, make $15/hour and are expected to make decisions on 400 posts each day with 95% accuracy. A drop in numbers calls a mod’s job into question. They have 9 minutes per day of carefully monitored break time. The pay is even lower for Arabic-speaking moderators in other countries, who make less than six dollars per day.

Facebook has 2.3 billion global users. This means, by sheer size of the net being cast, moderators will encounter acts of graphic violence, hate speech, and conspiracy theories. Cognizant knows this, and early training for employees involves efforts to harden the individual to what the job entails. After training, they’re off to the races.

Over time, exposure is reported to cause a distorted sense of reality. Moderators begin developing PTSD-like symptoms. They describe trouble context-switching between the social norms of the workplace and the rest of their lives. They are legally enjoined from talking about the nature of their work with friends or loved ones. Some employees begin espousing the viewpoints of the conspiracy theories they’ve been hired to moderate. Coping mechanisms take the shape of dark humor, including jokes about suicide and racism, drugs, and risky sex with coworkers. There are mental health counselors available on-site; however, their input boils down to making sure the employee can continue doing the job, rather than concern for their well-being beyond the scope of the bottom line.

“Works as intended”

When Facebook first started building, they weren’t thinking about these problems. Today, the effects of global connectivity through a single, centralized platform, populated by billions of users, with an algorithm dictating what those users see, are something we have no precedent for understanding. However, as we begin the work of trying to contend with the effects of technology-mediated communication at unprecedented scale, it’s important to identify a key factor in Facebook’s stewardship of its own platform: the system is working as intended. I’ve long noted that if scale is a priority, having garbage to clean up in an online network is a sign of success, because it means there are enough people to make garbage in the first place.

The very reality that human moderators need to do this work at such magnitude means Facebook is working extraordinarily well, for Facebook.

Let’s explore this for a moment. The platform’s primary mode has long been to assemble as many people as possible in one place, and keep them there as long as possible. The company makes money by selling ads, so the number of users and the amount of time they spend on the site are its true north. The more people there are on the site, and the longer they spend there, the more opportunities for ad impressions, resulting in more money. Facebook is incentivized to pursue this as thoroughly as possible, and under these strict parameters, any measure which results in more users and more engagement is a good one.

Strong emotional reactions tend to increase engagement. The case study of the network’s role in the spreading of rumors which led to mob violence in Sri Lanka provides a potent look at how the company’s algorithms can exacerbate existing tensions. “The germs are ours, but Facebook is the wind,” said one person interviewed. So on the one hand, Facebook is incentivized to get as many users as possible and get them as riled up as possible, because that drives engagement, and thus profit. Some of the time, that will produce content like that which moderators at Cognizant need to clean up. To keep this machine running, human minds need to be used as filters for psychologically toxic sludge.

Facebook could make structural platform shifts which would reduce the likelihood of disturbing content showing up in the first place. They could create different corners of the site where users go specifically to engage in certain activities (share their latest accomplishment, post cooking photos), rather than everyone swimming in the same amorphous soup. They could go back to affiliations with offline institutions, like universities, and make your experience within those tribes the default experience of the site. Or they could get more selective about whom they accept money from, or whom they allow to be targeted for ads. But I’m sure any one of these moves would damage their revenues by numbers that would boggle our minds. Facebook’s ambition for scale, and their need to maintain it now that they have it, is working against creating healthier experiences.

Like the Radium Girls, Facebook moderators are coming into daily contact with a barely-understood new form of technology so that others may profit. As we begin to see the second order effects and human costs of these practices and incentive systems, now is a good time for scale to be questioned as an inherent good in the business of the internet.

Implementers and Integrators

Organizations need both Implementers and Integrators.

Implementers are those who specialize in nuts and bolts execution work. Migrating a database, designing a landing page, and running event logistics are all examples of implementation. Integrators observe disparate pieces of a system working in tandem, and discern ways of helping them work better together. This could mean noticing where teammates have a pattern of talking past each other and intervening to resolve confusion before the wrong thing gets built, or recognizing that two people want to develop similar skillsets and pairing them up to take an online course together.

In tech, we over-index on Implementers. I believe part of this comes from companies facing an existential chasm early on as a matter of course, one which must be crossed by getting V1 out the door, on pain of not having a company at all. At that point, implementing your way forward is really your only option. After you’ve implemented successfully enough to see another day, that experience stands as a powerful object lesson for your early team. A dopamine cycle forms around implementing, and the time you crossed the chasm becomes the stuff of shared legend.

As organizations age, the technical and human systems being maintained grow in size and complexity. The more pieces you have, the more different ways there are for them to malfunction. This is when the need for Integrators who can recombine or fine tune the pieces so they run together more powerfully, or reliably, or humanely, starts to really show.

This is tricky. Integration work can be a lot harder to account for. It requires observation before action, and often action takes the shape of nuanced adjustments which are only later felt system-wide. Frequently, an org under strain from growth will respond by seeking more Implementers, without realizing what they need is more Integrators. This is rational. We use our past experiences to inform our present strategies, and based on the past, adding more Implementers to make more things is how to get out of a pickle. Unfortunately, this old strategy applied to this new problem exacerbates the strain. An uptick in Implementers creates more complexity which needs to be managed, further increasing the need for Integrators. Contributing to the force of this tightening ratchet is the fact that Implementer work, with its tangible outputs, is vastly better understood, and almost always more rewarding in the short-term. Proper recognition of Integrator work requires a lot of faith, and interpersonal trust, and moving through uncertainty.

You might know Integrators under a different name: managers. Good management is more than bossing people around. A good manager bridges the divergence in people’s frames of reference, creating shared meaning so that more than one person can do productive work on the same problem. Inevitably, flat organizations realize they need management once they hit a certain scale. Without designated Integrators, it becomes unclear where individual contributors can make the best impact, to say nothing of where they go when things stop working. Meanwhile, managers who merely boss people around leave their people feeling helpless and blindsided as a matter of daily course, lacking a sense of the bigger cohesive picture and their role in influencing it.

Changing your toolset at a point when you’re facing high stress and high stakes is exceedingly difficult. When nothing is working, and the pressure is weighing on your chest, stopping to ask yourself and your collaborators why things feel so off, or whether you’re all solving the right problem, can seem like the most outrageous thing anyone could possibly do. Yet this sort of non-complementary behavior may be the best way to ensure integrity in the systems you’re all maintaining.

Exploring the Human Implications of Conway’s Law

Conway’s Law states that:

“organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.”

In other words, the communication patterns in your team are duplicated in your software. I recently had a chat in which my counterpart made the point that if we take Conway’s Law to be true, it should follow that the mental state of the individuals doing that communicating also has a profound effect on your software.

When dealing with networks, we most often focus on the links between the nodes. It’s the relationship between the pieces which produces a meta layer of dynamics to work with. Without connections, there is no network. Yet when dealing with networks of humans, the natural extension of Conway’s Law calls on us to examine the state of the nodes themselves.

In contrast to inanimate interactive entities, like servers, human nodes are neither uniform nor neutral. Much the way DNA contains a set of plans for how an organism’s development unfolds, humans are carriers of their own design patterns. The way these design patterns are expressed in an organization will vary based on things like people’s past experiences and identities, whether or not they perceive that their contributions are respected, and whether they’ve gotten enough sleep or calories, just to name a few. An individual human node will respond to stimulus differently based on the state of these factors, which shifts how they communicate on a given day. By this extension of Conway’s Law, that state is reflected in the software system they’re building. Then there’s a concurrent process running where they interact with other humans in your organization, each carrying design patterns of their own. This combination produces the state of your software.
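
To make the node-state idea tangible, here’s a toy simulation. The states and their effects are illustrative assumptions I’ve invented, nothing more: the same message relayed through a chain of human “nodes” comes out differently depending on the internal state of each node along the way.

```python
# Toy model: node state shapes what gets relayed. The states and their
# effects are illustrative assumptions, not findings.

def relay(message: str, node_state: str) -> str:
    """How a human 'node' retransmits a message depends on its state."""
    if node_state == "rested":
        return message                             # faithful relay
    if node_state == "stressed":
        words = message.split()
        return " ".join(words[: len(words) // 2])  # detail gets dropped
    return message.upper()                         # burned out: tone distorts

chain = ["rested", "stressed", "burned_out"]
msg = "the deadline moved and the schema needs one more review"
for state in chain:
    msg = relay(msg, state)

print(msg)  # "THE DEADLINE MOVED AND THE" -- shaped by every state en route
```

Whatever you think of the specifics, the mechanism is the point: the output of the network depends on the state of the nodes, not just the shape of the links between them.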

While preemptively controlling for every aspect of the communication between the humans designing a system seems unlikely, it stands to reason that optimizing for the well-being of the individuals doing the work can be a sort of resilience engineering. Things like proper compensation, respect for boundaries, a blameless culture, and clear opportunities for advancement create circumstances most likely to engender an open, well-regulated, constructive mental state. If Conway’s Law is right, maintenance on the state of the human nodes in a network paves the way for more constructive communication patterns, and better software.

Will voting functionality on Facebook solve anything?

Word came out earlier this week that Facebook is running an experiment, giving a small number of users in New Zealand upvote/downvote buttons on comments. I’m wondering what Facebook is looking to learn.

Upvotes and downvotes have been around since forever on gamified platforms like Reddit and Stack Overflow. Voting introduces a sense of right or wrong in a community. It quantifies the value of your participation, turning your popularity, or lack thereof, into something measurable. It’s opinionated. Fittingly, voting is a functionality which took root in technical, programming-focused, and gaming-adjacent communities.

Facebook has a whole different premise than gamified discussion sites do. They built their social network around sharing and staying in contact with loved ones. They made it frictionless to share anything about yourself, in the hopes that you would share everything about yourself. Facebook makes money by using the data they have about you to show you extremely tailored ads. What you see in your Facebook timeline is algorithmically generated and optimized for content which you are most likely to react to, fueling engagement. Facebook has, of course, been under scrutiny since news of the data leak to Cambridge Analytica and investigations into how the site has been used to organize local violence in Sri Lanka.

So on the one hand, you have platforms which are about getting people to post and to quantify the value of each other’s words (Reddit, Stack Overflow), and on the other, a social network which aims to make you observable and reactive (Facebook). And voting, a core functionality of one, is essentially being air-dropped onto the other. Where could that be helpful? Where could that be harmful?

There’s been a lot of talk about fake news lately. I tend to think the definition of “fake news” is far more slippery than most of us care to believe, but that’s a post for another time. The point is, there’s concern that there’s no way to discredit something posted in bad faith on Facebook, and voting is functionality which allows everyday users to do just that. I get why adding opinionated functionality might seem like the right counter-measure.

What I wonder is: will the mob mentality which tends to form around user voting ultimately help or harm the nature of the interactions taking place? When asked, a company spokesperson said “People have told us they would like to see better public discussions on Facebook, and want spaces where people with different opinions can have more constructive dialogue…Our hope is that this feature will make it easier for us to create such spaces, by ranking the comments that readers believe deserve to rank highest, rather than the comments that get the strongest emotional reaction.” The idea that the posts folks believe should rank highest won’t also be the ones that provoke the strongest emotional reaction runs counter to everything I’ve ever known about humans and keyboards.
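
The gap between those two ranking philosophies is easy to see in miniature. The sketch below contrasts a net-vote ordering against an any-reaction-counts ordering; both scoring rules and all the numbers are simplified stand-ins I made up, not Facebook’s actual (unpublished) algorithms.

```python
# Two ways to order the same comments: votes vs. raw engagement.
# Scoring rules and data are illustrative stand-ins only.

comments = [
    # (text, upvotes, downvotes, angry_reactions)
    ("measured correction with sources", 40, 5, 2),
    ("inflammatory hot take", 30, 25, 90),
    ("mild joke", 12, 2, 1),
]

def vote_score(c):
    _, up, down, _ = c
    return up - down            # what the spokesperson describes

def engagement_score(c):
    _, up, down, angry = c
    return up + down + angry    # every reaction counts, outrage included

print([c[0] for c in sorted(comments, key=vote_score, reverse=True)])
print([c[0] for c in sorted(comments, key=engagement_score, reverse=True)])
# The hot take wins under engagement and loses under votes -- unless
# voters themselves reward whatever provokes them most.
```

The comment in the last two lines is the crux: the experiment only changes outcomes if voters’ judgments diverge from their reactions.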

Furthermore, when a post gets heavily downvoted…what happens? Will there be someone on the other side, at Facebook, able to step in? I’m guessing not. Will there be anything that happens when a comment is heavily downvoted? Unclear. Is this all about making sure we just keep clicking things? Maybe.

All in all, this sounds like an interesting experiment and I’m glad to see Facebook do something. I really hope they speak publicly about their findings. But I also see how this feature could cause users to double down on their existing disagreements, grudges, and gripes.

Finally, even granting that this is an experiment, the measure strikes me as misguided because voting sets users up to reach for a goal which Facebook has not defined. Whether you’re trying to drop the best meme or the most articulately explained physics equation, seeking votes means you’re aspiring to be something valuable in the eyes of the group. What is it people are reaching for when they seek upvotes on their Facebook comments? What kind of discussion space is Facebook looking to create? How do the answers vary with the demographics and context of the folks doing the posting? Without answering these, Facebook is simply bolting another tunnel onto the multi-level hamster mansion, and hoping the novelty of its presence gets the critters to stop fighting.

Commit History

This is primarily an exercise in record keeping. Two years ago I wrote a post about burnout and went silent. Here are some things which have transpired since:

I found work which fits the parameters outlined here.

I published Contributions to put myself on the hook for figuring out more purposeful work, aiming at either of two broad categories: #1 enabling networks of people to help each other or #2 helping the technology industry be a better version of itself. I’ve now clocked nearly two years as a Community Manager at Stack Overflow. Both sets of parameters are being met surprisingly well.

Better attention management.

Our attention spans don’t scale to the size of the internet. In March of 2014 I took the month off Twitter, which gave me a chance to examine how it was re-patterning my brain. The silence was delicious, and the hiatus equipped me to make clearer decisions about where I spend my cognitive resources. My relationship to social media hasn’t been the same since.

The start of a thesis.

In May of 2015 I presented at CMX Summit East on the five traits of enduring communities. This was the first time I’d done any speaking since taking time away, and it was rewarding to lay down a cohesive set of ideas which were the product of several years of work. I also couldn’t have asked for a better place to do it. (To David, Carrie, Yrja: thank you!)

These are subtle but significant changes which come from focused effort. I look forward to continuing to put in the work.

Reset

About three weeks ago, I said goodbye to my team. I’ve been in need of a break, and after I spent a year and a half working on technical infrastructure for the dev community, the company is now increasing its focus on enterprise products. It was a good time for me to step back.

Since leaving, I’ve been able to think more about what the last few years of my working life have looked like. In the span of three years I’ve helped two companies move from early-stage to mid-stage. I’ve handled user shitstorms, overseen dozens of launches, pulled all-nighters, been through the fundraising process twice, managed people, turned customers into close friends, and kept countless moments of high emotion from tearing people apart. Along the way I developed a knack for breaking down silos between technical and non-technical teams. I love watching organizations grow and flourish. I’ve been humbled by what it is to be a manager, and to serve those who have entrusted some part of their career to me. This work is a privilege.

I’ll also be the first to say it. I’ve burnt myself out, hard.

I drove myself at unhealthy levels of exertion, too hard and for too long, in pursuit of the next milestone.

I was over-investing myself in the organizations I was part of. I was making my work the single point of failure for my ego.

After a while, my friends never saw me anymore, I forgot why I ever liked the things I liked, and crossing off virtually any to-do list item took an excruciating amount of effort.

Why did this finally dawn on me? Historically, one of my biggest assets has been my ability to think deeply about a messy problem and formulate a single response that makes things click into place. I’m also good at simulating other people’s experiences and modeling interactions. It’s part of what made me a good community manager. After driving myself too hard for too long, my ability to do those things dulled.

I also realized I wasn’t fully listening to the things people would say to me. I caught myself categorizing what kind of conversation I thought I was entering ahead of time, associating a few normal behaviors with that type of interaction, and going on autopilot. Upon realizing this, I responded by pushing myself harder in an attempt to recoup lost productivity. Soon the combined momentum and pressure were taking on a life of their own. I was doing the next thing that presented itself because it seemed easiest at the time, and I was too harried to synthesize what the pieces added up to. I was struggling to connect with people.


You own your tools

If I stop and think (which I haven’t done in years), I know damn well there’s a problem with all that.

The always-on, drive-yourself-into-the-ground, race-to-the-top-of-Hacker-News, pwn-Demo-Day approach is a specific tactic, not a long-term strategy. It’s meant to be deployed at certain critical points in a company’s life-cycle, assuming you’re trying to build a venture capital-scale business.

This tactic needs to be employed when:

1) you’re first starting out, and doing or failing to do certain things (making a product, getting money in the bank) determines whether or not you have a company at all.

2) your entire company is pushing to meet a discrete, time-sensitive deadline, the outcome of which has been deemed as pivotal as the very earliest stages of your business (in the style of #1).

That’s it. Those are the only times when the approach to work I was taking is appropriate. (If you’re running a lifestyle business, you’re playing a different game altogether, and more power to you.) And if you’ve convinced yourself that you’re always working within one of the two sets of parameters I described, thereby creating constant life-or-death urgency, you’re doing it wrong.

There are extenuating circumstances, but if you’ve made your working life into one long string of extenuating circumstances, you’re designing for diminishing returns and eventual failure. You can either become amazing at self-regulation and prioritization overnight, or pull the plug, come to a full stop, and engage in some serious self-assessment. Personally, the former wasn’t working for me, so I eventually chose the latter.


Well, what now?

It would be lovely if I could explain to you all the things I’ve figured out and the wisdom I’ve found. The reality is I’m still pretty turned around, and clarity takes a while to cultivate. Fortunately, I’m already feeling more like myself than I have in a long time. After just three weeks, the haze is clearing. I’m also resisting the urge to dive back into a full-time job at the first sign of relief. I can’t do my best work if I’m not a whole person, and I think I owe my future teammates that much.

My life’s work is too important to go through it mindlessly. That’s what I had started doing, and I don’t want to keep compromising. I view work as a calling, and jobs as individual vehicles that help materialize some piece of the puzzle. I’ve proven that I’m incapable of doing anything half-heartedly. That intensity lets me rally myself and those around me, but its dark side can get seriously bleak.

I need to learn better self-management, so I’m going to keep taking time. I’m going to keep spending long aimless afternoons in the park, reminding myself of what makes me smile, and letting the bits of understanding filter in, packet by packet. I’ll keep writing, I’ll keep talking, I’ll keep renewing friendships, and I’ll keep thinking about how to do work that pays respect to either of the problem sets I outlined here.

I’ll keep you posted.


So You Want to Work With the Developer Community?

When I transitioned into working with technical communities, I was pretty sure I was a weird outlier who was suddenly getting to explore my passion for developer culture. Then I noticed something funny: some of my peers followed the same track shortly after I did. Investment in developer tools has also been on the rise, so demand for this work looks like it’ll keep flowing.

If it’s going to be a trend, I’d like to help newcomers serve the developer community better and be more successful in their own right. Here are a few things I wish someone had told me.

Developer community management isn’t new

What is somewhat new is early-stage, venture-backed companies concerning themselves with it. For a long time, this was the domain of enterprise software giants. Community management has been core to successfully-run open source projects for as long as open source has existed, and in open source, community decisions and engineering decisions are often inextricably linked. Your developer-focused company might not be driven by open source specifically, but if you’re trying to facilitate any kind of collaboration, you’ll need to understand how open source works.

If you want to study some of the roots (and you do), go read about the Java Community Process, find out what Ubuntu’s community efforts entail, and get to know the backstory between Sun and Oracle. This may sound outlandishly academic and arcane, but I promise you this history is imprinted on the developer community’s collective subconscious and is affecting how technology gets made today. If you show up and behave as though there are no historical or cultural precedents, your efforts won’t go as far.

The incentives are different

If you’ve been working with a non-technical community, you’ve likely been investing time and effort into helping your community members feel close to your brand. Developers don’t care about brands as abstract entities. It will matter whether you connect with your community in the mediums they’re most comfortable in. It will matter whether or not you try to hard-sell them. If you’re a developer, everybody wants a piece of you, both in the positive sense of being able to set your price in the market, and in the negative sense of the commoditized, transactional ways people will approach you as a resource. The things that made your old community feel appreciated might not transfer. Developers want to build good systems and be recognized for their contributions. Prepare to understand and respect that.

It’s not a spectator sport

The only way to be a facilitator is to be a member. I don’t believe you can successfully work with the developer community if you’re not actually part of it. Building with code is the act of assembling individual actions into a process that can be repeated whenever desired. You should have at least some understanding of those components, of the technical facets of your specific ecosystem, and a few opinions on them. It’s far better to fail at hacking tiny things together than it is to not try at all. Prepare to show up to events, just be there, and get your hands dirty. It may be tempting to think that as a community manager you can just deal with the big broad picture. I believe that you need to understand the granular components in order to make your vision real.

Creating a space where people know and care about one another, and where interaction runs high, is still the name of the game, whether or not you work with the developer community. Just get ready for the social mores to look and feel different.