Categories
Platform Integrity, Social Software

Making Stack Overflow Work

A few weeks ago I stopped by the Social Design Club and gave folks a tour of the Stack Overflow/Stack Exchange Q&A engine. After a few years away, it was fascinating to dig into the details of an intricate system I helped maintain for a long time. The design decisions remain relevant to people working on social software today, so read on for lessons from one of the most successful communities on the internet.

Without an entire book, it would be hard to cover everything the platform does well, so treat these as some introductory highlights.

Preamble: “Collectively increasing the sum total of good programming knowledge in the world.”

According to one of its founders, this was the ultimate intent of Stack Overflow. Since 2008, the site has changed the process of software engineering so completely that it's easy to forget things were ever done any other way, but useful information for debugging your code used to be a scarce resource on the internet. If you were trying to solve a coding problem, the help you needed was probably locked up in an out-of-date book somewhere, buried in a forum post interspersed with spam way down the page, or, worst of all, on Experts Exchange.

Experts Exchange was a technical Q&A site. Their content often came up in search engine results, and when you clicked through, they threw a paywall in your face. Sometimes you would pay to see the content, only to find it didn’t solve your problem. The affront of Experts Exchange to developers everywhere was much of the animating force behind Stack Overflow’s creation.

Finally, it’s important to realize that site creators Jeff Atwood and Joel Spolsky were celebrity programmer bloggers for years before they teamed up on their shared project. At a time when developers had less clout than they do today, Jeff and Joel were talking about how to get better at the craft of software engineering, and advocating for greater respect for developers in the workplace. When Stack Overflow showed up, it presented an opportunity for lots of programmers to collaborate on a community with their heroes.

But how do you tackle a goal as big as the one above?

An MMO for Creating High Quality Artifacts

Perhaps the most remarkable thing about Stack Overflow, and the wider Stack Exchange network, is how it incentivized a ridiculous number of humans to behave in relatively constructive ways towards a shared goal on the internet. I believe this can be attributed to 3 major factors:

  1. a clearly defined shared identity
  2. tons of effort engaging and interacting with the community, at least early on
  3. brilliantly designed game dynamics

All of these were essential, but here we focus on #3.

Questions & Answers as the prime information type

If you glance at this image, it’s really clear what the folks who made the site want you to focus on.

From the Stack Exchange Welcome Tour

Everything that happens on Stack Overflow, and throughout the Q&A network, is meant to be in service to the creation of questions and answers. Fittingly, those get prime real estate in the UI. There’s other stuff going on, but it’s all smaller gears running in the background in service to the larger goal.

Community-defined heuristics for desired content

A group is defined as much by who they’re not as by who they are, and the early Stack Overflow community knew this. Folks didn’t want just any Q&A on their site. Case in point:

“How is babby formed?”, a Yahoo! Answers post often cited as the antithesis of Stack Overflow’s aspirations.

From jump, the intent was to help programmers be better programmers, and idle discussion was discouraged. Armed with this clarity, founders, community members, and staff engaged in countless hours of arguing, debating, rehashing, and consensus-building over the years about what kinds of posts were a good fit for the site (this tips into #2 above). These conversations eventually coalesced into the Help Center, which provides rigorous, thoughtful heuristics for what good and bad questions and answers look like.

In practice, the community came to reward posts which:

  • Demonstrate what the asker has already tried
  • Provide enough context for others to easily repro the issue
  • Include code snippets
  • Are well formatted, in complete sentences, free of typos
  • Are interesting for experienced developers to understand and solve

Voting

The more highly voted an answer, the higher it is on the page. This means visibility is dictated by perceived helpfulness, not chronology. As a user, upvotes on your content give you rep (reputation points), creating a persistent, quantified standing within the community.
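If you're curious how little machinery the core rule takes, here's a minimal sketch in Python. The field names are hypothetical rather than Stack Overflow's actual schema; the point is that position on the page falls out of net score, with chronology demoted to a tiebreaker.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Answer:
    body: str
    upvotes: int = 0
    downvotes: int = 0
    posted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @property
    def score(self) -> int:
        # Net helpfulness as judged by the community.
        return self.upvotes - self.downvotes

def rank(answers: list[Answer]) -> list[Answer]:
    # Highest score first; the older post wins ties.
    return sorted(answers, key=lambda a: (-a.score, a.posted_at))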

Privileges

As you contribute more posts the community approves of, you climb the ranks in rep. This unlocks privileges which allow you to engage in various moderation and maintenance tasks.

Privileges listed on the Stack Overflow Help Center.

The beauty of this is that the people who've demonstrated the most investment in the community are the ones empowered to help care for it.
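Mechanically, the privilege ladder is just a table of reputation thresholds. Here's a toy sketch; the thresholds below are illustrative of the real ladder rather than copied from it, and the Help Center has the authoritative list.

# Illustrative thresholds only; see the Help Center for the real ladder.
PRIVILEGES = [
    (15, "flag posts"),
    (125, "vote down"),
    (2_000, "edit without review"),
    (3_000, "cast close and reopen votes"),
    (10_000, "access moderation tools"),
]

def unlocked(rep: int) -> list[str]:
    # Everything a user with this much rep is trusted to do.
    return [name for threshold, name in PRIVILEGES if rep >= threshold]

print(unlocked(2_500))
# -> ['flag posts', 'vote down', 'edit without review']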

Flagging

Posts are flagged and potentially closed when they’re unclear, too broad, or simply off-topic, maintaining the focus and quality of the site. 

This moderation functionality is a natural pairing with community-defined content heuristics. To say that the nature and usage of these tools is "contested" would be an understatement.

Features for clarifying and improving posts

When you’re creating a shared knowledge base, the ability to take existing posts and make them better is key. This activity is supported in two ways:

1. Edits

Anyone can edit a post; however, edits from users with less than 2,000 rep must be approved by reviewers before they go live.

The on-site edit display is similar to a git diff.

2. Comments

The smaller, indented text below is a comment. They are temporary “Post-It notes” on questions and answers, intended for clarification and constructive criticism.

They can receive votes but generate no rep.

Moderator Elections

2020 Stack Overflow Moderator Election results.

Stack Overflow and Stack Exchange run on the labor of volunteer moderators, and the community appoints those leaders via a thorough democratic process. Elections run on OpaVote and use a single transferable vote (STV) system. More background can be found here.
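For the mechanically inclined: the single-seat special case of STV is instant-runoff voting, and a toy count fits in a few lines of Python. Real elections (and OpaVote) handle ties, exhausted ballots, and multiple seats with far more care; this sketch only shows the transfer mechanic.

from collections import Counter

def irv_winner(ballots: list[list[str]]) -> str:
    # Each ballot lists candidates in preference order.
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot toward its highest-ranked surviving candidate.
        tally = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in candidates:
                    tally[choice] += 1
                    break
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):
            return leader  # majority reached
        # Drop the weakest candidate; their ballots transfer next round.
        candidates.remove(min(tally, key=tally.get))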

Somehow, it worked

Stack Overflow's gamification of expert knowledge creation has been a resounding success. Today, nearly every developer in the world has a Stack Overflow tab open at work and, as of this writing, there are 176 Q&A sites in the Stack Exchange network. But communities are never done. With scale and stellar metrics come increasingly complex problems. As time goes on, there are messy questions of who the system was designed for, and what kind of consideration is owed to the many volunteers doing the lion's share of the work maintaining it.

Thanks to Freyja and Joe for having me as a speaker, and prompting me to revisit this very special machine.

Categories
Platform Integrity

On Humane Trust & Safety and Support Teams

Recently I sat down with a leader at a tech company building a community platform to offer input on running Support and Trust & Safety teams. Later, I sent a followup email and then realized its contents may be useful to other folks in tech wrestling with the ethical quandary of managing teams tasked with cleaning up the internet. My advice has been posted here with their permission:

When we spoke I did a lot of rambling, because I have an excess of war stories rolling around in my head, but I realize that may have left you swimming. Below, I've parsed my thoughts into a handful of things I think are worth trying to act on if you want to create a team doing Support and/or Trust & Safety which is designed to bypass many of the unhealthy dynamics that are usually baked into this work:
1) Pay well for work that doesn’t scale.
It sounded like you may be pushing to have Support be paid on the same scale as engineering. That’s really solid. Typically tech companies hate paying for labor that doesn’t produce scalable output; mushy human stuff is a necessary evil, and they pay as little as they can manage for it. This is really short-sighted, though. Software which has users is going to take work to maintain no matter what, and if you recognize the operational costs upfront, you’ll be helping avoid surprises and burnout for your entire company later.
2) Prepare to increase the size of your Trust & Safety and Support teams proportionally with the scale of your userbase.
This comes from similar dynamics as #1, and is typically really hard to swallow. I'm not suggesting that Trust & Safety and Support will never be able to automate away parts of their work, or that folks on those teams shouldn't be encouraged to do so as part of their jobs. But you should expect that the load and the complexity of issues will scale proportionally to the number of people using your software, especially as you add more social features. Having an ever-growing number of smart, capable people on hand, with enough extra bandwidth to think strategically and devise solutions for the stuff they're seeing on the frontlines, will ensure you're generally not blindsided by your platform unleashing messes on the world.
3) Create a separate time off scheme for Trust & Safety and Support.
I recommend giving everyone doing this work 3 paid months off. This work is going to take a psychological toll, and people's productivity is going to be affected. Usually this results in staff feeling terrible about themselves and papering it over, which is the first step on the road to burnout. But if you acknowledge this outcome and create affordances for it, treating it as a normal thing that doesn't make individuals failures and that the company has their backs on, you will increase people's resilience by ensuring they don't waste more bandwidth on shame.
4) Have Support and Trust & Safety driving platform-level changes.
Another enormous driver of burnout in this work is facing upsetting scenarios which you have no opportunity to resolve. This creates lots of second-order trauma. Conversely, if Trust & Safety folks find themselves fielding the output of a huge platform abuse vector and they think of novel ways to solve it, actually seeing those solutions implemented is an incredible morale boost which will guard against learned helplessness, and help your people stay in the game with you over the long haul.
Categories
Uncategorized

After the Flood: Finding the Path to Healthy Communications Technology

The cost of information transmission has collapsed. As a result of the profit opportunity this presented, human interaction has been centralized in platforms of truly enormous scale. Centralization makes it possible for these platforms to monetize our clicks and eyeballs to the tune of billions of dollars, through billions of users. We are only beginning to see the effect of this on human brains.

So YouTube […] set a company-wide objective to reach one billion hours of viewing a day, and rewrote its recommendation engine to maximize for that goal.

[…]

Three days after Donald Trump was elected, Wojcicki convened her entire staff for their weekly meeting. One employee fretted aloud about the site’s election-related videos that were watched the most. They were dominated by publishers like Breitbart News and Infowars, which were known for their outrage and provocation.

YouTube Executives Ignored Warnings, Letting Toxic Videos Run Rampant (via Bloomberg)

This raises some inevitable concerns about how to handle information responsibly when it moves at 21st century speeds. If maximizing reach and reducing friction yields profit while compromising individual and societal health, then when is reach a liability and who bears the costs of its downsides? At what scale does reach become unsafe?

Developing a framework for safety under 21st century communication paradigms opens up several questions.

Throughput is the measure of data transfer in an information system. Is it possible to devise a means of quantifying throughput through human culture, both today and historically? If so, what change over time would we observe?

If we could quantify the throughput filtered collectively through human brains, would we find we've already exceeded a safe measure, and how could we tell? Can a heuristic be developed for maximum safe reach?

Physical information systems have limits to what they can process, and when those limits are surpassed, items are dropped or queued. A webserver can only handle so many requests per second, and when that threshold is maliciously exceeded, we call it a denial of service attack. If we are also physical beings with physical limitations, why should we be exempt from the limitations seen in other information systems?
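The mechanism servers use to enforce those limits is worth holding in mind as a metaphor. Here's a sketch of a token-bucket rate limiter in Python, the classic way to shed load once requests exceed a budget. Our brains ship with no such governor; the question above asks whether we should build one.

import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, never beyond capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # within budget: serve the request
        return False      # over budget: shed the request

limiter = TokenBucket(rate=100, capacity=200)  # ~100 requests/second, bursts to 200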

Computer networks are deliberately partitioned for safety and security. Should human networks be deliberately partitioned for the safety of our brains? Are there lessons we can learn from the domain of network administration?

Does decentralization inherently create safety? Are there specific design patterns that should be adopted or avoided to prevent replicating the problems seen in current social products?

Mark Zuckerberg, trying to get ahead of the inevitable, recently put out an article proposing a regulatory framework for entities like Facebook. In the US, previous regulation on media companies tried to limit consolidation to mitigate monopolies holding overwhelming sway over public opinion. The originators of that regulation could never have dreamed of a platform like Facebook, whose reach is largely uncontested. Are there historical precedents for regulating media companies which are applicable to 21st century problems, even though they have a very different shape?

None of these questions has clear, immediate, or straightforward answers. Still, as we grapple with the impacts of centralized communication platforms, they represent only the beginning of the hard problems we must work through to ensure we build communications technology responsibly.

Categories
Uncategorized

From Point A to Chaos: The Inversion of Information Economics

If it’s really a revolution, it doesn’t take us from point A to point B, it takes us from point A to chaos.

Clay Shirky, 2005

In 2019, we reel from a series of improbable outcomes. Whether it’s the 2016 US election, Brexit, the resurgence of the theory that the earth is flat, or the decline of vaccination, turns which once seemed unthinkable have arrived in force. Culture war blossoms around developments which some see as progress, and others find threatening and absurd. This coincides with the rise of centralized communication platforms that reward compulsive engagement, indiscriminately amplifying the reach of compelling messages—without regard for accuracy or impact.

Historically, composition and distribution of information took significant effort on the part of the message’s sender. Today, the cost of information transfer has collapsed. As a result, the burden of communication has shifted off the sender and onto the receiver.

Interconnectedness via communications technology is helping to change social norms before our eyes, at a frame rate we can’t adjust to. If we want to understand why we’ve gone from point A to chaos, we need to start by examining what happens when the cost of communicating with each other falls through the floor.

A brief history of one-to-one communication

The year 1787 offers us another moment in time when the communications technology of the day stood to influence the direction of history. The Constitutional Convention had concluded, and the former British colonies were voting on whether to adopt the hotly contested new form of government. Keenly aware that news of how each state fell would influence the behavior of those who had yet to cast their votes, Alexander Hamilton assured James Madison, his primary collaborator at the time, that he would pay the cost of fast riders to move letters between New York and Virginia, should either one ratify the Constitution.

The constraints of communications technology back then meant it was possible for an event to occur in one location without people elsewhere hearing of it for days or weeks. For this single piece of information to be worth the cost of transit between Hamilton and Madison, nothing less than the future of the new republic had to be at stake.

From a logistical standpoint, moving information from one place to another required paper, ink, wax, a rider, and a horse. Latency was measured in days. Hamilton and Madison's communications likely benefited from the postal system, which had emerged in 1775 to provide convenient, affordable courier service. Latency was still measured in days, but by then it was possible to batch efforts and share labor costs with other citizens.

Around the 1830s, messages grew faster, if not cheaper, with the advent of the electric telegraph. The telegraph allowed transmission of information across cities and eventually continents, with early systems crawling along at a rate of two minutes per character. The sender of a message was charged by the letter, and an operator was needed at each end to transcribe, transmit, receive, and deliver the message.

With the telephone, in 1876, it became possible to hold an object to your ear and hear a human voice transmitted in real time. The telephone required an operator to initiate the circuit needed for each conversation, but once they did, back-and-forth could unfold without intermediaries. This dramatic acceleration from letters carried on horseback to the telephone took place in the space of 89 years. By the early 20th century, phone switching was automated, further reducing the cost of information exchange.

By the 2000's, mobile phones and the internet enabled email and texting, and instantaneous communication was within reach for anyone in the world lucky enough to have access to these technologies in their early days on the consumer market. For these, no additional labor is needed beyond the sender's composition of the message and the receiver's consideration of its contents. Automation handles encoding, transmission, relaying, delivery, and storage of the whole thing. The time between a message being written and a message being received has been reduced to mere seconds.

Constraints to communication at this phase come to be dictated by access to technology, rather than access to labor.

A brief history of one-to-many communication

While Alexander Hamilton wrote copious personal letters, he also leveraged the mass communications medium of his day, the press, to shape political dialogue. He was responsible for 51 of the 85 essays published in the Federalist Papers. By 1787, publishing had already benefited from the invention of the printing press. The production of the written word was no longer rate-limited by the capacity of scribes or clergy, or restricted by the church. Those who were educated enough to write, and connected enough to publish, could do so. Of course, only a select handful of people in the early United States met those criteria.

It's not that fake news is a recent phenomenon, it's that you used to need special access to distribute it. Samuel Adams was the son of a church deacon, a successful merchant, and a driving force in 1750s Boston politics. Benjamin Franklin apprenticed under his brother, a printer, and eventually went on to take up the trade, running multiple newspapers over his lifetime. In 1765, Samuel Adams falsely painted Thomas Hutchinson as a supporter of the Stamp Act in the press, leading a mob of arsonists to burn down Hutchinson's house. Meanwhile, Franklin created a counterfeit newspaper claiming the British paid Native Americans to scalp colonists, which he then circulated in Europe to further the American cause.

Soon, other media emerged to broadcast ideas. By the 1930’s, radio was a powerful conduit for culture and news, carrying both current events and unique entertainment designed for the specific constraints of an audio-only format. Radio could move over vast distances, and it did so at the speed of light. At the same time, radio required specialist engineers to operate and maintain the expensive equipment needed to transmit its payload. It required more specialists to select and play the content people wanted to hear.

Television emerged using similar technology, with additional overhead. As well as all the work needed for transmission, television required additional specialists and elaborate equipment to capture a moving image.

Radio and television both operated over electromagnetic spectrum, which is prone to interference if not carefully managed. By necessity, spectrum was regulated; regulation created scarcity, and scarcity made the owners of broadcast companies powerful arbiters of the collective narrative.

So between print, radio and television, a handful of corporations determined what was true, what would be shared with the masses, and who was allowed to be part of the process.

Force multipliers in communication

Eventually, innovations in the technologies above began to cannibalize and build off of one another, helping the already declining cost of information transfer fall even faster.

By the late 1800’s, typewriters allowed faster composition of the written word and clearer interpretation for the recipient. By the late 1970’s, the electronic word processor used integrated circuits and electromechanical components to provide a digital display, allowing on-the-fly editing of text before it was printed.

Then, the 1980's saw the rise of the personal computer, which absorbed the single-use device of the word processor, folding it in and making it just another software application. For the first time, the written word was a stream of instantly visible, digital text, making the storage and transmission of thoughts and ideas easier than ever.

Alongside the PC, the emergence of packet-switched networks opened the door to fully-automated computer communications. This formed the backbone of the early internet, and services ranging from chat to newsgroups to the web.

The arrival of the open source software revolution around the year 2000 enabled unprecedented productivity for software teams. By making the building blocks of web applications free and modifiable for anyone, it let small teams move quickly from concept to execution without having to sink time into the basic infrastructure common to any site. For example, in 2004, Facebook was built in a dorm room using Linux as the server operating system, Apache as its web server, MySQL for its database, and PHP as its server-side programming language. Facebook helped usher in the current era of centralized, corporate-controlled, modern social software, and it was built on the back of open source.

The pattern seen in the evolution from printing press to home PC is repeated and supercharged when we encounter the smartphone. By 2010, smartphones paired the ability to record audio and video with a constant internet connection. Thanks to the combination of smartphones and social software, everyday consumers were granted the ability to capture, edit and distribute media with the same global reach as CNN circa 1990. This had meaningful impact during the protests against police violence in Ferguson, Missouri, in 2014. Local residents and citizen journalists streamed real-time, uncut video of events as they unfolded—without having to consult any television executives.

In the end, this is a story of labor savings. Today, benefits from compounding automation and non-scarce information technology resources, like open source code, have collapsed the amount of human labor needed to reach mass audiences. An individual can compose and transmit content to an audience of millions in an instant.

This leverage for communication does not have a historical precedent.

Dissolving norms

As the cost of information transfer falls ever faster, structures and dynamics which once seemed solid have become vertiginously fluid.

In the pre-internet age, you had producers and you had consumers. Today, large-scale social platforms are simultaneously media channel and watering hole, and power users may shift between producer and consumer in a single session. The distinction between one-to-one and one-to-many communication has also become far less clear. A broadcast-style message may draw a public response from a passerby, catalyzing an interaction between the passerby and the original poster, with lurkers silently watching the exchange unfold. Later, the conversation may be resurfaced and re-broadcast by a third party.

The intent of our communications also isn't always fully known to us when we enact them, and the results can be disorienting. We've become increasingly accustomed to mumbling into a megaphone, and people may face lasting consequences for things they say online. Ease of distribution has also blurred the lines between public and private communication. In the past, even the act of writing a letter to a single individual involved significant costs and planning. Today, the effort required to write a letter and to write an essay seen by millions is functionally identical—and basically free.

Meanwhile, professional broadcast networks are no longer the final arbiters of our collective narrative. Journalism used to be the answer to the question "How will society be informed?" In a world of television, radio, and newspapers, those who controlled the exclusive organs of media decided what the audience would see, and therefore what it meant to be informed. Defining our shared narratives is now a collaborative process, and the question of what is relevant has billions of judges able to weigh in. Today we have shifted, according to An Xiao Mina, from "broadcast consensus to digital dissensus".

Uncharted waters

In 2019, we face an inversion of the economics of information. When the ability to send a message is a scarce resource, as it was in 1787, you’re less likely to use those mechanisms to transmit trivial updates. Today, the extreme ease of information transfer invites casualness which begets the inconsequential. Swimming in these waters is leaving us open to far more noise masquerading as signal than in eras past.

Many of us can attest that the time between considering what we want to say and getting to say it has shrunk to minutes or seconds, and the messages we send are increasingly frequent and bite-sized, thought out on the fly. When this dynamic compounds over time and spreads across human culture, with both individuals and institutions taking part, we find ourselves experiencing the cognitive equivalent of a distributed denial-of-service attack through an endless torrent of "news," opinion, analysis, and comment. Just ask the Macedonian teenagers making bank churning out fake news articles.

To make sense of this, we need new design patterns, technologies, narratives, and disciplines. The decline of broadcast consensus leaves us grappling with a painful loss of clarity, yet it simultaneously creates opportunities for voices who were missing in eras past. We’ve sailed off the side of the map, into waters not yet charted. Now, we’re called on to relearn how to navigate, even as our instruments are rendered useless. And we need all the help we can get.

Categories
Uncategorized

Facebook Moderation and the Unforeseen Consequences of Scale

Parable of the Radium Girls

In 1917, a factory owned by United States Radium in Orange, New Jersey hired workers to paint watchfaces with self-luminescent radium paint for military-issue, glow-in-the-dark watches. Two other factories soon followed in Ottawa, Illinois and Waterbury, Connecticut. The workers, mostly women, were instructed to point the tip of their paint brushes by licking them. They were paid by the watchface, and told by their supervisors the paint was safe.

Evidence suggested otherwise. As employees began facing illness and death, US Radium initially rejected claims that radium exposure might have been more damaging than they’d first led workers to believe. A decade-long legal battle ensued, and US Radium eventually paid damages to their former employees and their families.

The Radium Girls’ story offers us a glimpse into a scenario where a technological innovation promised significant economic return, but its effects on the people who came into daily contact with it were unknown. In the course of pursuing the economic opportunity at hand, the humans doing the line work to produce value wound up doubling as lab rats in an unplanned experiment.

Today, regulations would prohibit a workplace that exposed workers to these hazards.

The unforeseen consequences of unplanned experiments

This week, The Verge's Casey Newton published an article examining the lives of Facebook moderators, highlighting the toll taken on people whose job it is to handle disturbing content rapid-fire, on a daily basis. The employees at Cognizant, a company contracted by Facebook to scale the giant social network's moderation workforce, make $15/hour and are expected to review 400 posts each day at 95% accuracy. A drop in numbers calls a mod's job into question. They have 9 minutes/day of carefully monitored break time. The pay is even lower for Arabic-speaking moderators in other countries, who make less than six dollars per day.

Facebook has 2.3 billion global users. This means, by sheer size of the net being cast, moderators will encounter acts of graphic violence, hate speech, and conspiracy theories. Cognizant knows this, and early training for employees involves efforts to harden the individual to what the job entails. After training, they’re off to the races.

Over time, exposure is reported to cause a distorted sense of reality. Moderators begin developing PTSD-like symptoms. They describe trouble context-switching between the social norms of the workplace and the rest of their lives. They are legally enjoined from talking about the nature of their work with friends or loved ones. Some employees begin espousing the viewpoints of the conspiracy theories they've been hired to moderate. Coping mechanisms take the shape of dark humor, including jokes about suicide and racism, drugs, and risky sex with coworkers. There are mental health counselors available on-site; however, their input boils down to making sure employees can continue doing the job, rather than concern for their well-being beyond the scope of the bottom line.

“Works as intended”

When Facebook first started building, they weren't thinking about these problems. Today, the effects of global connectivity through a single, centralized platform, populated with billions of users, with an algorithm dictating what those users see, are something we have no precedent for understanding. However, as we begin the work of trying to contend with the effects of technology-mediated communication at unprecedented scale, it's important to identify a key factor in Facebook's stewardship of their own platform: the system is working as intended. I've long noted that if scale is a priority, having garbage to clean up in an online network is a sign of success, because it means there are enough people to make garbage in the first place.

The very reality that human moderators need to do this work at such magnitude means Facebook is working extraordinarily well, for Facebook.

Let’s explore this for a moment. The platform’s primary mode has long been to assemble as many people as possible in one place, and keep them there as long as possible. The company makes money by selling ads, so number of users and quantity of time on the site is their true north. The more people there are on the site, and the longer they spend there, the more opportunities for ad impressions, resulting in more money. They are incentivized to pursue this as thoroughly as possible, and under these strict parameters, any measure which results in more users and more engagement is a good one.

Strong emotional reactions tend to increase engagement. The case study of the network’s role in the spreading of rumors which led to mob violence in Sri Lanka provides a potent look at how the company’s algorithms can exacerbate existing tensions. “The germs are ours, but Facebook is the wind,” said one person interviewed. So on the one hand, Facebook is incentivized to get as many users as possible and get them as riled up as possible, because that drives engagement, and thus profit. Some of the time, that will produce content like that which moderators at Cognizant need to clean up. To keep this machine running, human minds need to be used as filters for psychologically toxic sludge.

Facebook could make structural platform shifts which would reduce the likelihood of disturbing content showing up in the first place. They could create different corners of the site where users go specifically to engage in certain activities (share their latest accomplishment, post cooking photos), rather than everyone swimming in the same amorphous soup. They could go back to affiliations with offline institutions, like universities, and make your experience within these tribes be the default experience of the site. Or they could get more selective about who they accept money from, or whom they allow to be targeted for ads. But I’m sure any one of these moves would damage their revenues at numbers that would boggle our minds. Facebook’s ambition for scale, and their need to maintain it now that they have it, is working against creating healthier experiences.

Like the Radium Girls, Facebook moderators are coming into daily contact with a barely-understood new form of technology so that others may profit. As we begin to see the second order effects and human costs of these practices and incentive systems, now is a good time for scale to be questioned as an inherent good in the business of the internet.

Categories
Technical Management

Implementers and Integrators

Organizations need both Implementers and Integrators.

Implementers are those who specialize in nuts and bolts execution work. Migrating a database, making color and font choices for a landing page, and managing event logistics are all examples of implementation. By contrast, integrators observe disparate pieces of a system working in tandem, and discern ways of helping them function better together. This could mean noticing where teammates have a pattern of talking past each other and intervening to resolve confusion before the wrong thing gets built, or recognizing that two people want to develop similar skillsets and pairing them up to take a course together.

In tech, we index heavily on Implementers. I believe part of this comes from most companies initially facing an existential chasm as a matter of course, one that gets crossed by getting V1 out the door. At this point, implementing your way forward is really your only option. After you've implemented successfully enough to see another day, that experience stands as a powerful object lesson for your early team. The time you crossed the chasm becomes the stuff of shared legend.

As organizations age, the technical and human systems being maintained grow in size and complexity. The more pieces you have, the more different ways there are for them to malfunction. This is when the need for Integrators who can recombine or fine tune the pieces so they run together more powerfully, or reliably, or humanely, really emerges.

Recognizing and responding to this shift is tricky. Integration work can be a lot harder to account for, and therefore harder to set up as a function. Effective Integrator work also requires observation before action, and the action often takes the shape of nuanced adjustments which are only felt throughout the system later on. Sometimes, an org under strain from growth will respond by adding Implementers, without realizing what they need is more Integrators. This is rational. We use our past experiences to inform our present strategies, and based on the past, adding more Implementers is the way to move forward. Unfortunately, this old strategy applied to this new problem can exacerbate the strain. An uptick in Implementers creates more complexity which needs to be managed, further increasing the need for Integrators. Contributing to the force of this tightening ratchet is the fact that Implementer work, with its tangible outputs, is vastly better understood, and almost always more rewarding in the immediate term. Properly establishing Integrator work within your org requires a lot of faith, interpersonal trust, and moving through uncertainty.

You might know Integrators under a different name: managers. Good management is more than bossing people around. A good manager bridges the divergence in people's frames of reference, creating shared meaning so that more than one person can do productive work on the same problem. Inevitably, flat organizations realize they need management once they hit a certain scale. Without designated Integrators, it becomes unclear where individual contributors can make the most impact—to say nothing of where they go when the friction becomes too great. Meanwhile, managers who do just boss people around leave their direct reports feeling helpless and blindsided as a matter of daily course, as those reports lack a sense of the bigger cohesive picture and their role in influencing it.

Developing this new lens and upgrading your toolset at a point when you’re facing high stress and high stakes is exceedingly difficult. It can be hard, in the flow and the mess, or worse, when it feels like nothing is working and the pressure is weighing on your chest, to stop to ask yourself and your collaborators why things feel so off, or whether you’re all solving the right problem. Yet this sort of virtuous non-complementary behavior may be the best way to ensure integrity in the systems you’re all maintaining.

Categories
Technical Management

Exploring the Human Implications of Conway’s Law

Conway’s Law states that:

“organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.”

In other words, the communication patterns in your team are duplicated in your software. I recently had a chat in which my counterpart made the point that if we take Conway's Law to be true, it stands to reason that the mental state of the individuals doing the communicating has a great effect on your software.

When dealing with networks, we most often focus on the edges between the nodes. Without computers talking to each other, there is no network, so it makes sense to focus on connection and uptime. Yet when dealing with networks of humans, I believe Conway’s Law calls on us to start by examining the state of the nodes themselves.

In contrast to inanimate interactive entities, like servers, human nodes are neither uniform nor neutral. Much the way DNA contains a set of plans for how an organism's development unfolds, humans are carriers of their own design patterns. The way these design patterns are expressed in an organization (aka within the network) will vary based on things like their past lived experiences and identities, whether or not they perceive that their contributions are respected, or whether they've gotten enough sleep or calories, just to name a few. An individual human node will respond to stimulus differently based on the state of these factors, shifting how they communicate on a given day. Conway's Law postulates that this is reflected in the software system they're building. Then, there's a concurrent process running where they interact with other humans in your organization, each with design patterns of their own. This dance of interrelation produces the state of your software.

While you're unlikely to preemptively control for every possible aspect of the communication between the humans designing your system, it stands to reason that optimizing for the well-being of the individuals doing the work can be a sort of resilience engineering. Things like proper compensation, respect for boundaries, a blameless culture, and clear opportunities for advancement create the circumstances most likely to engender an open, well-regulated, constructive mental state in your individuals. If Conway's Law is right, maintenance on the state of the human nodes in a network paves the way for more constructive communication patterns, and better software.

Categories
Uncategorized

Will voting functionality on Facebook solve anything?

Word came out earlier this week that Facebook is running an experiment, giving a small number of users in New Zealand upvote/downvote buttons on comments. I’m wondering what Facebook is looking to learn.

Upvotes and downvotes have been around since forever on gamified platforms like Reddit and Stack Overflow. Voting introduces a sense of right or wrong in a community. It quantifies the value of your participation, turning your popularity, or lack thereof, into something measurable. It’s opinionated. Fittingly, voting is a functionality which took root in technical, programming-focused, and gaming-adjacent communities.

Facebook has a whole different premise than gamified discussion sites do. They built their social network around sharing and staying in contact with loved ones. They made it frictionless to share anything about yourself, in the hopes that you would share everything about yourself. Facebook makes money by using the data they have about you to show you extremely tailored ads. What you see in your Facebook timeline is algorithmically generated and optimized for content which you are most likely to react to, fueling engagement. Facebook has, of course, been under scrutiny since news of the data leak to Cambridge Analytica and investigations into how the site has been used to organize local violence in Sri Lanka.

So on the one hand, you have platforms which are about getting people to post and quantify the value of each other's words (Reddit, Stack Overflow), and on the other, a social network which aims to make you observable and reactive (Facebook). And voting, a core functionality from one, is essentially being air-dropped onto the other. Where could that be helpful? Where could that be harmful?

There's been a lot of talk about fake news lately. I tend to think the definition of "fake news" is far more slippery than most of us care to believe, but that's a post for another time. Point is, there's concern that there's no way to discredit something posted in bad faith on Facebook, and in the form of voting, there's functionality which allows everyday users to do just that. I get why adding opinionated functionality might seem like the right counter-measure.

What I wonder is, will the mob mentality which tends to form up during user voting ultimately help or harm the nature of the interactions taking place? When asked, a company spokesperson said "People have told us they would like to see better public discussions on Facebook, and want spaces where people with different opinions can have more constructive dialogue…Our hope is that this feature will make it easier for us to create such spaces, by ranking the comments that readers believe deserve to rank highest, rather than the comments that get the strongest emotional reaction." The idea that folks will believe the posts which should be ranked highest aren't also the ones which elicit the strongest emotional reaction runs counter to everything I've ever known about humans and keyboards.
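The distance between those two ranking philosophies is small in code and enormous in effect. Here's a sketch in Python, with made-up field names, of the swap the spokesperson is describing:

# Hypothetical comment records; the field names are invented for illustration.
comments = [
    {"text": "measured take", "upvotes": 40, "downvotes": 5, "reactions": 90},
    {"text": "outrage bait", "upvotes": 10, "downvotes": 30, "reactions": 400},
]

def engagement_rank(comments):
    # Rewards whatever provokes the strongest response, positive or not.
    return sorted(comments, key=lambda c: c["reactions"], reverse=True)

def vote_rank(comments):
    # Rewards what readers judged worth elevating.
    return sorted(comments, key=lambda c: c["upvotes"] - c["downvotes"], reverse=True)

Whether readers' votes can actually escape the pull of emotional reaction is the open question.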

Furthermore, when a comment gets heavily downvoted…what happens? Will there be someone on the other side, at Facebook, able to step in? I'm guessing not. Will anything happen automatically when a comment is heavily downvoted? Unclear. Is this all about making sure we just keep clicking things? Maybe.

All in all, this sounds like an interesting experiment and I’m glad to see Facebook do something. I really hope they speak publicly about their findings. But I also see how this feature could cause users to double down on their existing disagreements, grudges, and gripes.

Finally, copping to the fact this is an experiment, this measure strikes me as misguided because voting sets users up to reach for a goal which Facebook has not defined. Whether you’re trying to drop the best meme, or the most articulately explained physics equation, seeking votes means you’re aspiring to be something valuable in the eyes of the group. What is it people are reaching for when they seek upvotes on their Facebook comments? What kind of discussion space is Facebook looking to create? How do the answers to the previous questions vary based on the demographics and context of the folks doing the posting? Without answering these, Facebook is simply bolting another tunnel onto the multi-level hamster mansion, and hoping the novelty of its presence gets the critters to stop fighting.

Categories
Uncategorized

Commit History

This is primarily an exercise in record keeping. Two years ago I wrote a post about burnout and went silent. Here are some things which have transpired since:

I found work which fits the parameters outlined here.

I published Contributions to put myself on the hook for figuring out more purposeful work, aiming at either of two broad categories: #1 enabling networks of people to help each other or #2 helping the technology industry be a better version of itself. I’ve now clocked nearly two years as a Community Manager at Stack Overflow. Both sets of parameters are being met surprisingly well.

Better attention management.

Our attention spans don’t scale to the size of the internet. In March of 2014 I took the month off Twitter, which gave me a chance to examine how it was re-patterning my brain. The silence was delicious, and the hiatus equipped me to make clearer decisions about where I spend my cognitive resources. My relationship to social media hasn’t been the same since.

The start of a thesis.

In May of 2015 I presented at CMX Summit East on the five traits of enduring communities. This was the first time I'd done any speaking since taking time away, and it was rewarding to lay down a cohesive set of ideas which were the product of several years of work. I also couldn't have asked for a better place to do it. (To David, Carrie, and Yrja: thank you!)

These are subtle but significant changes which come from focused effort. I look forward to continuing to put in the work.

Categories
Uncategorized

Reset

About three weeks ago, I said goodbye to my team. I've been in need of a break, and after I spent a year and a half working on technical infrastructure for the dev community, the company is increasing its focus on enterprise products. It was a good time for me to step back.

Since leaving, I’ve been able to think more about what the last few years of my working life have looked like. In the span of the last three years I’ve helped two companies move from early-stage to mid-stage. I’ve handled user shitstorms, overseen dozens of launches, pulled all nighters, been through the fundraising process twice, managed people, turned customers into close friends, and kept countless moments of high emotion from tearing people apart. Along the way I developed a knack for breaking down silos between technical and non-technical teams. I love watching organizations grow and flourish. I’ve been humbled at what it is to be a manager, and to serve those who have entrusted some part of their career to me. This work is a privilege.

I’ll also be the first to say it. I’ve burnt myself out, hard.

I drove myself to unhealthy levels of exertion too hard for too long in pursuit of the next milestone.

I was over-investing myself in the organizations I was part of. I was making my work the one single point of failure for my ego.

After a while, my friends never saw me anymore, I forgot why I ever liked the things I liked, and crossing off virtually any 'to do' list item took an excruciating amount of effort.

Why did this finally dawn on me? Historically, one of my biggest assets is my ability to think deeply about a messy problem, and formulate a single response that makes things click into place. I’m also good at simulating other people’s experiences and modeling interactions. It’s part of what made me a good community manager. After driving myself too hard for too long, my ability to do those things dulled.

I also realized I wasn’t fully listening to the things people would say to me. I caught myself categorizing what kind of conversation I thought I was entering ahead of time, associating a few normal behaviors with that type of interaction, and going on autopilot. Upon realizing this, I responded by pushing myself harder in an attempt to recoup lost productivity. Soon the combined momentum and pressure were taking on a life of their own. I was doing the next thing that presented itself because it seemed easiest at the time, and I was too harried to synthesize what the pieces added up to. I was struggling to connect with people.


You own your tools

If I stop, and think (which I haven’t done in years), I know damn well there’s a problem with all that.

The always-on, drive yourself into the ground, race to the top of Hacker News, pwn Demo Day approach is a specific tactic, and not a long term strategy. It needs to be implemented at certain critical points in a company’s life-cycle, assuming you’re trying to build a venture capital-scale business.

This tactic needs to be employed when:

1) you're first starting out, and doing or not doing certain things (making a product, getting money in the bank) determines whether or not you have a company at all.

2) your entire company is pushing to meet a discrete and time-sensitive deadline, the outcome of which has been deemed similarly pivotal to the very earliest stages of your business (in the style of #1).

That's it. Those are the only times when the approach to work I was taking is appropriate. (If you're running a lifestyle business, you're playing a different game altogether, and more power to you.) And if you've convinced yourself that you're always working in one of the two sets of parameters I described, thereby creating constant life-or-death urgency, you're doing it wrong.

There are extenuating circumstances, but if you've made your working life into one long string of extenuating circumstances, you're designing for diminishing returns and eventual failure. You can either become amazing at self-regulation and prioritization overnight, or pull the plug, make a full stop, and engage in some serious self-assessment. Personally, the former wasn't working for me, so I eventually chose the latter.


Well, what now?

It would be lovely if I could explain to you all the things I've figured out and the wisdom I've found. The reality is I'm still pretty turned around, and clarity takes a while to cultivate. Fortunately, I'm already feeling more like myself than I have in a long time. After just three weeks, the haze is clearing. I'm also resisting the urge to dive back into a full-time job at the first sign of relief. I can't do my best work if I'm not a whole person, and I think I owe my future teammates that much.

My life's work is too important to go through it mindlessly. That's what I started doing, and I don't want to keep compromising. I view work as a calling, and jobs as individual vehicles to help materialize some piece of the puzzle. I've proven that I'm incapable of doing anything half-heartedly. That intensity lets me rally myself and those around me, but its dark side can get seriously bleak.

I need to learn better self-management, so I’m going to keep taking time. I’m going to keep spending long aimless afternoons in the park, reminding myself of what makes me smile, and letting the bits of understanding filter in, packet by packet. I’ll keep writing, I’ll keep talking, I’ll keep renewing friendships, and I’ll keep thinking about how to do work that pays respect to either of the problem sets I outlined here.

I’ll keep you posted.
