From Point A to Chaos: The Inversion of Information Economics

If it’s really a revolution, it doesn’t take us from point A to point B, it takes us from point A to chaos.

Clay Shirky, 2005

In 2019, we reel from a series of improbable events. Whether it’s the 2016 US election, Brexit, the resurgence of flat-earth theory, or declining vaccination rates, outcomes we once saw as unthinkable have arrived in force. A culture war blossoms around developments that some see as progress and others find threatening and absurd. This coincides with centralized communication platforms that reward compulsive engagement, indiscriminately amplifying the reach of compelling messages—without regard for accuracy or impact.

Historically, the composition and distribution of information took significant effort on the part of the message’s sender. Today, the collapse in cost of moving a message around has shifted the burden of communication from the sender onto the receiver.

Interconnectedness via communications technology is changing social norms before our eyes, at a frame rate we can’t adjust to. If we want to understand why we’ve gone from point A to chaos, we need to start by examining what happens when the cost of transferring information falls through the floor.

A brief history of one-to-one communication

Another moment when the communications technology of the day influenced the direction of history came in 1788, during the ratification of the Constitution. The former British colonies were voting on whether to adopt the hotly contested new form of government, and news of how each state fell would influence the decisions of those who had yet to cast their votes. Knowing this, Alexander Hamilton assured James Madison that he would pay the cost of fast riders to move letters between New York and Virginia, should either state ratify the Constitution.

In this world, it was possible for an event to take place in one location without people elsewhere knowing about it for days or weeks. For a single piece of information to be worth the cost of transit, nothing less than the future of the new republic had to be at stake.

Moving information from one place to another required paper, ink, wax, a rider, and a horse. Latency was measured in days. As time went on, and the infrastructure of the United States matured, a postal system emerged to provide convenient, affordable courier service between the former colonies. Latency was still measured in days, but now it was possible to batch efforts and share labor costs with other citizens.

Around 1840, messages grew faster, if not cheaper, with the advent of the electric telegraph. The telegraph allowed near-instantaneous transmission of information between cities and even continents. However, the sender of a message was charged by the word, and an operator was required at each end to transcribe, transmit, receive and deliver it.

With the telephone, in 1876, it became possible to hold an object to your ear and hear a human voice transmitted in real time. The telephone required an operator to initiate the circuit needed for each conversation, but once connected, the back-and-forth could unfold without intermediaries. This dramatic acceleration, from letters carried on horseback to the telephone, took place in under a century. By the early 20th century, phone switching was automated, further reducing the cost of information exchange.

By the 2000’s, mobile phones and the internet enabled email and texting, putting instantaneous communication with anyone in the world within reach. At this stage, no additional labor is needed beyond the sender’s composition of the message and the receiver’s consideration of its contents. Automation handles the encoding, transmission, relaying, delivery and storage of the whole thing. The time between a message being written and a message being received has been reduced to mere seconds.

A brief history of one-to-many communication

While Alexander Hamilton wrote copious personal letters, he also leveraged the mass communications medium of his day, the press, to shape political dialogue. He wrote 51 of the 85 essays that make up the Federalist Papers. By 1787, publishing had already benefited from more than three centuries of the printing press. The production of the written word was no longer rate-limited by the capacity of scribes or clergy, or restricted by the ideologies of the institutions they belonged to. Those who were educated enough to write, and connected enough to publish, could do so.

Yet the labor of typesetting remained arduous, and presses remained enormous, expensive machines, the exclusive domain of newspapers and book publishers. Once produced, the written word then had to be delivered, passing through a network of intermediaries before it reached the reader. This pattern persisted until very recently.

But other media emerged to carry ideas to the masses, and their delivery methods were less cumbersome. By the 1930’s, radio was a powerful conduit for culture and news, carrying both current events and unique entertainment designed for the specific constraints of an audio-only format. Radio could carry over vast distances—even entire continents. And it did so at the speed of light.

Still, radio was costly. It required specialist engineers to operate and maintain the expensive, dangerous equipment needed to transmit its payload, and more specialists still to select and play the content people wanted to hear.

Television emerged using similar technology, with even greater overhead. In addition to all the work needed for transmission, television also required the specialists and elaborate equipment needed to capture a moving image.

Radio and television both operated over electromagnetic spectrum, which is prone to interference if not carefully managed. By necessity, spectrum was regulated; regulation created scarcity, and scarcity made the owners of broadcast companies powerful gatekeepers of truth and legitimacy.

So between print, radio and television, a handful of corporations determined what was true, what would be shared with the masses, and who could be trusted to be part of the process.

Force multipliers in communication

Eventually, these technologies began to cannibalize and build off of one another, and the already declining cost of information transfer began falling at an exponential rather than a linear rate.

For example, by the late 1800’s, typewriters allowed faster composition of the written word and produced cleaner, more legible text for the recipient. By the 1970’s, the electronic word processor used integrated circuits and electromechanical components to offer a digital display, allowing for on-the-fly editing of text before it was printed.

Then, the 1980’s saw the rise of the personal computer, which absorbed the word processor, a single-purpose device, into just another software application. The written word was now a stream of instantly visible digital text, making the storage and transmission of thoughts and ideas easier than ever.

Alongside the PC, the emergence of packet-switched networks opened the door to fully automated computer-to-computer communication. This formed the backbone of the early internet and of services ranging from chat to newsgroups to the web.

The arrival of the open source software revolution around the year 2000 enabled unprecedented productivity for software teams. By making the building blocks of web applications free and modifiable for anyone, it let small teams move quickly from concept to execution without sinking time into the basic infrastructure common to any site. For example, in 2004, Facebook was built in a dorm room using Linux as the server operating system, Apache as its web server, MySQL for its database, and PHP as its server-side programming language. With this, the era of centralized, corporate-controlled, modern social software was born.
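To make the labor savings concrete, here is a minimal sketch of the kind of dynamic page that stack made nearly free to build. The database credentials and the posts table are hypothetical placeholders, not anything from Facebook’s actual codebase; the point is how little code sits on top of the free layers beneath it.

```php
<?php
// A minimal LAMP-style page: Apache invokes this script, PHP queries
// MySQL, and the result is rendered as HTML. The host, credentials,
// and `posts` table are hypothetical placeholders.
$db = new mysqli('localhost', 'app_user', 'secret', 'app_db');
if ($db->connect_error) {
    die('Database connection failed: ' . $db->connect_error);
}

// Fetch the ten most recent posts.
$result = $db->query(
    'SELECT author, body FROM posts ORDER BY created_at DESC LIMIT 10'
);

echo '<ul>';
while ($row = $result->fetch_assoc()) {
    // Escape user-generated content before rendering it as HTML.
    printf(
        '<li><strong>%s</strong>: %s</li>',
        htmlspecialchars($row['author']),
        htmlspecialchars($row['body'])
    );
}
echo '</ul>';

$db->close();
```

Everything beneath those few lines (the operating system, the web server, the database, the language runtime) was free to download, modify and run.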

The pattern seen in the evolution from printing press to home PC was repeated, and supercharged, by the smartphone. By 2010, smartphones paired the ability to record audio and video with a constant internet connection. Thanks to the combination of smartphones and social software, everyday consumers gained the ability to capture, edit and distribute media with the same global reach as CNN circa 1990. This had meaningful impact during the protests against police violence in Ferguson, Missouri, in 2014. Local residents and citizen journalists streamed real-time, uncut video of events as they unfolded—without having to consult any television executives.

In the end, this is a story of labor savings. Today, compounding automation and non-scarce resources like open source code have collapsed the amount of human labor needed to reach mass audiences. An individual can compose and transmit content to an audience of millions in an instant.

This leverage for communication does not have a historical precedent.

Dissolving norms

As the cost of information transfer falls exponentially, roles and tasks that were once well-defined have become vertiginously fluid.

The lines have blurred between consumers and producers of information. Many users of modern social software shift fluidly between the two roles. A broadcast-style message may draw a public response from a passerby, catalyzing a conversation between them, which a third party may later resurface and broadcast anew. This leads to unexpected outcomes, like viral content that seems to explode out of nowhere. People may face lasting consequences for things they say online. Anything interesting is up for grabs, and with editing the internet now a collaborative process, the question of what counts as interesting has billions of judges able to weigh in.

The lines have also blurred between public and private communication, as we grow increasingly accustomed to mumbling into a megaphone. In the past, even the act of writing a letter to a single individual involved significant cost and planning. Today, the effort required to write a letter and to publish an essay seen by millions is functionally identical—and basically free.

Meanwhile, gatekeeping has collapsed. In a world of televisions and printing presses, those who controlled the exclusive, expensive organs of media picked and chose what the audience saw. Today, the primary gatekeeper is the algorithm. It cares only for what keeps its audience coming back to produce more content. And it never has to sleep.

Uncharted waters

By 2019, we face an inversion of the economics of information.

Communications technology has become so effective at eliminating the friction of publishing that it’s possible to spin up armies of spambots. The time between considering what we want to say and getting to say it has shrunk to minutes or seconds, and the messages we send are increasingly frequent and bite-sized, composed on the fly. As a result, we face the cognitive equivalent of a distributed denial-of-service attack: an endless torrent of “news,” opinion, analysis and comment.

Why are we surprised by the enormous consequences of this enormous power? In large part, it’s because we simply have no models to help us understand and predict the consequences of communication technology with this scale and reach. We are no longer just porting human interaction onto digital scaffolding; we are using computation to create new forms of interaction altogether. It has never worked this way before.

We’ve sailed off the side of the map, into waters never charted in human memory. Now, our task is one of relearning to navigate, even though our instruments are broken.