After the Flood: Finding the Path to Healthy Communications Technology

The cost of information transmission has collapsed. The profit opportunity this presented has centralized human interaction onto platforms of truly enormous scale. Centralization lets these platforms monetize the clicks and attention of billions of users to the tune of billions of dollars. We are only beginning to see the effects of this on human brains.

So YouTube […] set a company-wide objective to reach one billion hours of viewing a day, and rewrote its recommendation engine to maximize for that goal.

[…]

Three days after Donald Trump was elected, Wojcicki convened her entire staff for their weekly meeting. One employee fretted aloud about the site’s election-related videos that were watched the most. They were dominated by publishers like Breitbart News and Infowars, which were known for their outrage and provocation.

YouTube Executives Ignored Warnings, Letting Toxic Videos Run Rampant (via Bloomberg)

This raises inevitable concerns about how to handle information responsibly when it moves at 21st-century speeds. If maximizing reach and reducing friction yield profit while compromising individual and societal health, then when is reach a liability, and who bears the costs of its downsides? At what scale does reach become unsafe?

Developing a framework for safety under 21st century communication paradigms opens up several questions.

Throughput is the rate of data transfer through an information system. Is it possible to devise a means of quantifying the throughput of human culture, both today and historically? If so, what change over time would we observe?

If we could quantify the throughput filtered collectively through human brains, have we already exceeded a safe level, and how could we tell? Can a heuristic be developed for maximum safe reach?
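One way to start is a back-of-envelope estimate. The sketch below compares rough per-person daily information throughput across several eras of media; every figure in it is an illustrative assumption, not a measurement, and the point is the shape of the growth curve rather than any particular number.

    # Back-of-envelope sketch of per-person information throughput by era.
    # Every figure here is an illustrative assumption, not a measurement.

    ERAS = {
        # era: (items consumed per day, rough bits per item)
        "oral storytelling": (1, 50_000),        # one evening's story
        "daily newspaper":   (1, 2_000_000),     # tens of thousands of words
        "broadcast TV":      (4, 500_000_000),   # a few hours of video
        "social feed":       (300, 10_000_000),  # hundreds of posts and clips
    }

    def daily_throughput_bits(items_per_day: float, bits_per_item: float) -> float:
        """Crude per-person throughput estimate for a single medium."""
        return items_per_day * bits_per_item

    for era, (items, bits) in ERAS.items():
        print(f"{era:>18}: ~{daily_throughput_bits(items, bits):.1e} bits/person/day")

Even with generous error bars on every assumption, an estimate of this kind would show orders-of-magnitude growth within a few generations, which is what makes the safety question urgent.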

Physical information systems have limits to what they can process; when those limits are exceeded, items are dropped or queued. A webserver can handle only so many requests per second, and when that threshold is maliciously exceeded, we call it a denial-of-service attack. If we too are physical beings with physical limitations, why should we be exempt from the constraints seen in other information systems?
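To make the analogy concrete, here is a minimal sketch of a token-bucket rate limiter, one common way a server enforces its requests-per-second threshold; the rates chosen are arbitrary. The human analogue would be any mechanism that drops or defers incoming items past a chosen rate instead of attempting to process everything.

    import time

    class TokenBucket:
        """Minimal token-bucket rate limiter with illustrative parameters.

        Requests arriving faster than the sustained rate are rejected
        rather than processed, which is how servers survive load they
        cannot absorb.
        """

        def __init__(self, rate_per_sec: float, burst: int):
            self.rate = rate_per_sec      # tokens replenished per second
            self.capacity = burst         # maximum burst size
            self.tokens = float(burst)
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            # Replenish tokens for elapsed time, capped at capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False  # over threshold: drop the request

    limiter = TokenBucket(rate_per_sec=100, burst=20)
    accepted = sum(limiter.allow() for _ in range(1000))
    print(f"accepted {accepted} of 1000 instantaneous requests")

Nothing about the mechanism is specific to machines: it is simply an admission that capacity is finite and that exceeding it should be handled deliberately rather than catastrophically.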

Computer networks are deliberately partitioned for safety and security. Should human networks be deliberately partitioned for the safety of our brains? Are there lessons we can learn from the domain of network administration?
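At least one lesson translates directly. As a sketch (the six-node topology below is made up for illustration), partitioning bounds how far any single piece of content can propagate:

    from collections import deque

    def reachable(adjacency: dict[str, set[str]], source: str) -> set[str]:
        """Breadth-first search: every node a message from `source` can reach."""
        seen, queue = {source}, deque([source])
        while queue:
            node = queue.popleft()
            for neighbor in adjacency[node] - seen:
                seen.add(neighbor)
                queue.append(neighbor)
        return seen

    nodes = ["a", "b", "c", "d", "e", "f"]

    # Fully connected: every node links to every other node.
    flat = {n: set(nodes) - {n} for n in nodes}

    # The same nodes split into two segments with no links between them.
    partitioned = {n: {m for m in flat[n] if (n < "d") == (m < "d")}
                   for n in nodes}

    print(len(reachable(flat, "a")))         # 6: everything is exposed
    print(len(reachable(partitioned, "a")))  # 3: spread stops at the boundary

A firewall between subnets does exactly this for packets; the open question is whether analogous boundaries in social platforms would limit harm more than they limit legitimate exchange.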

Does decentralization inherently create safety? Are there specific design patterns that should be adopted or avoided to prevent replicating the problems seen in current social products?

Mark Zuckerberg, trying to get ahead of the inevitable, recently put out an article proposing a regulatory framework for entities like Facebook. In the US, earlier regulation of media companies sought to limit consolidation, so that no monopoly could hold overwhelming sway over public opinion. The originators of that regulation could never have dreamed of the reach of a platform like Facebook, which today is largely uncontested. Are there historical precedents for regulating media companies that apply to 21st-century problems, even though those problems have a very different shape?

None of these questions has a clear, immediate, or straightforward answer. Still, as we grapple with the impacts of centralized communication platforms, these questions represent only the beginning of the hard problems we must confront to ensure we build communications technology responsibly.