
Paying attention to algorithms

The algorithms used by the big Internet companies exert increasing power over our lives. They direct what we see on social media and influence us in all kinds of ways, from the products we choose to buy to whom we vote for in political elections. How are we impacted by this development? And by extension, how do these algorithms come to affect our democratic system and our freedom of expression? Brendan de Caires, Executive Director of PEN Canada, writes about how these hidden powers may come to damage society and lead to the spread of fake news, filter bubbles, and hate speech.

Text: Brendan de Caires, March 20, 2020

In 1971 Carnegie-Mellon University held a discussion on “Computers, Communications, and the Public Interest.” Its moderator observed that “If anything characterizes the current age, it is the complex problems of our technological civilization and the unpleasant physical and mental trauma they induce.” Herbert Simon, a professor of Computer Science and Psychology, spoke about the relatively new concept of “information overload,”[1] pointing out that “an information-rich world … means a dearth of something else: a scarcity of whatever it is that information consumes.” Information, he continued, “consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently …”[2]

Simon anticipated the disruptions of social media with considerable prescience. He warned that future technologies would have to decide how to manage the “overrich environment of information” and, more specifically, what they would “allow to be withheld from the attention of other parts of the system.” He wondered how future citizens “at the apexes of decision systems [will] receive an appropriately filtered range of considerations bearing on the decisions they have to make?”

A decade later, personal computers were commonplace; soon afterwards they became nodes on networks that kept getting faster and larger. By the turn of the century, desktops, laptops and even smartphones were flooding data into the vast collection of activities that we call the Internet. The immensity of that flow is hard to imagine. In 2016 former Wired editor Kevin Kelly wrote that “Twenty years after its birth, the immense scope of the web is hard to fathom. The total number of web pages, including those that are dynamically created upon request, exceeds 60 trillion. That’s almost 10,000 pages per person alive. And this entire cornucopia has been created in less than 8,000 days.”

By mid-2019, 30,000 hours of footage were uploaded to YouTube every hour.[3] As this sentence is typed, Internet Live Stats estimates that we create and transmit 9,000 tweets, 950 Instagram and 1,608 Tumblr posts, 3 million emails and 78,000 Google queries every second. Our collective online traffic is close to 80 terabytes of data per second.

The quiet ministrations of machine intelligence keep this superabundance at bay. Imperceptibly, Artificial Intelligence (AI) turns oceans of data into manageable streams. It fashions bespoke Google searches, customizes Facebook and Twitter feeds, and nudges us – on Amazon, Netflix, Instagram and YouTube – towards content that conforms, often eerily, with our present moods and opinions. That eeriness is no accident, for the same processes that lessen the friction in our “user experience” also monitor us, recording every habit and preference, collating individual ‘psychometric profiles’ which then dictate where we are led. Algorithm-driven ‘user engagement’ is key to any digital platform’s profits.

Harvard Professor Shoshana Zuboff calls this “surveillance capitalism.” Her definitive tome on the subject warns that when “Google discovered that we are less valuable than others’ bets on our future behavior [it] changed everything.”[4] Google’s founders devised “a corporate form that gave them absolute control in the market sphere, [and then] pursued freedom in the public sphere.” To do this, they insisted that they were operating in “unprecedented social territories that were not yet subject to law” and argued “that any attempts to intervene or constrain [their actions were] ill-conceived and stupid, that regulation is always a negative force that impedes innovation and progress, and that lawlessness is the necessary context for ‘technological innovation.’”

What happens to the public interest if such assertions and practices go unchallenged? As newspapers struggle through their transition to digital – in the US alone more than 2,000 have closed since 2004[5] – the digital networks taking their place are mostly curated by machine intelligence rather than by writers or editors. The resulting landscape, in the words of David Kaye, UN Special Rapporteur for the Promotion and Protection of the Right to Freedom of Opinion and Expression, is one that subjects us to “opaque forces with priorities that may be at odds with an enabling environment for media diversity and independent voices.”[6]

These hidden forces harm the public interest in predictable ways. The feedback loops which fuel ‘virality’ – i.e. maximize ‘user engagement’ and ‘time on device’ – are proven shortcuts to the proliferation of fake news, filter bubbles, hate speech and the constant fear and outrage which digital platforms so often produce. The shift to machine curation has meant that rather than measure information in terms of its value to the public interest, or as informed opinion – ideals set out by the philosopher Jürgen Habermas in The Structural Transformation of the Public Sphere[7] – digital platforms treat their data agnostically. In other words, they seek an increased flow of attention without pausing to review the material which produces that attention. As a result, the processes that make social media flow so well also marginalize and silence millions of voices, thousands of times a second, not through malice – by definition AI censorship is a non-human, emergent property of the network – but because the algorithms are optimized to focus our attention, more lucratively, elsewhere.
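To make the mechanism concrete, here is a deliberately simplified sketch of how an engagement-optimized ranker behaves. It is not any platform’s actual code: the names (Post, predicted_engagement, build_feed), the scoring weights, and the numbers are all hypothetical. The point it illustrates is structural: truthfulness never enters the objective, and each round of clicks feeds back into the next round of ranking.

```python
# A toy illustration of engagement-driven ranking (hypothetical, simplified).
# Real recommender systems are vastly more complex, but the objective is
# similar in kind: rank by predicted attention, not by accuracy or public value.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    outrage: float   # emotional intensity, 0..1 (hypothetical feature)
    accuracy: float  # truthfulness, 0..1 -- note: never used in the score
    clicks: int = 0  # engagement observed so far

def predicted_engagement(post: Post) -> float:
    """Score a post by expected 'time on device'.

    Emotionally charged content holds attention, and past clicks predict
    future clicks -- a feedback loop. Accuracy is simply absent from the
    objective.
    """
    return 0.7 * post.outrage + 0.3 * (post.clicks / (post.clicks + 10))

def build_feed(posts, size=3):
    """Return the top posts ranked purely by predicted engagement."""
    return sorted(posts, key=predicted_engagement, reverse=True)[:size]

posts = [
    Post("Measured report on the city budget", outrage=0.10, accuracy=0.95),
    Post("SHOCKING conspiracy EXPOSED!!!", outrage=0.90, accuracy=0.05),
    Post("Local weather forecast", outrage=0.05, accuracy=0.90),
]

# Simulate a few ranking rounds: whatever ranks first captures most clicks,
# which raises its score in the next round -- 'virality' as a feedback loop.
for _ in range(3):
    feed = build_feed(posts)
    feed[0].clicks += 100
    print([p.title for p in feed])
```

Run the loop and the conspiracy post not only tops the feed but entrenches itself there, its accumulated clicks compounding its advantage; the accurate budget report never surfaces, not because anyone suppressed it, but because the objective function has no term for it.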

Who has the power to change this? Facebook and Google, a duopoly that manages newsfeeds for more than 2 billion people, have corporate structures that allow their founders near total control of company policy. But neither has incentives to lessen their stranglehold on the attention economy, nor to tamper with surveillance-driven profits in the name of an old-fashioned ideal like the public interest. Opacity is their ally. The less we know about the digital black boxes which produce the surveillance capitalists’ vast revenues, the better for them. It’s hard to worry about what you can’t see.

A sceptic might ask: Is this new? Haven’t publishers always tried to steer us towards more sensational and profitable content? That raises a trillion-dollar question. Are digital platforms publishers? The current legal and policy framework in the US says no. It exempts digital platforms from legal liability for much of their content, and the big tech companies have worked hard to ensure that this remains so.[8] Section 230 of the 1996 US Communications Decency Act – large parts of which were ruled unconstitutional by the US Supreme Court – is the main reason why so much of the Internet ended up in what amounts to a First Amendment space.

Often referred to as “the twenty-six words that created the Internet,” CDA 230 shields internet platforms from liability for user-generated content, allowing them to host material that might otherwise entangle them in lawsuits. The Electronic Frontier Foundation says it “makes the U.S. a safe haven for websites that want to provide a platform for controversial or political speech and a legal environment favorable to free expression.” All true, but it also gives digital platforms an all-purpose shield for their laissez-faire attitude towards recommended content. Since they are now the primary conduit for online news, the line between publishing content and delivering an audience for it may have become a distinction without a difference.

A different sceptic might ask, how much of our online behaviour do algorithms really control? This can be answered more precisely.[9] Guillaume Chaslot, a former Google employee who used to work on YouTube’s recommendation algorithm, estimates that 70 percent of YouTube’s views are generated by its recommendations. In a recent Wired article he notes that this amounts to “700,000,000 hours each day”. Facebook’s news feed pushes “around 950,000,000 hours of watch time per day.”[10] Elsewhere, Chaslot notes that his concerns about Google’s algorithm began when he saw that it “was helping videos promoting political conspiracy theories—like those from right-wing radio host Alex Jones—to get millions of views.” After leaving the company, he created a nonprofit that tracks the nudges of YouTube’s algorithm in real time; he is also creating a browser extension that can warn its users: “This algorithm is made to make you binge-watch, not to recommend things that are true.”[11]

Groups like PEN have spent much of their history defending controversial speech and counter-speech, or protecting and amplifying the voices of dissidents and minorities. This is essential work in an information economy, but it is much less relevant to the censorship that takes place, invisibly, in the attention economy. Today, misinformation, hate speech and other deformations of the public sphere are carried out by proprietary code which is hardly even noticed by the public, much less reviewed or subjected to democratic oversight.

A decade ago social media platforms were widely seen as technology that would enable and strengthen democracy. The intervening years have complicated that belief. Now we know that they can also provoke xenophobia,[12] racism, and ethnic cleansing, broadcast conspiracy theories, skew elections, and exacerbate a wide range of social and political tensions. Hasn’t the time come for us to insist that companies which influence our online behaviour so profitably also take a greater measure of responsibility for business practices which cause so much harm?


[1] A phrase coined by the political scientist Bertram Gross in his 1964 book “The Managing of Organizations” and popularized by the futurist Alvin Toffler in “Future Shock” (1970).

[2] Emphases added. A transcript of the conversation is available here.

[4] Shoshana Zuboff, “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power” (London: Profile Books, 2018), p. 93.

[5] See the 2018 report “The Loss of Local News,” Center for Innovation and Sustainability in Local Media, University of North Carolina at Chapel Hill, https://www.cislm.org/wp-content/uploads/2018/10/The-Expanding-News-Desert-10_14-Web.pdf, which estimates 1,800 closures, and the update here, which raises that figure to 2,000.

[7] Jürgen Habermas, “The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society,” trans. Thomas Burger with Frederick Lawrence (Cambridge, MA: MIT Press, 1991).

[8] See the Electronic Frontier Foundation’s article on Section 230 of the 1996 US Communications Decency Act, which states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

[9] In 2013, McKinsey estimated that a quarter of Amazon’s profits and 75 percent of users’ Netflix choices were driven by algorithms.

[10] “The Toxic Potential of YouTube’s Feedback Loop,” Wired, 13 July 2019.

[11] “WIRED25: Stories of People Who Are Racing to Save Us,” Wired magazine.

[12] Karsten Müller and Carlo Schwarz, “Fanning the Flames of Hate: Social Media and Hate Crime” (November 3, 2019).
