Are data decisions less dangerous?

In traditional publishing, editors sit at the very top of the food chain, not because they write more or better, but because they decide what gets published and what gets spiked. As Oscar Wilde wrote: “There is only one thing in the world worse than being talked about, and that is not being talked about.”

In a world of social media, the importance of editorial judgement has diminished. Stories and content are surfaced through the wisdom of the crowd and the power of social networks. Likes, shares, links and retweets are fed into complex algorithms to generate a constant supply of content relevant to the interests of users. Needless to say, stories that fail to gain this kind of traction can end up languishing in social media oblivion.

What’s left unseen

According to the technology firms themselves, these platforms are essentially neutral, designed solely to surface content that is more relevant and engaging for users. If your friends share something on Facebook, the firm’s closely guarded algorithm assumes you will be interested in it too. Similarly, when your search terms match a site linked to by thousands of other web pages, that signals to Google that this is the kind of content you’re looking for.
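To make that intuition concrete, here is a deliberately naive sketch of link-weighted ranking. It is not Google’s actual algorithm, which is closely guarded; the sites, numbers and scoring rule are all invented for illustration:

    # Purely illustrative: real search ranking combines hundreds of signals.
    # Here a page's score is just keyword overlap weighted by inbound links.
    pages = {
        # url: (words on the page, inbound link count) -- invented data
        "site-a.example": ({"payday", "loans", "fast"}, 12_000),
        "site-b.example": ({"payday", "loans", "advice"}, 300),
        "site-c.example": ({"gardening", "tips"}, 50_000),
    }

    def score(query, words, links):
        # More matching terms and more links pointing at the page both
        # push it up the results -- the "wisdom of the crowd" as a signal.
        return len(query & words) * links

    query = {"payday", "loans"}
    for url, (words, links) in sorted(pages.items(),
                                      key=lambda kv: score(query, *kv[1]),
                                      reverse=True):
        print(url, score(query, words, links))
    # site-a.example ranks first: the most-linked page that matches.

Even in this toy version, a human chose the signals and their weights; “neutral” ranking is still a stack of design decisions.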

However, a couple of recent examples have cracked this façade of neutrality and exposed how data can never fully replace human judgement. First, Gizmodo reported allegations by former Facebook employees that the company routinely made subjective editorial decisions about what to include in its “trending” news section, generally at the expense of “conservative” news stories.

Judgements made by a newspaper with a clear editorial policy and read by a fraction of the population have some impact, but Facebook has 1.6bn users worldwide, and its decisions could have huge consequences for both public awareness and opinion. Despite denying any systemic bias, the platform has agreed to implement a number of changes to retain the trust of the public and the U.S. Congress.

Meanwhile, Google made a more explicit editorial judgement of its own with the announcement that it would stop taking advertising from payday lenders (defined by the company as those offering loans with repayment terms of less than 60 days). As the New Yorker noted, the company has a tricky relationship with payday loans, which rank among the most popular (and therefore highest-revenue) search terms in Google AdWords. Lenders have reacted angrily, calling the move an “unprecedented abuse of power by a monopoly player.”

Data decisions

Both of these cases illustrate the fine line the biggest players in online media now tread when making decisions, given the unprecedented power they wield. However, the outrage overlooks the far more significant impact of the decisions that, as a matter of policy, are not taken by humans at all. It is easy to criticise a decision when there is a clear group of individuals to blame; it is much harder to get angry at an algorithm.

Yet algorithms are coded by humans who make a series of decisions, and technology firms tweak and adapt their code all the time. As Google’s former ‘Design Ethicist’ outlined in an article last week, technology exerts a powerful effect on our choices and attitudes, even when it is designed and optimised solely to serve the needs of its users better. The purpose of Facebook and Google is not to show you what is true, but what is useful, entertaining and engaging, and what will ultimately lead you to come back for more.

More generally, algorithms that rely on the wisdom of the crowd are prone to amplifying the biases of those crowds. On social networks such as Facebook, this often means users see content only from their friends, most of whom will share their views.
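A toy simulation makes this concrete. The camps, friend counts and 90% homophily figure below are all invented; the point is only that when sharing follows friendship and friendship follows viewpoint, the feed largely mirrors the reader’s existing views:

    import random
    random.seed(0)

    # Toy model with invented numbers: 100 users in two camps; each user's
    # friends are 90% drawn from their own camp, and friends share only
    # posts that agree with their own view.
    camps = ["A"] * 50 + ["B"] * 50
    users = range(100)

    def make_friends(u, n=10, same_camp_bias=0.9):
        same = [v for v in users if v != u and camps[v] == camps[u]]
        other = [v for v in users if camps[v] != camps[u]]
        return [random.choice(same) if random.random() < same_camp_bias
                else random.choice(other) for _ in range(n)]

    friends = {u: make_friends(u) for u in users}

    # A user's feed is whatever their friends share, so its political
    # make-up simply mirrors the make-up of their friend list.
    cross = sum(camps[f] != camps[u] for u in users for f in friends[u])
    total = sum(len(friends[u]) for u in users)
    print(f"{cross / total:.0%} of feed items challenge the reader's view")
    # Prints roughly 10%: the feed reflects the bias of the network.

Engagement-based ranking layered on top of such a network can only reinforce the skew, since the most-liked posts within a camp are, by construction, the ones that camp already agrees with.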

This filter bubble effect was brilliantly illustrated in a recent WSJ interactive showing the differences between what a liberal and a conservative might see in their Facebook news feeds. Even if Facebook has its thumb on the scale, the trending news section still provides a more varied diet of opinions than most users would normally see in their own feeds. Meanwhile, if Facebook’s “I voted” megaphone has a genuine influence on election turnout, as is claimed, the younger skew of its user base means the platform could even have a decisive impact on the outcome of the EU referendum.

Similarly, Google’s scale means it must confront these tensions. The large AdWords budgets of payday lenders will now likely be diverted to other marketing activities such as SEO. But if Google thinks payday loans are so unethical, shouldn’t it divert users to other sites, or even ban lenders from its search results altogether?

On the other hand, payday loans are a legal and legitimate financial product, for which there is significant consumer demand. And the more Google intervenes in its algorithms, the less confidence users will have that it is providing them with the right results.

Brands, politicians, publishers and others may feel they are at the mercy of the caprices of tech giants who can switch off the tap at any point. But when decisions are made by data alone, they are often even more unaccountable.

Josh Glendinning

Hill & Knowlton Strategies Search