Have faith no more

The buck now stops with nobody in particular, writes Rachel Botsman

I placed my trust in others just as trust in the wider world collapsed. While I was being lifted aloft on a chair, as is traditional for a Jewish bride, Lehman Brothers was collapsing into bankruptcy. It was Sunday 14 September 2008. Nobody at my wedding quite knew then the scale of the pandemonium that would play out when the markets opened the following morning. But as they drank the night away and learned of the unfolding crisis on their smartphones, most of them justifiably feared the worst. As guests danced around us shouting “Oy! Oy! Oy!”, outside the venue the global financial crisis was just getting started.

A decade on, the world has not only learned to trust again, it has done so on a grand scale unimaginable in the febrile days of the late noughties. Trust has become the global currency – everything, from vintage goods to home-improvement loans, is traded on trust. Conventional systems remain, but the trend is all one-way. When I wrote my first book in 2011, I expressed surprise that a little disruptor called Airbnb held 10,000 properties on its site. My gasp of amazement feels faintly embarrassing now. By the end of 2017, Airbnb boasted three million listings in 191 countries. It is – by far – the biggest hospitality company in the world.

The star rating fallacy

But who is regulating the giants of the trust economy? eBay is a peer-to-peer trading platform that is self-regulated by its buyers and sellers via feedback and reviews. Yet only 2% of its feedback is negative or neutral. Buyers who don’t like what they get from eBay tend to leave no feedback at all, either fearing tit-for-tat reprisals (the seller can give them bad feedback in retaliation) or declining to comment out of an antiquated sense of politeness which holds that if you have nothing nice to say, it is better to say nothing at all. But without negatives, positives are near worthless. Grade inflation on eBay, TripAdvisor, Airbnb and elsewhere is undermining the system. The stars are hopelessly misaligned.
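A quick, hypothetical illustration of why inflated stars mislead: when nearly all feedback is positive, raw percentages bunch at the top and cannot separate a long, solid track record from a handful of lucky sales. One standard corrective (not something eBay or the other platforms named here are known to use; the sketch below is mine alone) is to rank by the lower bound of a confidence interval on the positive-feedback rate, rather than by the raw score:

```python
import math

def wilson_lower_bound(positive: int, total: int, z: float = 1.96) -> float:
    """Lower bound of the 95% Wilson confidence interval for the true
    positive-feedback rate; small samples are penalised instead of
    being taken at face value."""
    if total == 0:
        return 0.0
    phat = positive / total
    denom = 1 + z * z / total
    centre = phat + z * z / (2 * total)
    margin = z * math.sqrt((phat * (1 - phat) + z * z / (4 * total)) / total)
    return (centre - margin) / denom

# Raw scores can barely tell a proven seller from a lucky one...
print(9800 / 10000, 10 / 10)                       # 0.98 vs 1.0
# ...whereas the Wilson bound weighs the evidence behind each score.
print(round(wilson_lower_bound(9800, 10000), 3))   # ~0.977
print(round(wilson_lower_bound(10, 10), 3))        # ~0.722
```

Under this ranking, a seller with 9,800 positives out of 10,000 outranks one with a perfect ten out of ten, because the bound rewards evidence rather than enthusiasm, though no amount of clever ranking can restore the signal that missing negative reviews never provide.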

Who is accountable?

If users are not taking responsibility, who is? Whose job is it to act? The buck certainly doesn’t appear to stop with the companies, which have a mixed record at best. When the regulator Transport for London declined to renew Uber’s licence to operate in the city, there was uproar from some, before the city’s mayor, Sadiq Khan, made the point that it could not be right for a technology platform to be excused from meeting local regulations simply because it had lots of customers.

Issues of accountability are incredibly complex in an age when platforms offer branded services without owning any assets or employing the providers. Tom Goodwin, a senior vice president at Havas Media, put it well when he wrote: “Uber, the world’s largest taxi company, owns no vehicles. Facebook, the world’s most popular media owner, creates no content. Alibaba, the most valuable retailer, has no inventory. And Airbnb, the world’s largest accommodation provider, owns no real estate.” When things go wrong in this brave new world, they raise the critical question of where accountability lies, and of how we strike the right balance between self-regulation and a tradition of top-down compliance that was not designed for an age of ‘distributed trust’.

The old-fashioned way

In January 2013, UK retail giant Tesco had to withdraw 10 million hamburgers and other products from its shelves. The reason? Many of its products, purporting to be made of beef, had been exposed as containing horsemeat. The discovery sparked a national outcry and was headline news for several days. Even the then prime minister, David Cameron, got involved, reassuring a horse-loving country that everything possible would be done to address a “very shocking crime”. To some degree Tesco deflected, saying that its supplier, the Silvercrest factory, then owned by the ABP Food Group, had breached its trust. Yet it also issued a full apology and publicly accepted responsibility for the fiasco. When a customer shops at Tesco, their trust clearly lies with the supermarket, not with the (largely hidden) companies further down the supply chain. The same principle fails to apply to technology platforms, from Facebook to Amazon, Alibaba to Uber, where the question of who we trust still lacks a clear answer.

Who do we trust?

Just as traditional engineers build bridges, roads and railways so reliable that we don’t consider using them a risk, contemporary trust engineers build computer systems so ostensibly foolproof that we stop noticing the risks we are taking. Tinder, the dating app, is a platform designed to match singletons for nights out or something more; it arranges hook-ups on the strength of a handful of words and a catalogue of photos. Where once at least some face-to-face contact was required, now a face on a screen is enough. “I used to think it was completely ridiculous to compare trust between people with trust in these online platforms,” Professor Coye Cheshire, a social psychologist at Berkeley, tells me. “I’m not certain that is the case anymore, because in some ways we have offloaded some of our cognitive power.”

The challenge of trust in the digital age is the speed at which we grant our trust with a swipe, click, share or accept. Once we are in an accelerated state of trust, it can be hard to slow down. “The problem comes down to social translucence,” says Cheshire. “How much of the social interaction – our behaviours and the underlying mechanisms that enable interactions – is visible?”

The trust illusion

In the trust game there are two participants, a sender and a receiver, who are anonymous strangers. Both are given $100. The sender can send any amount of the money, or none of it, to the receiver, and is told that whatever they send will be trebled – by the experimenter – on its way to the receiver. The receiver then decides how much of his money to return to the sender: any amount of it, or none at all. The sender, in other words, can turn a profit or lose everything. The point of the game is that sending a large amount signals a high degree of trust that the unknown recipient will give back at least what was sent.

The late Professor J Keith Murnighan created an experiment in which he asked participants to list the names of people they trusted and those they distrusted. These names were then flashed up on a screen in front of the participants – so quickly they had no chance of reading them – before they played the game. The results were stunning. Senders who had been exposed to the names of people they trusted sent on average 50% more to the anonymous receiver than those exposed to the names of people they distrusted. “We found we could stimulate feelings of trust for a stranger without people even realizing,” wrote Murnighan. “Imagine a fanatic fan of Elvis Presley. If I know someone is a huge fan of Elvis, I might casually drop Elvis’s name to activate more trust in me. There is clearly a risk of manipulation.”
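To make the stakes concrete, here is a minimal sketch of the game’s payoff arithmetic, assuming exactly the set-up described above (both players endowed with $100, transfers trebled in transit). The function and the example amounts are mine, purely for illustration:

```python
# A minimal sketch of the trust game's payoff arithmetic, following the
# rules described above: both players start with $100, and the
# experimenter trebles whatever the sender transfers.

def trust_game(sent: float, returned: float, endowment: float = 100.0):
    """Return (sender_payoff, receiver_payoff) for one round."""
    assert 0 <= sent <= endowment
    trebled = 3 * sent                        # the experimenter's top-up
    assert 0 <= returned <= endowment + trebled
    sender = endowment - sent + returned      # profit if returned > sent
    receiver = endowment + trebled - returned
    return sender, receiver

# High trust, reciprocated: send everything; receiver returns half his pot.
print(trust_game(sent=100, returned=200))    # (200.0, 200.0) - both gain
# High trust, betrayed: the receiver keeps the lot.
print(trust_game(sent=100, returned=0))      # (0.0, 400.0) - sender loses everything
```

As the two runs show, full trust doubles the sender’s money when it is reciprocated and wipes it out when it is not, which is why the amount sent is such a clean measure of trust.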

Lab rats

For one week in 2012, social psychologist Dr Adam D I Kramer launched an experiment using the world’s biggest laboratory of human behaviour – Facebook. He wanted to measure the propensity for ‘emotional contagion’: do the emotions expressed by our Facebook friends affect our own mood? By tweaking the news feed algorithm to show more positive or more negative content, Kramer was able to assess the extent to which the posts of others made users more or less happy, and more or less likely to visit Facebook.
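Facebook has never published the mechanics of the study, so the following toy sketch is mine alone, purely to illustrate the kind of manipulation being described: suppress a share of posts of one emotional polarity, and the net mood of what a user actually sees shifts accordingly.

```python
import random

# Toy illustration only - not Facebook's code. Each post carries a
# crude sentiment label; the 'experiment' hides a share of posts of
# one polarity and reports the net mood of the feed that remains.

POSTS = [("great day with friends", +1), ("so proud of my sister", +1),
         ("new phone arrived", +1), ("stuck in traffic again", -1),
         ("feeling really low today", -1), ("lunch was fine", 0)]

def biased_feed(posts, suppress, drop_rate=0.5, seed=42):
    """Omit roughly drop_rate of the posts whose sentiment == suppress."""
    rng = random.Random(seed)
    return [p for p in posts if p[1] != suppress or rng.random() > drop_rate]

for polarity in (+1, -1):  # suppress positive content, then negative
    feed = biased_feed(POSTS, suppress=polarity)
    mood = sum(sentiment for _, sentiment in feed)
    print(f"suppressing {polarity:+d}: net mood of visible feed = {mood:+d}")
```

Emotional contagion is then simply the question of whether the mood of what users go on to post tracks the mood of the feed they were shown.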

What was extraordinary about the experiment was not so much its findings (the measured effects were, relative to Facebook’s scale, minuscule) but the reaction to it. The experiment became notorious; users complained of being treated like ‘lab rats’. Yet, even according to Facebook itself, the chance of any user of the platform being experimented upon is precisely 100%. “At any given time,” says Dan Farrell, a Facebook data scientist, “any given user will be part of ten experiments the company happens to be conducting.” When the University of Illinois surveyed Facebook users, it found that 62% of them were unaware that Facebook manipulated its news feed at all. That means that of the platform’s more than 2 billion users, more than a billion believe the system instantly and without prejudice shares whatever they or their friends post. The truth is out there, but few hear it.

The tech giants – through careful trust engineering – have manufactured an image of themselves as neutral brokers, platforms that simply act as meeting places or marketplaces between users. The shock that greeted the exposure of fake news after the US presidential election was a consequence of a human race now more likely to place its trust in a distant algorithm than in experts or authorities.

The history of new technology teaches not that we trust it too little, but that we trust it too much. Traffic lights, stop signs and road lanes – latterly airbags and seatbelts – arrived decades after the motorcar. Similar checks and balances on the trust economy will need to arrive much more quickly. No system is ever completely foolproof, but we must hope that our recent bump is just a minor collision, not a fatal accident.

— This piece is based on Chapter 4: Where Does the Buck Stop? in Rachel Botsman’s new book Who Can You Trust? (Penguin Portfolio), which is available now.