Who Do We Trust Online?

Technology, Business
Illustration by Subin Yang.

Rachel Botsman in conversation with Nikki Stefanoff
Illustrations by Subin Yang
This is an excerpt of an interview that you can find inside Issue 2. Subscribe today or order back issues.

From our love lives to home-delivered sushi and big data, we’re spilling all our secrets online. Rachel Botsman, best-selling author, Oxford University lecturer and TED-talk genius, explains how we’re changing the way we trust in this new age of technology.


Nikki Stefanoff: You say that we’re at the start of the third biggest trust revolution in the history of humankind and have called it ‘distributed’. What does that mean for the average Joe and how will it affect us?
Rachel Botsman: Trust has evolved in three distinct chapters. The first was local: everyone knew everyone else, and trust flowed directly to friends, family and neighbours in your small community. The second was institutional: to scale, trust became intermediated through contracts, courts and corporate brands, creating an organised industrial society. The third phase, which we are just entering, I call ‘distributed trust’. It returns us to a sideways flow of trust between individuals, but in ways and on a scale never possible before. Instead of flowing upwards to key institutions, experts, authorities and regulators, trust now flows to peers, neighbours, colleagues and complete strangers.

We’ve already seen how it can open up new markets on, say, Alibaba, eBay or Amazon; transform the way value flows through cryptocurrencies; change the way information moves through society via social media platforms such as Facebook; or let us trust total strangers on Airbnb and drivers on Uber. The consequences of this shift, good and bad, cannot be overstated.

I love the story about your competent and reliable childhood nanny, who turned out to be moonlighting as a very successful drug dealer and bank robber. You say that trusting someone comes down to four things – competence, reliability, benevolence and integrity. It’s such a funny, yet terrifying, story. But now that we’re living in an age where we do business with, date, stay with and get into cars with strangers, how do we make decisions based on these four pillars?
Ah, yes, the drug-dealing nanny! The story highlights a really important point when it comes to trust. My parents thought they had enough information to make a decision about her, but in reality there was a lot they didn’t know. They faced a trust gap. The illusion of information can be far more dangerous than ignorance when it comes to deciding whom to trust.

In the digital world, the process of trusting another person, often a total stranger, is accelerated. It’s still critical online to ask whether you have enough information about the four traits of trustworthiness – competence, reliability, integrity and benevolence – before deciding to, say, hire someone from TaskRabbit or book a nanny on UrbanSitter.

One of the questions that has fascinated me is whether or not technology is helping us make smarter decisions about whom to trust. Would my parents have made the same mistake hiring Dorris today? It’s unlikely. The founder of UrbanSitter, Lynn Perkins, reassured me that Dorris’s fake references would have been identified, along with the fact that she wasn’t employed by the Salvation Army and had a criminal record. Seventy-five percent of all applicants to UrbanSitter are rejected – Dorris would not have made the cut. Phew.

Significantly, however, technology impacts our assessment process in several troublesome ways. Digital products and services are designed to reduce friction in the user experience, whether by streamlining selection with algorithms or by asking users to supply their data in exchange for personalised services. We are encouraged to make selections rapidly, or even to outsource our decision making to an algorithm altogether, trusting blindly that ‘someone else’ will take responsibility if something goes wrong with the Uber driver the app has allocated and that we have agreed, with a single tap, can come and collect us. Too often, we are allowing the convenience of technology to trump trust.

 
Illustration by Subin Yang.

I’m fascinated by the question ‘Who can we trust?’. I started off thinking this was just a personal question about being online, but as my thinking branched out I realised that, actually, the majority of social media is about presenting an ‘untruthful’ version of ourselves to our peers as well as our followers. If we can’t trust institutions, banks, the church, Facebook, our media sources and, really, each other – who can we trust?
It’s a really good question. Can we even trust the way we present the digital version of ourselves? The key part that often gets overlooked when we ask ‘Who can we trust?’ is context. To do what? Trust varies from situation to situation, relationship to relationship. Put simply, trust is highly subjective and contextual.

We often mentally ask of someone, ‘Do I trust you?’ A better question is ‘Do I trust you to do X?’ We need to think of trust as trusting someone to do something. For instance, you can trust me to answer these questions, but trusting me to, say, drive a lorry would be a serious mistake! And it doesn’t matter whether you are deciding to trust an estate agent, a lawyer or a babysitter – the traits of trustworthiness are the same: Is this person competent? Is this person reliable? Is this person honest? Does this person care?

There is no simple answer to the question ‘Who can you trust?’, but we do know that ultimately it comes down to a human decision. Technology can help us make better choices, but in the end it’s the individual who has to decide where to place their trust and who deserves it. That requires some care, especially when the image or piece of information presented is not always true. I don’t think society’s problem is a lack of trust; it’s the lack of a shared version of what constitutes facts and the truth.

The Cambridge Analytica data scandal brought the issue of ethics, trust and technology to the masses and Mark Zuckerberg referred to Facebook breaching the trust of its users. What do you think are the ongoing repercussions of a breach of trust of this size? Could it actually have a positive effect?
The initial revelations of the Cambridge Analytica data-harvesting scandal broke almost a year ago, in an investigation by the Guardian, without provoking the level of outrage and concern we are seeing right now. Why do we stay apathetic until the cumulative impact of serious scandals and breaches pushes us to ask the right questions?

We have seen time and again that scandals and breaches rarely lead to changes in user or customer behaviour. For instance, we get angry with the banks when we find out they have committed yet another episode of unethical behaviour, but do we move our money elsewhere? The outrage and disillusionment sparked by the Cambridge Analytica scandal may be enough to push some users off the platform altogether, but will they also delete other Facebook-owned apps, such as Instagram and WhatsApp? I don’t think so. Look at Facebook’s share price just a few months on – it’s at an all-time high. User forgiveness is a funny thing when it comes to tech, but I do think it has its limits. The problem with Facebook is one of scale and power. No one platform should control how more than 2 billion [people] share information. If this leads to new thinking around reducing the power of network monopolies over our lives, it will have had a positive effect.

Just as the banking crisis triggered a wave of investigations and enquiries, users will want platforms to be more open and forthcoming about how their data is being used. In the past year, over 50 countries have introduced forms of data regulation. I think the General Data Protection Regulation (GDPR) is going to have an enormous impact, and it’s something consumers clearly want. For instance, a recent survey by marketing company Pegasystems found that 82 percent of respondents would elect to exercise their rights over data ownership under the GDPR, which takes effect on 25 May 2018 across the EU, including the UK. Hopefully, GDPR will encourage users to pause more often before sharing personal information.

But regulation is not the only answer. As long as incentives remain the same, apps and platforms will continue to be designed around the commercial purpose of monetising their users. We need to be asking – can I trust the intentions of this company? And who really owns my data?

 
Illustrations by Subin Yang.

"The illusion of information can be far more dangerous than ignorance when it comes to deciding whom to trust."

 

You speak of trust leaps and say that we’re in the midst of making another ‘trust leap’ with the coming of the bots. What can we expect and should we trust them? For example, something in my gut freaks out when I think about trusting a car to drive me down the motorway at 100 kilometres per hour while I sit there doing nothing.
Technology is enabling millions of people to take what I call a ‘trust leap’, and it’s accelerating the rate at which we take them. A trust leap occurs when we take a risk and do something new, or in a fundamentally different way. The difference with the leaps around bots is that we’re not just trusting the technology to do something but to decide something. That’s a huge trust shift in itself.

We are already putting our faith in algorithms and bots over humans in our daily lives, whether it’s trusting Amazon’s recommendations on what to read or Netflix’s suggestions on what to watch. But this is just the beginning. We will soon be riding around in self-driving cars, trusting our very lives to the unseen hands of technology. It’s an age of trust on speed, one in which we increasingly outsource our capacity to trust to algorithms.

When it comes to trusting a car to do the driving, it’s easy to forget we are already used to being the passenger. Plus, cars don’t text or get distracted by screaming children! The real issue is not whether we will trust the car, but that we will trust it too much. We’ll quickly fill the time we used to spend driving, so we won’t be there for the car when it needs our hands back on the wheel.


This conversation is an excerpt of the full interview inside the covers of Issue 2. Subscribe or purchase back issues today.
Rachel Botsman’s new book, Who Can You Trust? How Technology Brought Us Together – and Why It Could Drive Us Apart, is published by Penguin and is out now. We highly recommend it.

Nikki Stefanoff is a brand storyteller, journalist, copywriter and editor. She works with brands on finding their voice and telling their unique story. Nikki is also a former editor of Matters Journal.