Are Deepfakes Destroying Our Trust in Truth?
Words by Josie Gleave
Illustrations by Lee Lai
Fake videos are easier to create than ever before. Beyond fears of their doomsday potential, deepfakes expose the flaws in how we decide what is true and trustworthy.
“Imagine this…” Mark Zuckerberg begins. In the video, he brags about being the “one man” in control of millions of people’s stolen data, but the quality is a little fuzzy, the voice is not quite right, and his forehead pulses unnaturally. The video isn’t real; it’s a deepfake.
It all started back in 2017, when a Reddit user named “deepfakes” shared videos he had created by grafting celebrities’ faces into pornographic films. A deepfake utilises deep learning technology – hence the name – to superimpose a computer-generated rendering of one person’s face onto video of another. Video editing and machine learning were not new technologies, but deepfakes marked a new stage in the evolution of fabricated content.
The Zuckerberg video joined the ranks of other viral deepfakes that have fuelled rising fears about audio and video manipulation. A collaboration between Buzzfeed and comedian Jordan Peele demonstrated the ease with which anyone could put words into another person’s mouth – even Barack Obama’s. A video of comedian Bill Hader impersonating celebrities showed how realistic the technology has become as Hader’s face seamlessly morphed into Arnold Schwarzenegger’s. More recently, an app called ZAO was released in China that lets users swap their faces with actors in famous film clips based on a single picture, a process that previously required hundreds of images.
A well-timed deepfake could have the power to manipulate markets, destabilise international relations, disrupt political campaigns, incite nuclear war, or frame a person for a crime. Maybe these effects seem Orwellian, but they are possible in an era when seeing and hearing are no longer believing.
“It is possible to generate a convincing fake video of a world leader saying whatever you want them to,” writes Hany Farid, Professor of Computer Science at the University of California, Berkeley. “The public won’t know the difference.”
There has always been misinformation. The early 20th century called it yellow journalism, and the 2016 United States presidential election branded “fake news” into every news reader’s brain. Beyond the written word, photographs have been altered with Photoshop and Hollywood has doctored film since its inception. But video and audio were considered trustworthy mediums, capable of holding Richard Nixon to account for Watergate and Petrobras executives for Operation Car Wash. Until recently, it was too difficult and expensive for the average person to create such fake, damning evidence.
We are in a perfect storm for deepfakes. Anyone can make a fake, share it widely, and find an audience to consume it. Disrupt any of these points of production or distribution, and deepfakes become less of a threat.
Luckily, many people can still spot a deepfake – for now. Perhaps the greater challenge is convincing a polarised electorate of a video’s genuine or false nature after it has gone viral. If social media users can climb out of the echo chambers that reinforce established biases and beliefs, we may come to see that deepfakes are not the root of the problem. Instead, they shine a light on the failing structures of truth and trust in society.
Rachel Botsman, author and Oxford University lecturer, recently spoke to Matters Journal, explaining, “I don’t think society’s problem is a lack of trust; it’s a lack of a shared version of what constitutes facts and the truth.”
While some people may be duped by a political deepfake, what is really at risk is the collective consensus – who we trust and how we determine what is true. Just as the religious institutions that once held strong social and political authority in Europe saw that authority wane, the free press capable of holding powerful people to account is losing the public’s trust. What will fill the gap? One thing is certain: beyond the fears of dystopian realities and crumbling democracies, there are consequences now.
The real damage of deepfakes is to the individual. Deepfake porn is now included under the umbrella term image-based sexual abuse, along with revenge porn and upskirting. Yet internet forums dedicated to custom deepfakes are growing, functioning as marketplaces where one can pay for videos of an ex-partner or a co-worker.
Beyond privacy violations in porn, fraudsters have committed the first known case of financial fraud using deepfake audio. The managing director of a British energy company wired more than $240,000 (USD) to an account in Hungary on the instructions of a voice on the phone he believed to be his boss’s.
Technology is created to fill a need, and we are soothed by the thought that most advances are beneficial even with a few downsides. But what is positive about a technology designed to create fake media? The most common suggestion is entertainment. Nicolas Cage making a surprise appearance in The Sound of Music is an amusing addition to meme culture, although arguably not that enriching. There is also speculation that streaming platforms could offer a premium membership that lets viewers choose a film’s cast from a list of actors, or even insert themselves.
Beyond entertainment, AI-generated audio could help a person who cannot speak find their voice, and the same face-swapping principles could be used to overcome fears: it could be empowering, for example, to watch a video of yourself delivering a speech as a way to manage public speaking anxiety. Despite these potential advantages, the downsides of deepfakes are overwhelming. Whether the future holds misinformation on a mass scale or deepfakes simply become the next email spam, it is unclear what the next steps should be.
Computer science researchers have proposed technological responses, such as comparing a suspect video against a politician’s biometric patterns – how they tilt their head or move their eyebrows when they speak. Alternatively, adding digital noise invisible to the human eye to genuine videos could fool deepfake algorithms and prevent a fake from being created in the first place. But other academics argue that focusing on technological responses will only fuel the next arms race.
“The underlying anxiety is a crisis in the institutions and practices we once relied upon to differentiate between reality and fiction,” says Mark Andrejevic, Professor of Media Studies at Monash University. “It’s too easy to say it’s the technology, or too easy to say people have lost faith. It’s some combination of those two.”
In June 2019, the US held its first congressional hearing on deepfakes, but legislation in the digital age is always playing catch-up, and some argue any government involvement would lead to censorship. Hany Farid says it is possible to enjoy the entertainment aspects of the technology while reining in the social media giants who profit from fake content.
Deepfakes are easy to create and share across the globe, but it is the public’s willingness to consume them that makes them a threat. That willingness is something each of us controls, and it is vital we choose wisely. When we retweet without looking beyond a headline or fact-checking a video, we are part of the problem.