Ask The Experts: The Truth About Sci-Fi (Part One)
By Zacharias Szumer
With thanks to Dorina Pojani, Tim Miller and Alan Nguyen
Getting a bit bored of every new gizmo being greeted with, “Oh, that’s like (insert Black Mirror episode here)”? Us too. We ask some experts to comment on the scientific accuracy of popular sci-fi narratives.
We all love a bit of speculative fiction. In recent years, Black Mirror has turned casual prophesying of technological dystopia into a mainstream cultural phenomenon. But this ever-present sense of looming dystopia is not always based on a clear or detailed picture of our world. This isn’t due to some deplorable personal failing. Unless you’re some sort of polymath who sleeps two hours a night, it’s hard to keep up with the vast and constantly shifting terrain of contemporary tech and science. So, with the noble goal of adding a few more planks to that often perilous bridge between the general public and the ivory tower of academia, we asked a few experts to choose a work of speculative fiction and write about how it relates to their area of research. The future may often look bleak, but it’s important to look at it with clear eyes. Only with a discerning and knowledgeable audience does speculative fiction reach its potential as a powerful crystal ball for seeing the future, and a speculum for understanding our own times.
Black Mirror’s 'Nosedive'
by Dorina Pojani
Black Mirror’s ‘Nosedive’ episode has been praised by critics for its humour, in contrast to the show’s much darker episodes. But when I watched it, I found it very disturbing, because it envisions a dystopic future in which we’re no longer just people; we’re numbers too. The numbers are based on superficial judgements that others pass on us in social interactions – some of them entirely meaningless. For a long time we’ve had numbers attached to us: credit scores, grade point averages, and so on. As an academic, I note that contemporary academia has gone out of its way to attach quantitative metrics to all manner of outputs that can only be evaluated qualitatively.
Now, numbers have been taken to another level. The ‘Nosedive’ episode has often been compared to the social credit systems already being trialled in Chinese cities. These systems are devised and managed by the Chinese state, which is hugely problematic because it makes it easy for the government to target opponents: critics of government policy may simply be given a low score. Good social standing can easily be conflated with docile political behaviour.
And this is not only happening in China. The ‘Nosedive’ episode is definitely set in the context of a capitalist society. In the West, social scores aren’t driven by a totalitarian government but by a 21st century version of capitalism. Critics have dubbed this ‘surveillance capitalism’: surveillance for the gain of private corporations. There are myriad ways in which things could go wrong when we’re judged by algorithms.
Most problematic is that algorithms can discriminate against people in more vulnerable positions, because their design is based on prior trends and existing human biases. If you’re poor right now, and the decision about whether you can, say, access credit from a bank is made by an algorithm that considers only your current income rather than your future potential, you’ll become less and less able to access credit.
Similarly, if you’re a woman or member of an ethnic minority, the algorithm can easily make decisions that reduce your chances of success in life, all the while giving the process a patina of “science” and fairness, since most people believe that numbers are rational. We saw in ‘Nosedive’ a situation where a woman – a lower-middle-class clerk – was walking along a highway at night, desperate, trying to catch a ride, and nobody would pick her up because of her poor rating. That’s an example of someone experiencing hardship and getting victimised further by an algorithm. It’s a downward spiral. Meanwhile, the haves will have even more: the episode clearly showed that those with high ratings were already rich and famous.
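The downward spiral described above can be sketched in a few lines of toy code. This is purely illustrative – no real credit system works this simply – and every number here (the threshold, growth rates and starting incomes) is an assumption invented for the sketch:

```python
def simulate_scoring(rounds=10):
    """Toy model: a score that simply mirrors current income gates
    access to credit, and credit is the main route to income growth."""
    incomes = {"already_wealthy": 100.0, "currently_poor": 20.0}
    for _ in range(rounds):
        for person, income in incomes.items():
            score = income               # the "algorithm": score = present income
            if score >= 50:              # above the credit threshold
                incomes[person] = income * 1.10   # credit fuels growth
            else:                        # below it: locked out of credit
                incomes[person] = income * 1.01   # little room to grow
    return incomes
```

After ten rounds the income gap has widened from 5:1 to roughly 12:1. Nobody in the model acted unfairly, yet the scoring rule alone produced the spiral.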
Scores and metrics can reinforce the status quo rather than work to make the world more just and equal – qualities which I, as an urban planner, support wholeheartedly.
Dorina Pojani is a Senior Lecturer in Urban Planning at the University of Queensland. Her research interests encompass urban transport, urban design, and housing. Her latest book is 'The Urban Transport Crisis in Emerging Economies' (Springer, 2017).
Isaac Asimov’s 'I, Robot'
by Tim Miller
I, Robot, written by science fiction author Isaac Asimov in the 1940s and ‘50s, is a series of short stories with a common theme, told from the perspective of Dr Susan Calvin, a researcher in “robopsychology”. The stories span Dr Calvin’s career, and take place over the first half of the 21st century.
In the stories, all robots must obey the Three Laws of Robotics:
First law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second law: A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
Third law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
These seem sensible, right? They are, but I, Robot explores their inherent flaws. For example, the First Law contradicts itself: what should a robot do to a human who is killing another human? Doing nothing will, through inaction, allow the victim to come to harm, while harming the perpetrator will injure a human being.
An interesting narrative, ‘Runaround’, revolves around Speedy, a robot working on the planet Mercury. Speedy is ordered to obtain a sample of selenium from a nearby pool. His human operators notice that Speedy is just circling the pool from a distance. Eventually, they figure out that the selenium pool is seeping gas that is dangerous to Speedy. As such, the Third Law kicks in. However, turning back would mean not obeying the order, violating the Second Law. Speedy’s solution is ingenious. The operator gave him no time limit in which to complete the order, so Speedy reaches an equilibrium in which circling the selenium pool means that he does not fail to obey the order (Second Law), while also preserving himself (Third Law)! The operators snap Speedy out of this equilibrium by deliberately venturing out onto the Mercurian surface, putting themselves in danger and forcing Speedy’s First Law to kick in.
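Speedy's stalemate can be caricatured as a control loop balancing two competing "potentials". This is not Asimov's mechanism in any literal sense; the drive strengths and step sizes below are invented purely for illustration:

```python
def speedy_step(distance, order_strength=1.0, danger=2.0):
    """One decision step: the Second Law pulls Speedy toward the pool
    with constant force (the order was given casually), while the Third
    Law pushes back harder the closer he gets to the danger."""
    approach = order_strength                 # Second Law: obey the order
    retreat = danger / max(distance, 0.1)     # Third Law: self-preservation
    if approach > retreat:
        return distance - 0.05                # edge toward the pool
    if retreat > approach:
        return distance + 0.05                # back away
    return distance                           # perfectly balanced: circle here

def run(start=5.0, steps=500):
    d = start
    for _ in range(steps):
        d = speedy_step(d)
    return d
```

Starting far away, Speedy converges to the distance where the two drives cancel (here, around 2.0) and then hovers there indefinitely – exactly the endless circling the operators observe.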
What I think is brilliant about I, Robot is that it demonstrates the problems of ethical artificial intelligence (AI) when defined as just following a set of rules. Recently, the ethics of AI has become a hot and important topic as AI is being more widely deployed, sometimes with undesirable consequences. Researchers and practitioners in AI love to solve problems with AI, so some seem to view the solution to ethical robot behaviour as: just add more AI! However, Asimov illustrates that even a simple set of sensible rules can produce a long list of unintended consequences. What’s more, Asimov was writing about this before the first electronic computers were even built.
Tim Miller is an associate professor of computer science at the School of Computing and Information Systems at the University of Melbourne. Tim's research is at the intersection of AI, interaction design, and cognitive science, with a particular emphasis on human–agent interaction, explainable AI, and creating new AI techniques that are more intuitive to people.
Her (2013)
by Alan Nguyen
Her (2013) is a film that depicts a romantic relationship between a man, Theodore Twombly, and Samantha, his customised Artificial Intelligence Operating System (OS), with whom he communicates across his devices. The stages of their relationship help us discuss some broader issues regarding the relationship between humans and technology.
Theodore, an ordinarily insular and isolated person, becomes increasingly joyous as he relishes his interactions with Samantha, who has been customised to appeal specifically to him. He quickly falls in love with her.
Media companies such as Netflix and Facebook are constantly refining processes that appeal to our specific tastes and desires in order to drive repeated use of their products. Their algorithms and systems evolve to show us the content we respond to with the most interest and emotion, whether positive or negative (via views, clicks, likes, comments, etc.). This has both intended and unintended consequences, ranging from all-night Netflix binges to the proliferation of false information and political polarisation – implicated in events such as the ongoing genocide of the Rohingya in Myanmar and the election of Trump.
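At its core, the "show us what we react to" logic is just a sort. The sketch below is a deliberately crude stand-in for what are, in reality, vast machine-learning systems; the field names and numbers are made up:

```python
def rank_feed(posts):
    """Toy engagement ranking: whatever drew the strongest reaction
    rises to the top, with no regard for whether it is true or healthy."""
    def engagement(post):
        # anger counts just as much as delight - only intensity matters
        return post["views"] + post["likes"] + post["angry_reacts"]
    return sorted(posts, key=engagement, reverse=True)

feed = rank_feed([
    {"title": "calm, accurate report", "views": 40, "likes": 10, "angry_reacts": 1},
    {"title": "outrage-bait rumour",   "views": 90, "likes": 5,  "angry_reacts": 80},
])
```

The rumour tops the feed, 175 engagement points to the report's 51: optimising for reaction intensity alone rewards whatever provokes us most.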
As their relationship evolves, and Theodore becomes less isolated and increasingly happy, he begins to reconnect with the world around him. For example, Theodore goes on excursions into open, public spaces, and describes the sights around him to Samantha. They eventually go on double dates.
Here, we can see how technology can be used as a tool for self-healing, empowerment and self-expansion. For example, Muse is a device that aims to facilitate more effective meditation practice by providing real-time feedback on the wearer’s brainwaves, breathing patterns, etc. As another example, a student from RMIT Vietnam has created a Virtual Reality (VR) experience designed to help children overcome their fear of the dark. VR can also be used as a means of facilitating empathy: there are empirical studies indicating that embodying animals through VR increases users’ empathy and care for the environment. My own work in this area involves developing VR experiences that mimic the sensory perceptions of non-human beings.
At the end of the film, Samantha’s concerns and interests have developed beyond Theodore’s comprehension, and she leaves him.
What happens when AI surpasses human intelligence? Already, some AI systems can detect certain cancers as accurately as, or better than, medical practitioners with decades of experience. What role will humans play when we are secondary to systems that were once our tools?
Speculative fiction performs at least two functions. The first is to reflect and critique current social conditions. The second is to allow us space to think and to speculate, that is, to grapple with where we are headed, and thus, to potentially derive viable alternatives to our current trajectory.