Can We Build Truly Benevolent AI?

Words by Renata Carli
Illustrations by Lee Lai
This story was originally published in Issue 4 and is brought to you by our impact partner, Deakin University.

With great power comes great responsibility. AI has the potential to both harm and help society – the path it takes, writes Renata Carli, is up to us.


In the early 1940s, American biochemist and science-fiction writer Isaac Asimov set out his Three Laws of Robotics. They went like this:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov wasn’t dreaming of an algorithm when he wrote these rules. He instead imagined robots modelled in the likeness of humans, whose purpose was to serve and benefit humanity. In his future, every robot’s ‘positronic’ brain was programmed according to these three laws, meaning they thought in a way that humans could recognise and understand.

The Three Laws and the concept of positronic brains are central to the short stories in his 1950 collection, I, Robot. Asimov’s writing stands out in the sci-fi canon because it rejects the killer robot trope that still pervades the genre today. Instead, Asimov put his faith in automatons to do the right thing, so long as we humans program them in the right way.

“They’re a cleaner better breed than we are,” says I, Robot protagonist Dr Susan Calvin of the machines around her. Calvin is an expert in the field of ‘robopsychology’, which studies the behaviour and cognition of positronic robots. They too, apparently, need therapy. It’s important for the humans of Asimov’s universe to understand their automated counterparts; if they don’t, they risk a whole race of robots running amok.

Fast-forward to the present day and interactions between humans and AI are pretty much as frequent and normal as Asimov predicted – and perhaps even more beneficial than he imagined. On any given day, our automated friends help us in a multitude of ways. They safely pilot our planes, work out the fastest route home from the airport and might even predict a new favourite song on the way.

We diverge from Asimov’s harmonious vision, though, on two vital points. First, unlike the robopsychologists of I, Robot, we often don’t understand what makes our automations tick. Second, and most alarmingly, we’ve no laws preventing them from harming people. In June 2019, Rafael, an arms company with roots in Israel’s Ministry of Defense, announced it had integrated AI and deep learning capabilities into a family of SPICE bombs. This gave the SPICE-250 glide bomb the autonomy to identify and select its own targets. In a statement, the company called the integration “a technological breakthrough, enabling SPICE-250 to effectively learn the specific target characteristics ahead of the strike, using advanced AI and deep-learning technologies.” SPICE, by the way, stands for ‘Smart, Precise Impact, Cost-Effective’, which is a succinct (and in this case, unsettling) summary of AI’s key selling points – remove the human element and, supposedly, you’ll save money and make fewer mistakes.

Giving a bomb the power to choose who it kills is an ethical quagmire not many would want to wade through, but one point of great concern is the fact that SPICE-250’s algorithm is black boxed. Black boxing refers to technology whose inner workings are kept hidden: the code or model behind a system is closed off, then sold on to organisations that can’t inspect the source. This means that when an algorithm is black boxed, no-one but its creator can know how or why it spits out the outputs it does. And sometimes not even the creator understands why their algorithm has reached a decision. That’s nerve-wracking in the context of a precision munition with a lust for blood. Rafael has explained that the bomb uses “terrain data, 3D modelling and algorithms” to decide where to strike, but the algorithm itself remains obscured. The concept of black boxing is antithetical to Asimov’s positronic robots, which were imagined as open books – so understandable to humans that we would psychoanalyse them.

The concern around black boxed algorithms grows deeper as AI creeps into more and more of our important, human decisions. A host of ethical conundrums arises when we hand over autonomy and power to machines whose thought processes we don’t understand. Smart bombs like the SPICE-250 evoke the oft-memed trolley problem, a thought experiment that asks people to choose between letting a runaway trolley run over five people and diverting it so that it only runs over one. Most choose to intervene and minimise the damage, though answers begin to vary when the latter option changes to pushing someone in front of the trolley.

What’s important is that the trolley problem is just a thought experiment – a hypothetical rendering of an unlikely scenario designed to unearth patterns and processes in our ways of thinking. Our responses may change according to a host of variables. Who are we pushing in front of the trolley? A small child or a dictator? Like all thought experiments, the trolley problem exists not to be solved but to expose the reasoning and interrogate the ethics behind a person’s decision.

 

But in the age of the algorithm, a solution is not only necessary – it’s all there is. When a smart bomb makes a decision, all we can see is its outcome. And when an algorithm is black boxed, we can’t know its intent or reasoning. In simpler times, a machine did whatever you programmed it for. But as AI evolves and machines start making autonomous decisions, understanding how technology thinks and behaves is a looming challenge.

The general principle behind getting an algorithm to make complex decisions is that the outcome will be more accurate, less flawed and fairer. But as we roll algorithms out across whole societies, we begin to see patterns of bias emerge. In October 2019, a University of California study led by Ziad Obermeyer exposed a flaw within an algorithm that made black patients in the US healthcare system far less likely to receive important follow-up treatments than white patients. The algorithm was meant to assess which patients required special care and allocate them more resources. The study found that, although black patients had more chronic health issues than their white counterparts, a hidden bias made them less likely to be flagged as high risk. The algorithm had been programmed to assess risk according to the patient’s previous medical expenses, rather than their condition, on the assumption that the more you spend on hospital bills, the sicker you must be.
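To see how a proxy like spending can smuggle bias into an apparently neutral formula, consider a stripped-down, hypothetical sketch (the threshold and figures below are invented for illustration, not drawn from the study itself): two patients with the same chronic conditions, one of whom has historically spent less on care because of poorer access to the system.

    # A hypothetical illustration of proxy bias: the 'risk' flag is driven by
    # past spending, not by how sick a patient actually is.

    SPEND_THRESHOLD = 5000  # invented cut-off for flagging extra care

    def flag_for_extra_care(past_annual_spend):
        """Flag a patient for follow-up care if past spending exceeds the threshold."""
        return past_annual_spend > SPEND_THRESHOLD

    # Two patients with identical chronic conditions; patient B has spent less
    # on care because of poorer access to the healthcare system.
    patients = {
        "A": {"chronic_conditions": 4, "past_annual_spend": 7200},
        "B": {"chronic_conditions": 4, "past_annual_spend": 3100},
    }

    for name, record in patients.items():
        flagged = flag_for_extra_care(record["past_annual_spend"])
        print(f"Patient {name}: {record['chronic_conditions']} chronic conditions, "
              f"flagged for extra care: {flagged}")

    # Same level of need, different outcomes. The bias hides in the proxy,
    # without any explicit reference to race.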

What Obermeyer and his team revealed was not just a flawed algorithm but a flawed society, in which access to healthcare is influenced by race. In the study, the researchers observed: “Something about the interactions of black patients with the healthcare system itself leads to reduced use of healthcare. The collective effect of these many channels is to lower health spending substantially for black patients.”

As long as AI is programmed by humans with data influenced by human bias, it will continue to harbour that bias, even if it is supposedly programmed towards objectivity.

That thought might be alarming, but Obermeyer and his team shone a light on a serious problem by studying the formula and understanding its decision-making process. They Dr Calvined the algorithm, if you like. Realising that healthcare expenditure is a flawed metric for sickness, they opened the door to better algorithm programming, while also exposing racial inequality in healthcare access.

Flawed AI has the potential to harm us if we don’t understand its workings. But if we do, it can teach us how to improve our communities. Using the healthcare system as a case study, we can look at the flipside of the bias unearthed by Obermeyer and his team. They found that unjust outcomes arise when we feed AI data collected from an unjust system. No surprise there, but on a less macro, more interpersonal level, AI can help build a fairer care industry by removing the potential for human bias to affect the level of care a person receives.

Human-inflicted transgressions are rife within aged care. In Australia, a Royal Commission into Aged Care Quality and Safety was announced in 2018 and has since received more than 5,000 submissions. The climbing rate of elder abuse has been attributed to a medley of factors, among them an increasing number of people entering an already understaffed system.

Nursing and aged care require empathy – an innately human quality – so it might seem counterintuitive to invite robots into the care industry. But human emotion comes with a side of human error. Automated carers can avoid some of these errors, learning about their patients and picking up on small fluctuations in behaviour.

Smart tech is already being trialled and rolled out across the health and aged care systems – smart sensor systems dotted around homes, for example, to detect falls and other potential warning signs. Other AI, in the form of voice-activated virtual assistants, will notice if you haven’t woken up at the usual time or accessed your pill cupboard, and send loved ones reminders or alerts. Think Amazon’s Alexa, but more of a worrywart.
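Under the hood, many of these alerts come down to simple rule-checking rather than anything mysterious. The sketch below is purely illustrative – the sensor events and time limits are hypothetical, not taken from any particular product – but it shows the shape of the logic:

    from datetime import datetime, timedelta

    # Hypothetical sensor log: the last time each household event was detected.
    last_seen = {
        "woke_up": datetime(2024, 3, 2, 6, 45),
        "pill_cupboard_opened": datetime(2024, 3, 1, 8, 10),
    }

    # Illustrative rules: how long each event may go unseen before raising an alert.
    limits = {
        "woke_up": timedelta(hours=12),
        "pill_cupboard_opened": timedelta(hours=26),
    }

    def overdue_events(now):
        """Return the events that haven't been seen within their allowed window."""
        return [event for event, limit in limits.items()
                if now - last_seen[event] > limit]

    now = datetime(2024, 3, 2, 11, 0)
    for event in overdue_events(now):
        # A real system would notify a carer or family member here.
        print(f"Alert: '{event}' not detected for longer than expected.")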

By learning and analysing patterns, AI can understand the unique way that conditions like dementia present in patients. It won't get frustrated with patient behaviour, and doesn’t rely on patients to report important information, like whether or not they’ve taken their medication. There's work to be done still – for example, we need to teach AI how to react to patients who refuse medicine. And of course none of these advancements will spell the end of human carers any time soon, though they can provide some much-needed relief to the care industry by helping out in areas where humans fall short. Human compassion isn't always applied equally; we empathise more with some people than with others, and who wins our attention can be based on internal bias. There's a whole syndrome named after this phenomenon: 'missing white woman syndrome', which refers to the public's tendency to give more media attention and allocate greater resources to missing persons cases involving white women than those involving men or people of colour.

 

"Flawed AI has the potential to harm us if we don’t understand its workings. But if we do, it can teach us how to improve our communities."

 

"Can an act be benevolent if it is carried out without compassion? What is compassion? Who decides between right and wrong? What is a brain?"

Human emotion and cognition are important to many aspects of care, but they also create inconsistencies, like favouritism and fluctuations in mood, which in turn affect the standard of that care. A robot, on the other hand, rarely has a mood swing. In theory, it should treat people equitably based on their needs or symptoms. If programmed consciously – with an awareness of the kinds of systemic biases uncovered by Obermeyer and team – AI has the potential to mitigate some of the human factors that contribute to elder abuse.

However, it’s tricky to discuss the ethics and ‘benevolence’ of AI without opening a can of semantic worms. Can an act be benevolent if it is carried out without compassion? What is compassion? Who decides between right and wrong? What is a brain? Technology designed to care for sick and elderly people can’t exercise compassion as we know it, but it can certainly help to improve patient outcomes if it’s programmed according to compassionate principles. Does this make it benevolent? Does creating AI that makes decisions according to a set of human ethical codes mean that the AI has ethics? In Asimov’s world, the programming of ‘positronic’ robots according to the Three Laws results in some robo-conundrums that are distinctly recognisable to humans. Many of his short stories involve automatons caught between two or more of the laws that govern their behaviour. In other words, they experience ethical dilemmas.

In ‘Liar!’ we meet Herbie, a mind-reading robot who resorts to lying to humans to avoid revealing information that might hurt their feelings and break Asimov’s First Law. His compassionate tactic falls apart when he’s asked to tell the truth. According to the Second Law, he must obey, so he’s forced to choose between hurting people and defying their orders. Knowing either action will mean breaking the laws he was programmed to obey, he experiences a moral paradox and his positronic brain collapses beyond repair. It’s one of the more poignant moments in I, Robot because we’ve all been there – tossing up between two acts, both of which seem wrong, trying to determine the lesser of two evils. Even though Herbie’s breakdown is a result of his programming, his dilemma seems so human, and using white lies to protect others from emotional harm seems so compassionate. If benevolence is just a series of kind acts guided by good intentions, then there seems to be no reason we can’t program benevolent AI.

Ask the hundreds of older people who’ve met Stevie if a robot can be compassionate and the answer might be ‘yes’. Stevie is a ‘socially assistive robot’, developed by a team from Trinity College Dublin and launched in 2018 at Rome’s Maker Faire. He’s a human-like robot with kind eyes and he looks like something from a Hanna-Barbera cartoon – how baby boomer children must have imagined the robots of the future in the 1960s. Designed to reside in nursing homes, Stevie can facilitate bingo, link residents to loved ones via video chat and lead karaoke sessions (with varying degrees of competence). Most importantly, he can chat to them. Stevie is programmed to roll around nursing homes, earnestly checking in with residents and asking questions. For those experiencing loneliness, he’ll listen with endless patience. If someone needs human assistance, he’ll understand and alert staff. When someone relays bad news, his smile will rearrange into an empathetic frown.

Stevie may not be human, but his programming endows him with at least some human qualities that are vitally needed in aged care, and without the biases or behavioural fluctuations that can arise instinctively in human carers. And, like Herbie, there’s something so familiar and human about Stevie.

When his kind, brown eyes fold into a concerned frown and he offers, “I’m sorry to hear your bad news,” it’s hard not to believe him. Perhaps the key to creating a symbiotic and helpful human-robot relationship is creating robots in which we can recognise ourselves.


This story is brought to you by our edition partner, Deakin University. In 2019, Deakin launched the Applied Artificial Intelligence Institute (A²I²), which seeks to transform industries and improve lives by implementing safe, effective uses of AI and exploring new frontiers in AI research.
Read: I, Robot by Isaac Asimov.
Do: Check out SOFIHUB, an award-winning smart home system offering at-home support for older people, developed in partnership with Deakin University. -> sofihub.com
Do: Nab your copy of Issue 4 from our shop.

Renata Carli is a writer, editor and crossword enthusiast from Naarm/Melbourne.