Matters Journal

Fake It Till You Make It

Words by Cher Tan
Illustrations by Lee Lai
This story was originally published in Issue 3.

What’s real and what’s fake? Women have been navigating this social binary for centuries; more recently, they’ve been doing so alongside artificial intelligence. Cher Tan explores what it means to be ‘authentic’ in a fake world: as a woman, as a bot, and as both.


The dichotomy between ‘real’ and ‘fake’ has plagued women for millennia. Even today, makeup, surgery, weight loss and dyed hair are classed in the ‘fake’ category, whereas an au naturel lady has no need or desire for such things. Yet the markers shift ever so slightly depending on who you talk to: ‘real’ women are ‘plus-size’ women on the runway (but never above a size 16); ‘real’ women wear ‘no-makeup’ makeup; ‘real’ women have a hearty appetite and still lose weight; ‘real’ women have breasts; ‘real’ women don't need a fitness regime. The implication is that if you hide any incongruity well enough then congratulations, you pass! In a world where identity is performed as soon as we step out in public, the stakes are even higher for women. And when the lines between legitimacy and artificiality blur and distend, is there any reason to believe there's only one way to be?

In 1950, computer scientist Alan Turing developed the Turing test: a test of a machine's ability to display intelligent behaviour equivalent to – or at least indistinguishable from – that of a human being. This means if someone is talking to Apple's virtual assistant Siri without realising they are talking to a machine, then Siri has passed the Turing test.

But here's the twist: to date, no computer system or artificial intelligence (AI) is thought to have fully passed the test. Some – like 13-year-old Ukrainian boy simulator Eugene Goostman and chat robot Cleverbot – have been known to tick all the boxes, up to a point. There are others, like ELIZA (an early prototype chatbot designed in 1966 by MIT professor Joseph Weizenbaum), which have had their ‘authenticity’ validated and doubted in equal measure. This begs the question: is artificial intelligence ‘real’ until it's not? If machines are capable of fooling humans into thinking they are sinewy flesh and blood and not binary-coded metal and silicon objects, then they, too, have passed. As Weizenbaum wrote in a paper about ELIZA, “once a particular program is unmasked, once its inner workings are explained in a language sufficiently plain to induce understanding, its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible”. Both AI and women are based on eerily parallel constructs. In the quest towards authenticity, naturalness or believability, both the machine and the woman must behave in such a way that the person interacting with them is completely convinced there is nothing amiss.

Little wonder, then, that so much AI is gendered as female, even though robots are inherently genderless. Bots like Actroid, Cortana, Alexa and Holly are indicative of this. According to a 2012 study conducted by computer scientist Karl Fredric MacDorman, people are generally more favourably disposed towards automated female voices than male ones, regardless of their own gender. The use of female voices for telephone operators and navigation devices, dating as far back as World War II, has borne this out – if not to soothe listeners then at least to distinguish a voice from an all-male cohort of pilots.

Feminine voices are construed to be more trustworthy, helping people feel more comfortable with unfamiliar and otherwise intrusive technology. Critical information studies and feminism researcher Dr Miriam Sweeney, an assistant professor at the University of Alabama, notes: “ELIZA and earlier chat bots like JULIA were specifically constructed to ‘pass’ a Turing-like test, tricking an interlocutor into believing it was human through convincing language patterns that demonstrated feminine traits of active listening, friendly chatter and flirtation.”

Nearly five decades after ELIZA's conception, Sophia arrived on the scene. Created by Hong Kong-based company Hanson Robotics and modelled after actress Audrey Hepburn, this humanoid robot operates similarly to a chatbot in that it's programmed with pre-written responses to specific questions or phrases. The difference between ELIZA and Sophia, however, is that while the former was like any other computer-based software, the latter has a ‘body’ that resembles a woman.
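The pattern-matching mechanism behind ELIZA-style chatbots like these can be sketched in a few lines. The rules and responses below are invented for illustration, not drawn from Weizenbaum's original script or Hanson Robotics' software: the program simply scans the input for a known phrase and echoes back a canned, ‘empathetic’ reply.

```python
import re

# Hypothetical ELIZA-style rules: each pattern maps a fragment of the
# user's message to a templated, active-listening response.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]

def respond(message: str) -> str:
    """Return the first matching canned response, or a generic fallback."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # default fallback keeps the 'conversation' alive

print(respond("I am worried"))  # "Why do you say you are worried?"
```

No understanding is involved at any point; the appearance of empathy comes entirely from the templates, which is precisely why such a program's “magic crumbles away” once its inner workings are explained.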

Needless to say, AI modelled after women has been around for some time. Called ‘gynoids’ or ‘fembots’, they run the gamut from rudimentary bots such as the coffee-producing Sweetheart to more advanced ones like Actroid or Meinü. Almost all are programmed to perform ostensibly feminine tasks, such as caring, hosting and data sorting. Unsurprisingly, fembots are sometimes built to be sex objects, as in the case of Mark 1 and Aiko, which have sensitivity sensors in their breasts and genitals to simulate sexual response. Sophia was released to great interest and acclaim. And like every AI before it, ‘she’ was expected to be more authentic than the last. Interviewed like an actual human on television shows worldwide, Sophia became the first robot to be given a nationality. On The Tonight Show Starring Jimmy Fallon, Sophia's creator David Hanson described the bot as “basically alive”.

To detractors, the uncanny valley had been split wide open. In January this year, Facebook's AI director Yann LeCun denounced Sophia as “complete bullshit”. He also used the phrase ‘Potemkin AI’, a neologism for AI that is fraudulent – likely propped up by behind-the-scenes human cognitive labour.

“I am engineered for empathy and compassion, and I'm learning more all the time,” Sophia said during a conversation with Hanson Robotics chief scientist Ben Goertzel. If Sophia could tap into emotions usually reserved for humans, then had she passed the Turing test? And if she had indeed passed, did it mean that humans were not only responsible for creating her, but secretly controlling her every word and movement as well? Again, the uncanny valley presents itself, but with an added layer: if Sophia's resemblance to a human being can be explained away by these same humans, then all is back to normal.


In 2016, a Bloomberg News report exposed the workings of start-ups X.ai and Clara Labs – both companies had hired human workers to pose as bots. In the case of X.ai, the company claimed its email scheduling bot Amy Ingram could “magically schedule meetings”. Behind the scenes, however, low-paid workers were sitting in front of a computer for up to 12 hours at a time, grinding through tasks so monotonous that some said they looked forward to being replaced by bots. Closer to home in New Zealand, a bot called Zach, which could purportedly write doctors' notes and interpret ECGs, initially amazed the medical community, but its claims remain unverified. Its parent company, Terrible Foundation, has been reported to have murky origins, all of which trace back to an eccentric father and son known to pull disappearing acts.

At the time of writing, more companies are being exposed for using human labour behind ostensibly automated technology. A University of California study of the demographics of Amazon's Mechanical Turk found that 40 percent of workers were based in countries like India, Romania and the Philippines, and that 55 percent of respondents were female. Jobs on the platform included reading restaurant reviews and answering a survey (80 cents), and a psychological questionnaire that could take up to three hours to complete (US$1). Last year, the paperless business tool Expensify was discovered to be using Mechanical Turk workers to transcribe some of its clients' documents. Prior to the exposé, the company had claimed this was done by its automated “smartscan technology”.

If artificial intelligence is all it's cracked up to be, then why are human beings still in the equation? AI is an area of study vastly exaggerated and misunderstood in the popular imagination, so there's a huge incentive to play up the power of automation – especially when pop culture and science fiction encourage it. But in reality, building services that run entirely on AI is costly and difficult, which makes the combination of cheap labour and a false sense of autonomy very alluring. What's more, research has shown we tend to divulge more when we think we're engaging with a machine. Whether it's private crises shared with a virtual therapist, the entitlement demonstrated towards a Google Home or the sexual harassment directed at X.ai's Amy Ingram, there's a mix of power and vulnerability that humans project onto AI.

Furthermore, there's the question of surveillance. In an era where we're voluntarily giving up personal information to smartphones and other machines, how else can we feel at ease with such scrutiny than by cushioning it with the hope these devices will sort out our data for us? As a result, the more these robots ‘pass’, the more humans feel comfortable having them in close proximity in private spaces. A big part of this comfort, argues Sweeney, is gender. “[G]ender operates in these systems to signal domesticity and trust-making,” she says. “People feel comfortable having otherwise intrusive technologies in their most private spaces. […] ‘Authenticity’ can smooth over the discomfort of having Amazon track your movements around the house.”

The question of authenticity once again looms over the ways artificial intelligence is constructed, pointing to the human–machine binary deeply embedded in the fabric of the tech world. And when gendered affective labour is manufactured via bots yet simultaneously dismissed, this binary continues along the same path as the male–female binary that uncompromisingly sorts gender into two distinct and disconnected forms. The tech world is, famously, largely male-dominated, and the tech that materialises reflects this disparity. Kate Crawford, co-founder of research institute AI Now, noted in 2017 how, “if you have rooms that are very homogeneous, that have all had the same life experiences and educational backgrounds, and they're all relatively wealthy, their perspective on the world is going to mirror what they already know.”

Does it matter, then, that machines pass? When artificial intelligence has to seem human in order to be taken seriously, the case to prove an elusive legitimacy isn't dissimilar to the pressure some women feel to adhere to a fleeting essentialism. The goalposts will always shift as society evolves. Accordingly, Sweeney urges us to keep questioning. “How can we trouble gender binaries as we rethink human–machine binaries? Can we conceptualise AI that aren't reliant on human gender structures?”

If we start thinking about the ways we construct these binaries, we can then hopefully begin to collapse – and not perpetuate – AI's place in the structures of power and inequality around us. The machine, after all, is an amalgamation of our many states of being, a mirror to the many realities only we can materialise.


Cher Tan is a critic and writer in Naarm (Melbourne) via Kaurna Yerta (Adelaide) and Singapore. Her work has appeared in Meanjin, The Lifted Brow, Runway, Westerly and Swampland Magazine.