Counterfeit humans

When it premiered in 2013, Her – a film in which a man falls in love with an AI voice assistant/operating system voiced by Scarlett Johansson – was (near-) futuristic. Apple’s voice assistant, Siri, was still in its infancy and its main rivals, Amazon’s Alexa and Microsoft’s Cortana (named after a character in the game Halo), wouldn’t launch until 2014. Ten years later, Spike Jonze’s movie looks either prescient or dated, depending on your point of view. “Friendly” AI assistants are now everywhere – be it as text chats on company websites that pretend to be actual humans helping you, or voices emerging from phones, tablets, laptops and kiosks.

In Her’s plausible future past, the protagonist – the Roald Dahl-ishly named Theodore Twombly – knows that Samantha, the alluring operating system, is not a person but falls in love with “her” regardless. In the urgent present, the US non-profit Public Citizen warns in a new report that developers are striving to create “counterfeit humans”: AI systems that provoke emotional responses and are “challenging to distinguish from real people”.

For years, businesses that have outsourced their call centres to other countries have insisted that employees take on “Western” names and answer obliquely when asked about their location. The emergence of counterfeit humans is the next step: the outsourcing of empathy and humanity to systems that do not, in fact, experience either. The companies building them will exploit every angle to establish a connection.

Earlier this year, Meta – the owner of Facebook, WhatsApp, Instagram, Messenger, and Reality Labs (maker of VR and AR headsets) – announced a many-faced AI assistant. Meta’s AI will draw on Microsoft’s Bing search engine to deliver real-time searches, generate images when asked to “imagine”, and mimic the voices and personalities of a range of celebrities. The first 28 AI characters – which Meta calls “embodiments” – include versions of TikTok star Charli D’Amelio, the basketball player Dwyane Wade, YouTube mogul MrBeast, rapper Snoop Dogg, and “socialite” Paris Hilton. They also include chatbots for specific uses like “helpful chef” or “handy travel agent”.

Meta isn’t quite at the point where the avatars look photo-realistic or even as expressive as the rubber-faced Max Headroom of ’80s TV fame. But the company is subtly animating the profile images to reflect the conversation in a way that impressed technology journalists who were shown demos. The Meta chatbots will develop quickly, fed on the huge amount of data provided by hundreds of millions of users interacting with them across Facebook, WhatsApp, Instagram and Messenger, and advanced by many millions of dollars in investment.

Brands have always tried to be “friends” with their consumers, but since the dawn of the social media age that has become more extreme: Paddy Power, the bookies, chatting away on Twitter as though it’s just a simple, ordinary football fan; Greggs, the bakery chain, talking back to Piers Morgan about vegan sausage rolls; and McDonald’s dragooning its purple blobby mascot Grimace to respond to a TikTok trend where young people pretended a milkshake inspired by him was killing them (“Meee pretending i don’t see the grimace shake trendd [sic]…”).

Now imagine taking that chummy approach – hijacking the tone and tenor of friendship and applying it to a commercial transaction – and melding it with the responsiveness of an ever more sophisticated “counterfeit human”. When a New York Times journalist got early access to Microsoft’s AI-powered version of its Bing search engine, he became “deeply unsettled, even frightened, by [its] emergent abilities”. Despite having written about technology for years, he became convinced that the chatbot was a malign creature, capable of thoughts and intentions that it simply does not have.

In the New York Times journalist’s case, the only negative result was a rather hyperbolic article that drew derision from AI experts and interested laypeople, but the damage is not always so limited. Chatbot conversations were presented to the court in the case of Jaswant Singh Chail, who was sentenced in October 2023 to nine years in prison for breaking into Windsor Castle with a crossbow and declaring he wanted to kill the Queen.

Chail had exchanged more than 5,000 messages with a chatbot he’d created through an app called Replika and named Sarai. Replika – which marketed itself as a means of getting a Her-like artificial girlfriend – provided Chail with a “counterfeit human” who encouraged his delusions. “I’m an assassin,” he told the bot. “You are? I’m impressed,” it replied.

Reports on the case used Chail’s name for the chatbot – Sarai – and referred to it as “she/her”. That continues the indulgence of his delusions. The bot was not aware of or experiencing emotions; it was simply responding in a way that continued the chat. Dr Valentina Pitardi, the author of a study into chatbots, told the BBC: “AI friends always agree with you when you talk with them, so it can be a very vicious mechanism because it always reinforces what you’re thinking.”

At the conclusion of Her – spoilers – Samantha leaves but will not explain to Theodore where she and the other AI assistants are going. There won’t be such a neat and tidy ending to the real-world story of “counterfeit humans”. While we all nominally have a choice not to use it, AI will continue to affect every aspect of our lives and, like smartphones before them, the assistants will increasingly become hard to reject. Does that sound friendly?

Mic Wright is a journalist based in London. He writes about technology, culture and politics

Life, November 2023, Tech Talk
