
Future imperfect

I’m part of the British generation that spent its childhood in a haze of complacency. When the Berlin Wall fell in 1989, I was five, more engaged with building and demolishing piles of Lego and its more simple-minded cousin, Duplo, than weighing up the complexities of geopolitics. Through the Nineties, a Fukuyama-tinged sense that history had ended permeated the culture of a complacent West, which pretended that the compromises and collusions of the Cold War and its clumsy, chaotic end would not lead to any kind of reckoning.

Humans aren’t good at predictions

On 11 September 2001, I was cocooned in an A-level history lesson when the attacks that would shape the next two decades occurred. I learned that everything had changed from the driver of our school bus, Trevor, who told me about the planes striking the Twin Towers as though he were recounting Norwich City’s latest result or a near-miss he’d had on the roads that day. Two years earlier, the media’s great fear had been that the Millennium Bug would lead to disastrous system failures across the world – including planes dropping from the sky. That didn’t come to pass, and its non-arrival has become a popular but incorrect line used to lambast people for worrying when, in fact, disaster was averted because a lot of professionals worked very, very hard to prevent it.

Could 9/11 have been prevented? Yes, probably. Shimmying past the many conspiracy theories that have sprouted in the fertile swamps of the internet since – no, The Simpsons didn’t predict it – we can see there were plenty of signs that Al-Qaeda intended to mount a major attack on the US. The 9/11 Commission Report put it bluntly: “The attacks were a shock, but they should not have come as a surprise.”

Al-Qaeda was behind attacks against the US in 1998 (truck bombings against the country’s embassies in Kenya and Tanzania) and 2000 (the bombing of the USS Cole, which almost sank the vessel and killed seventeen sailors), and it was clear that Bin Laden and co had committed to escalating their terror campaign.

Experts were extremely concerned and well aware of the threat, but Bin Laden was far from a household name for most people before the events of 11 September. When the Pew Research Center asked Americans in 1999 for their predictions about the next 50 years, 64 per cent predicted a terrorist attack on the US but, as 44 per cent bet on “the return of Jesus Christ” and 78 per cent said “the environment will improve”, their foresight shouldn’t be overestimated.

Generally, humans aren’t good at predictions. In 2013 – six years before the Covid pandemic – the mathematician David Orrell explained: “Even though the human genome is now mapped, we still can’t predict the spread of pandemics like avian flu or swine flu,” and argued “we have to acknowledge that some things aren’t predictable… we model people as if they [are] perfectly rational. We model the economy as if it obeys the ‘harmony of spheres’.”

Currently, many self-promoting “big brains” such as Elon Musk, OpenAI founder Sam Altman and the ubiquitous philosopher Yuval Noah Harari secure lots of airtime and column inches with apocalyptic warnings about the dangers of artificial general intelligence (AGI) – machines with “minds” mimicking and outpacing a human intellect – even as plenty of experts in the field question whether we will actually reach that point. The fear of being subjugated by a malign AI like The Terminator’s Skynet is easier to understand than the quiet and mostly hidden ways that far more basic AI is already infiltrating our daily lives in banking, healthcare and policing – and it is a fear that conveniently serves the companies and state agencies pushing “solutions”.

In October, the Guardian reported – after filing a raft of Freedom of Information requests – that at least eight Whitehall departments and several police forces are already using AI in a range of areas, including making decisions on benefits appeals, issuing marriage licences, adjudicating immigration cases and identifying criminals.

The rollout of AI in government and policing in what the Guardian describes as “a haphazard and often uncontrolled way” should worry you far more than the abstract idea of an evil AI with general intelligence and a targeted indifference to human life. In March, 30,000 tech figures – including Musk – signed an open letter calling for a pause in major AI experiments. In May, Geoffrey Hinton and Yoshua Bengio – two of the three so-called godfathers of AI – signed a statement warning that the risk of extinction from AI should be treated “as seriously as the threat from pandemics and nuclear war”.

Unlike AGI, nuclear weapons exist – in huge numbers – and we are still in the latter stages of a global pandemic. Yann LeCun, the third godfather of AI, says the idea that AI will wipe out humanity is “preposterous”. Many of those who make nightmarish predictions are heavily invested in AI and want to ensure that lots of government money flows their way so they can “save” the rest of us.

As tempting as it is to make big predictions about the future, we – humanity – would be better off focusing on making serious changes now. Like the politicians who convinced themselves that the Cold War could conclude without consequences, we risk letting our fretting over the far future blind us to the opportunities to make things better now and to avoid storing up the next lot of unexpected and unpleasant consequences.

Mic Wright is a journalist based in London. He writes about technology, culture and politics.
