Comment

Tech bros want to pull up the AI drawbridge

The elite are over-egging the risks of this nascent technology to edge out competition

In his session with Rishi Sunak, even some of Elon Musk’s more positive suggestions sounded like the dystopian contents of a Philip K Dick fever dream Credit: Tolga Akmen/EPA

You can listen to what tech masters of the universe have to say. But it also pays to keep a close eye on what they actually do. 

Earlier this year, Elon Musk was one of the signatories to an open letter calling for a pause on the development of artificial intelligence systems, arguing the nascent technology was advancing so quickly and unpredictably it could destroy countless jobs, flood the internet with disinformation and even end up – whoops! – eradicating humanity. 

Over the weekend, Musk’s artificial intelligence start-up xAI released its first AI model, a chatbot called Grok that is based on The Hitchhiker’s Guide to the Galaxy, designed to be sarcastic, to answer “spicy questions that are rejected by most other AI systems” and to glean real-time information about the world by mainlining feeds from the social media site formerly known as Twitter. 

When this rather obvious contradiction was pointed out, Musk tweeted (is that still the correct verb?): “I signed on to that letter knowing it would be futile. I just wanted to be on the record as recommending a pause.” 

Which is, as explanations go, pretty weak. There’s quite a difference between aligning yourself with a cause you know to be doomed and actively working to undermine it. It’s a bit like campaigning for nuclear disarmament while developing the technology to enrich uranium. 

Normally I wouldn’t waste your time or mine by pointing out that Musk can be a tad capricious. Except this is the guy the UK government decided to invite over as the star attraction at last week’s AI summit at Bletchley Park. Yes, his appearance generated headlines and therefore publicity. But they were of the glass-half-empty variety. 

During a softball Q&A session with Rishi Sunak, even some of Musk’s more positive suggestions – that Terminator-style killer robots will likely be equipped with off switches and that humans will develop meaningful friendships with AI – sounded like the dystopian contents of a Philip K Dick fever dream. And, unfortunately, his vacillating attitude towards AI caught the mood of the whole event. 

Here we all are, mired in economic gloom and struggling to boost productivity, when along comes a potentially transformative technology that could spark a new industrial revolution. And how does the UK attempt to establish its leadership in this new and exciting area? By organising an unremitting doom-fest that focuses on the possibility of AI stealing all our jobs before accidentally kick-starting nuclear armageddon. No, no optimism for us today, thank you very much.

There’s nothing new about this kind of technophobia. The science fiction writer Isaac Asimov dubbed it the “Frankenstein Complex”, arguing that Mary Shelley’s novel articulated deep-seated and irrational worries about the hubristic pursuit of dangerous knowledge. It’s why Shelley subtitled her book “The Modern Prometheus”. Her explicit message was: if you play with fire, you’re going to get burned. 

And, sure, there is that risk. But if you play with fire you can also ward off predators, stay warm and explore harsher climates. Eating cooked food enables you to develop a more efficient digestive tract, free up energy for brain growth and thereby expedite the evolution of the species. So, there are a few pros to weigh against accidentally singeing your eyebrows. The same pattern has been repeating for millennia. 

In more recent history, it has been workers with the most to fear from the advent of disruptive technologies that devalued various forms of labour and replaced certain jobs. But, while such fears can definitely be realised in the short term (steam-powered looms weren’t great news for textile workers), they tend to be overblown in the long term.

Time and again, new inventions have prompted fears that machines will outperform humans at a lower cost. But, in aggregate, it never works out that way because all the new-fangled whatchamacallits create more jobs than they destroy. 

Over the course of the whole twentieth century – a period of extraordinary innovation – the number of workers in the US increased from 30m to 134m, outpacing population growth. According to MIT’s David Autor, 60pc of all US workers have jobs that didn’t exist before 1940. In 2020, the World Economic Forum estimated that AI would eradicate 85 million jobs around the world by 2025 but create 97 million new ones. 

Now, Musk may be right that AI will eventually be able to do every current job along with all the future ones yet to be imagined. But for that to happen there will have to be a complete inversion of the historical relationship between technological progress and job creation.

The more interesting question, which Sunak should have asked Musk, is why the tech billionaire even cares. If new technology replaces workers it will vastly reduce costs. That may be bad news for providers of labour but it will be great news for providers of capital – like, say, tech billionaires. Musk’s warnings are the modern equivalent of a mill owner carrying a banner at the front of a Luddite march. 

Maybe Musk and the other signatories to the open letter who are developing AI systems while simultaneously begging the authorities to crack down on the development of AI systems genuinely care about the plight of workers. If so, it’s only in the abstract; remember, one of the first things Musk did after buying Twitter was to lay off around four-fifths of the workforce. 

So, perhaps there’s another answer. Andrew Ng, a professor at Stanford University who taught machine learning to the likes of OpenAI co-founder Sam Altman, thinks he knows what it might be.

In a recent interview with the Australian Financial Review he argues big tech companies are over-egging the risks of AI in order to deliberately trigger heavy-handed regulation. Why? Because it would protect the large companies and tech billionaires, who have the wherewithal to deal with a thick rulebook, from having to compete with plucky upstarts that don’t.

Of course it’s entirely sensible to ensure AI is developed with well-designed guardrails in place. But Professor Ng worries that the “bad idea that AI could make us go extinct” is merging with the bad idea that regulation could make it safer. 

“When you put those two bad ideas together, you get the massively, colossally dumb idea [of] policy proposals that try to require licensing of AI,” he says. “It would crush innovation.” 

Now, there’s a warning worth heeding.