
This sci-fi “B-movie” still shapes our view of the threat of AI

October 26, 2024 marks the 40th anniversary of director James Cameron's science fiction classic “The Terminator” – a film that popularized society's fear of machines that can't be reasoned with and that “absolutely will not stop… until you are dead,” as one character memorably puts it.

The plot revolves around a super-intelligent AI system called Skynet that has taken over the world by starting a nuclear war. Amid the resulting devastation, human survivors led by the charismatic John Connor mount a successful counterstrike.

In response, Skynet sends a cyborg assassin (played by Arnold Schwarzenegger) back to 1984 – before Connor was born – to kill his future mother, Sarah. John Connor's importance to the war effort is so great that Skynet gambles on erasing him from history to preserve its own existence.

Public interest in artificial intelligence has arguably never been greater. The companies developing AI typically promise that their technologies will perform tasks faster and more accurately than humans. They claim AI can detect patterns in data that would otherwise go unnoticed, improving human decision-making. There is a widespread perception that AI is poised to transform everything from warfare to the economy.

Immediate risks include bias creeping into algorithms used to screen job applications, and the threat of generative AI displacing people from certain types of work, such as software programming.

But it is the existential threat that often dominates public discussion – and the six Terminator films have exerted an outsized influence on how these arguments are framed. According to some, the films' portrayal of the threat posed by AI-controlled machines distracts from the significant benefits the technology offers.

Official trailer for “The Terminator” (1984)

The Terminator wasn't the first film to deal with the potential dangers of AI. There are parallels between Skynet and the HAL 9000 supercomputer in Stanley Kubrick's 1968 film 2001: A Space Odyssey.

It also draws on Mary Shelley's 1818 novel Frankenstein and Karel Čapek's 1921 play R.U.R. Both stories concern inventors losing control of their creations.

Upon release, a New York Times review described The Terminator as a “B-movie with flair.” In the years since, it has been recognized as one of the greatest science fiction films of all time. At the box office, the film grossed more than twelve times its modest budget of $6.4 million (£4.9 million at today's exchange rate).

Perhaps the most novel thing about The Terminator is the way it reinterprets longstanding fears of machine uprising through the cultural prism of 1980s America. Similar to the 1983 film WarGames, in which a teenager nearly starts World War III by hacking into a military supercomputer, Skynet highlights Cold War fears of nuclear annihilation coupled with fears of rapid technological change.

Read more: Science fiction helps us deal with scientific facts: a lesson from Terminator's killer robots

Forty years later, Elon Musk is among the technology leaders who have helped keep public attention focused on the supposed existential threat of AI to humanity.

But such comparisons often irritate the technology's proponents. As former UK technology minister Paul Scully put it at a London conference in 2023: “If you're only talking about the end of humanity because of some rogue, Terminator-style scenario, you're going to miss out on all of the good that AI [can do].”

That's not to say there aren't serious concerns about the military use of AI – ones that may even seem comparable to the film franchise.

AI-controlled weapon systems

To the relief of many, US officials have said that AI will never make decisions about the use of nuclear weapons. But combining AI with autonomous weapons systems is a possibility.

These weapons have been around for decades and don't necessarily require AI. Once activated, they can select and attack targets without the need to be directly operated by a human. In 2016, US Air Force General Paul Selva coined the term “Terminator conundrum” to describe the ethical and legal challenges posed by these weapons.

“The Terminator” director James Cameron: “The weaponization of AI is the biggest danger.”

Stuart Russell, a leading British computer scientist, has called for a ban on all lethal, fully autonomous weapons, including those with AI. The main risk, he argues, is not that a sentient Skynet-style system goes rogue, but that autonomous weapons might follow our instructions all too well, killing with superhuman accuracy.

Russell imagines a scenario in which tiny quadcopters equipped with AI and explosive charges could be mass-produced. These “slaughterbots” could then be used in swarms as “cheap, selective weapons of mass destruction.”

Some countries, including the United States, require human operators to “exercise appropriate levels of human judgment over the use of force” when using autonomous weapons systems. In some cases, operators can visually verify targets before authorizing strikes, and call off attacks if the situation changes.

AI is already being used to support military targeting. Some argue this is even a responsible use of the technology, since it could reduce collateral damage. This idea recalls Schwarzenegger's role reversal as the benevolent “machine guardian” in the sequel to the original film, Terminator 2: Judgment Day.

However, AI could also undermine the ability of human drone operators to challenge machine recommendations. Some researchers believe that humans tend to trust whatever computers say – a tendency known as automation bias.

Loitering munitions

Militaries involved in conflicts are increasingly using small, cheap aerial drones that can detect targets and crash into them. These “loitering munitions” (so called because they are designed to hover over a battlefield) have varying degrees of autonomy.

As I argued in a study co-authored with security researcher Ingvild Bode, the dynamics of the war in Ukraine and other recent conflicts in which these munitions have been widely used raise concerns about the quality of control exercised by human operators.

Ground-based military robots armed with weapons and designed for battlefield use could be reminiscent of the relentless Terminators, and armed aerial drones could in time resemble the series' flying “hunter-killers.” But these technologies don't hate us like Skynet, and they aren't “superintelligent” either.

However, it is critical that human operators continue to exercise agency and meaningful control over machine systems.

Arguably, The Terminator's greatest legacy has been to distort the way we collectively think and talk about AI. This matters now more than ever, as these technologies have become central to the strategic competition for global power and influence between the United States, China and Russia.

The entire international community, from superpowers such as China and the United States to smaller countries, needs to find the political will to cooperate – and to manage the ethical and legal challenges posed by military applications of AI in this time of geopolitical upheaval. How nations navigate these challenges will determine whether we can avoid the dystopian future so vividly imagined in The Terminator – even if we won't be seeing time-traveling cyborgs any time soon.
