Should we fear The Terminator?

The Rise of Artificial Intelligence

Anyone with even the remotest interest in science and technology can hardly have failed to notice that Artificial Intelligence (AI) has become, well, quite a big thing over the past decade. A stream of apparently remarkable and rapid advances in AI has made headlines across the globe, and what used to be a relatively quiet backwater is now the most feted and rapidly growing area in the whole of science and technology. This is all the more remarkable when one considers that AI has, at various times, suffered from a rather poor reputation in the wider scientific community, somewhat akin to homeopathic medicine.

The apparently rapid progress has led some commentators to believe that perhaps we are now about to achieve the grandest of all AI dreams: machines that are as fully capable intellectually as people. And this prospect has, naturally enough, alarmed many people. The world’s most famous scientist, the late great Stephen Hawking (Univ, 1959), was so alarmed that he publicly stated that AI represented an existential risk to humanity. AI, he feared, might lead to the end of the human race. Elon Musk, the brilliant but occasionally eccentric billionaire co-founder of PayPal and Tesla Motors, was similarly concerned, and donated millions to research aimed at understanding and mitigating the risks from AI. So, what is the reality here? What are the breakthroughs, and what do they mean? Is AI really an existential threat?

What is AI?

One of the perennial frustrations about working in AI is that, despite the fact that AI has been the subject of continual research for seven decades, nobody really agrees on what it is. The early AI researchers were very ambitious in their views of where the field could and should go. They wanted nothing less than machines which enjoy the full range of intellectual capabilities that human beings enjoy. If they succeeded with this goal, then machines would be able to do anything a human being could do: conversing, telling a joke, inventing a joke, reading and understanding a novel, critiquing a movie, cooking an omelette, riding a bike, creating a work of art, and so on. They would be able to do everything that we can do. In the early days of AI, progress seemed rapid, and this led to entirely serious predictions that this ambitious goal – the ultimate dream of AI – would be achieved by the year 2000.

It didn’t work out that way, of course. The rapid progress quickly petered out, and the grand predictions of the early researchers are used by doubters to ridicule progress in AI to the present day. There has in fact been rather little progress towards the ultimate dream, which is nowadays referred to as general AI. Indeed, for much of the last 50 years, general AI has been rather on the sidelines of mainstream AI.

Instead, over time, attention in the AI community shifted to a less ambitious goal. Roughly, the aim of AI began to be understood as getting computers to do more and more of the tasks that are currently possible only with animal brains and bodies. In particular, the aim of AI has been to focus on tasks that seem hard to get computers to do. Here are some examples of tasks that have attracted a lot of attention from AI researchers over the past decades:

  • Driving a car
  • Recognising faces in photographs
  • Writing simple captions for pictures (e.g., “a man with a fishing rod in a rowing boat”)

You might notice that these are not tasks that you would normally associate with intelligence, and indeed this often causes some confusion for those not familiar with AI. The reality is that it is very easy to get computers to do some things that people find tremendously hard (quickly and accurately processing large numbers of mathematical equations) but very hard to get computers to do some things that people find easy.

Amongst the biggest problems for AI historically have been tasks that involve perception – understanding what is around us in the world. This is, in fact, the biggest problem for building computers that can drive cars. The decision making in driving (whether to speed up, slow down, signal, turn, and so on) is relatively easy if you know what is around you. The difficulty is knowing what is around you, and this is why the prototype driverless cars you might see on the streets of our towns and cities are equipped with a barrage of sensors – laser range finders, radars, and so on. They are there to give the computers on the car as much information as possible about the environment around the car. And the main task of the AI software in the car is to make sense of all the raw data that these sensors are providing – to be able to know that a certain signal from a sensor actually means that there is a pedestrian on the road ahead.
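To illustrate why the decision making is the comparatively easy part, here is a rough sketch in Python. Everything in it is invented for illustration – the hypothetical interpret_sensors function simply fakes the perception step with a threshold on one made-up reading – but it shows how little logic is left once the software knows what is around it.

```python
# A toy illustration (not real driving software): once perception has
# turned raw sensor data into a label like "pedestrian ahead", the
# decision logic itself can be very simple.

def interpret_sensors(raw_readings):
    """Stand-in for the hard part: the AI perception software.
    Here we just fake it with a threshold on one hypothetical reading."""
    return "pedestrian ahead" if raw_readings["laser_range_m"] < 10 else "road clear"

def decide(situation):
    """The comparatively easy part: choosing what the car should do."""
    if situation == "pedestrian ahead":
        return "brake"
    return "continue at current speed"

if __name__ == "__main__":
    for readings in [{"laser_range_m": 4.2}, {"laser_range_m": 55.0}]:
        print(readings, "->", decide(interpret_sensors(readings)))
```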

Neural Networks

For much of its history, AI struggled with tasks, like driving a car, which involve perception. This ruled out a huge range of things that might be useful for computers to do. Your robot butler will not be possible unless the robot can make sense of its environment: the fanciest camera in the world will not help unless you have AI software that can interpret the signal generated by the camera. And it is tasks involving perception where AI has seen the breakthroughs in the past decade, which have led to the current excitement.

In a sense, the current breakthroughs are surprising because they involve a very old idea – indeed, an idea that 20 years ago was quite widely dismissed as a dead end for AI. The breakthroughs involve a technique called neural networks. These are very simple computational structures, which take their inspiration from nerve cells called neurons that appear in animal brains and nervous systems. Your brain has unimaginably large numbers of neurons, and they are organised into extremely complex structures, through which individual neurons can communicate with other neurons according to certain patterns. Each individual neuron can only do a very simple computational task, but when they are organised into very large networks, they can do much more complex tasks – such as recognising faces in pictures.
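To make this concrete, here is a minimal sketch of a single artificial neuron in Python. The inputs and weights are arbitrary numbers chosen purely for illustration: the neuron multiplies each input by a weight, adds the results together, and squashes the total into a value between 0 and 1.

```python
import math

# A single artificial neuron: a weighted sum of its inputs passed
# through a simple "squashing" function. The numbers are arbitrary,
# chosen just to illustrate the computation.

def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))   # sigmoid: output between 0 and 1

# Three input signals (e.g. pixel brightnesses), three weights, one bias.
print(neuron([0.2, 0.9, 0.4], [1.5, -2.0, 0.7], 0.1))
```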

Neural networks were proposed as long ago as the 1940s – the time of the very earliest computers. And it was known back then that, if you could design them appropriately, then they would be able to solve many of the problems facing AI. But there was a problem: we didn’t know how to design them. Specifically, the problem was that we didn’t know how to “wire” the neurons together so that they had the right pattern of connections. This task is called “training” a neural network, because the way it is usually done is by showing the network lots of examples of the things you want it to do: in the case of recognising faces in pictures, if you want a network to recognise Mike, then you show it lots of pictures of Mike. But until very recently, we simply didn’t know how to train networks that were big enough to do anything useful.
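Here is a minimal sketch of the training idea in Python. It is nothing like a real face-recognition system – the “pictures” are just invented pairs of numbers, and the network is a single neuron adjusted with the classic perceptron rule – but it shows the basic loop: present labelled examples, and nudge the connection weights whenever the network gets one wrong.

```python
import random

random.seed(0)

# Hypothetical training data: (features, label). A label of 1 might mean
# "this is Mike", 0 "this is not Mike". Real systems use millions of
# pixels; here each "picture" is just two made-up numbers.
examples = [([0.9, 0.8], 1), ([0.8, 0.95], 1), ([0.1, 0.2], 0), ([0.2, 0.05], 0)]

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0
learning_rate = 0.1

def predict(features):
    total = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if total > 0 else 0

# "Training": show the network the examples over and over, adjusting the
# wiring (the weights) a little each time it makes a mistake.
for _ in range(50):
    for features, label in examples:
        error = label - predict(features)
        weights = [w + learning_rate * error * x for w, x in zip(weights, features)]
        bias += learning_rate * error

print("learned weights:", weights, "bias:", bias)
print("prediction for a new input [0.85, 0.9]:", predict([0.85, 0.9]))
```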

The scientific breakthroughs that made possible the current advances in AI came in the mid-2000s, with a technique called deep learning. The main developer of this technique is a British-Canadian researcher called Geoff Hinton, who in 2019 was a co-recipient of the Alan Turing Award for his work on deep learning: this is the “Nobel Prize of computing”, the greatest honour that can be accorded to a computer scientist. But Hinton’s deep learning techniques were, in truth, only part of the story. Two other ingredients were also crucial to making current AI techniques work. First, training neural networks requires lots and lots of computer power – and this is now very cheap and widely available. Second, as I have already mentioned, when you train a neural network, you need lots of data to train it. And of course, we are now in the era of “big data”. Every time you upload a picture of yourself and your family to Facebook, you are providing data that is used to train Facebook’s neural networks.

Where is it going?

Deep learning is a very powerful technique. But just how powerful is it? Will it take us to the grand dream of AI – machines that have the full range of intellectual abilities that people have? Some people believe it will, and they are very concerned about this. What happens, they ask, when machines are as smart as people? Could they then apply their intelligence to making themselves even smarter? And if machines become vastly more intelligent than people, how can we control them? If their interests are not aligned with our own (and why should they be?), then might they even represent an existential threat to humanity?

This is not an original idea, of course: it is a staple of science fiction, from movies and TV shows such as The Terminator and Battlestar Galactica. In The Terminator, a powerful computer system called Skynet is given control of a nuclear weapons arsenal. But the computer becomes self-aware, and decides to eliminate humanity.

Scenarios like this, based on the idea that there is a point at which machine intelligence eclipses that of humans, are called the singularity in AI. I have to say, while many commentators are concerned about the singularity, opinion is very sharply divided about whether it might occur. While some AI researchers believe it is possible, at least as many (and probably more) think it isn’t going to happen – and it certainly isn’t imminent.

The reason for this is simply that current AI techniques are all focussed on getting computers to do very narrow tasks, such as driving a car or playing a particular computer game. They are not directed at the general AI problem, and I believe we have in fact made very little progress towards general AI, for all the successes we have had over the past decade. There is no simple route from where we are now to that grandest of AI dreams.

As to the Skynet idea, that we will wake up one morning to find super-intelligent machines, it isn’t remotely plausible. To use an analogy from my colleague Rodney Brooks at MIT, think of a generally intelligent AI system as being like a Boeing 747. When we developed the Boeing 747, it didn’t take us by surprise; it wasn’t built by someone in a shed at the end of their garden; and it didn’t magically fall into place overnight. It was the result of a long, expensive, and careful engineering process. General AI, if it is possible at all, will take a lot longer – and right now, we still don’t know where to start.

Professor Michael Wooldridge is a Senior Research Fellow at Hertford College and was Head of Oxford’s Department of Computer Science from 2014 to 2021. His popular science introduction to AI, The Road to Conscious Machines, was published by Pelican in March 2020.