Technology is ever evolving, and in today's world of high-tech wizardry it can be hard to keep up. The laptop you bought last year is probably already out of date, and twenty years after the inception of the World Wide Web it is hard to imagine a world without it.
Science fiction author and scientist Arthur C. Clarke famously said that any sufficiently advanced technology is indistinguishable from magic. We may be close to entering such an age, in which all our dreams are realised. But is this a false promise, a pipe dream? Are we building machines which will one day be the conscious architects of our destruction? Or is the whole thing a foolish fantasy fuelled by “magical” devices, and a dangerous fantasy at that? And what does it all mean?
Some think that this age of invention and technical achievement will not bode well for humanity: the technological progress we have seen in the last century may eventually overtake us. Machines will be able to create better machines, which will in turn create better machines still, until they surpass their makers. We will enter a brave new world in which our own technology is master.
The theory goes that at the very least we will then face an existential crisis. In the worst case scenario (if Hollywood is to be believed), our entire species will be marked for termination (The Terminator series), or we will be used as batteries while being deceived into thinking that we are living in the 'real world' (The Matrix series). This point is known as the singularity.
The idea was first proposed by science fiction author Vernor Vinge in the 1980s. In 1993 he proclaimed that “within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”
Vinge's prediction is based on Moore's Law, which describes the advance of computing technology in the twentieth century: Gordon Moore observed that the number of transistors on a chip doubles roughly every two years, so processing power grows exponentially rather than linearly, and at a startling rate. Moore's Law is predicted to end in the 2020s, when computing technology will hit a brick wall.
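The exponential character of Moore's Law can be sketched in a few lines. This is a minimal illustration of a strict doubling model, not real chip data; the 1971 baseline of roughly 2,300 transistors (the Intel 4004) and the two-year doubling period are the commonly cited approximations.

```python
def transistors(year, base_year=1971, base_count=2300, doubling_period=2):
    """Projected transistor count under a strict two-year doubling model.

    Assumed baseline: ~2,300 transistors in 1971 (a common approximation
    for the Intel 4004), doubling every two years thereafter.
    """
    doublings = (year - base_year) / doubling_period
    return base_count * 2 ** doublings

# Exponential, not linear: each period multiplies the count rather than
# adding to it, so twenty years means ten doublings, i.e. a factor of 1024.
for year in (1971, 1991, 2011):
    print(year, round(transistors(year)))
```

Run over forty years, the model multiplies the baseline by over a million, which is why linear intuitions about progress break down so badly here.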
Vinge's analysis rests on the claim that machines are on the way to becoming more intelligent than we are, and that the brick wall in technological innovation predicted by Moore's Law actually represents the point at which machine intelligence will break away from its master. At that point, all bets are off.
Though there have been many advances in artificial intelligence (AI), Noel Sharkey, Professor of Artificial Intelligence and Robotics at the University of Sheffield, is skeptical:
“I would have to say that I am very doubtful of them ever having any type of consciousness like ours. AI is taking quite a different route into intelligence than the biological.
“At present I see no reason to believe that AI will approach or overtake our own intelligence in the next couple of hundred years or so,” he says.
Not only that, but Sharkey thinks that the belief that humanity will be overtaken by its machines may prove to be a very dangerous one.
“The danger is that when people in powerful positions outside of AI (like politicians or the military) form inaccurate beliefs about the current state of the subject they can make bad decisions. An example is having autonomous military robots that make decisions about who to kill when the fact is that we do not have any automatic way to tell the difference between a civilian, a combatant or a wounded soldier and never mind reasoning about the decision. Thus innocent people could lose their lives because of false beliefs.”
A worst case scenario, Sharkey thinks, is that the powerful take advantage of this misplaced faith in technology. It could even be the model for a futuristic dictatorship.
The singularity may be a fantasy, but a dangerous one.
Some see it differently. For futurist Ray Kurzweil the singularity is a reality and represents a great opportunity for the human race. But he believes that artificial intelligence that supersedes our own is not a necessary outcome.
He doesn't see the singularity as a machine takeover. He sees it as a time when we will merge with our machines, living longer, and experiencing more, perhaps even achieving immortality.
This is the nice side of the singularity – utopia compared to Vinge's machine-controlled dystopia.
Theorist Kevin Kelly also believes that the singularity will occur, and he shares Kurzweil's optimistic view, free from killer robots. Kelly also holds that the universe, and everything in it (including us), works in a similar way to a computer.
“I think that human consciousness is a type of computation. I affirm that nature itself is a type of computation,” he says.
However, here too lies a problem, one perhaps more pertinent than the singularity or AI itself: the assumption that the human mind is computational in nature, and that the whole universe is a giant computer. Sharkey sees the difficulty.
“It would probably help if we had even the vaguest idea what consciousness is.
“Some argue that consciousness and sentience are not all that important for computing machines. But at the minute sentient beings are the only things on the planet exhibiting intelligence, and so I don't see how we can argue that we don't need sentience for intelligence before we produce a machine that exhibits genuine intelligence.”
“It is easy to say that some machine will do this or that in the future and then use that as if it were a fact and build arguments on it. But arguments based on false premises are meaningless - anything can follow,” Sharkey holds.
“When you work in AI and robotics, you come to appreciate how incredible living things are. My main research for many years was on developing learning machines that modelled parts of human thinking (cognition) and animal behaviour. This threw light on some aspects of living creatures but it also made me marvel at the beauty and creativeness of life and how far away we were from understanding it.”
A belief in the singularity presents technology as the great emancipator or, conversely, the great destroyer. The risk lies in treating human consciousness as computational. In inferring that AI can reach a similar level or type of consciousness to our own, the implication is that we too are based on the same principles as AI: that we are somehow computational in nature.
In 1998 Kevin Kelly wrote an article in Whole Earth entitled The Computational Metaphor, arguing that computational language is exactly that: a metaphor.
“Is this embrace a trick of language? Yes, but that is the unseen revolution. We are compiling a vocabulary and a syntax that is able to describe in a single language all kinds of phenomena that have escaped a common language until now. It is a new universal metaphor,” Kelly writes.
“It has more juice in it than previous metaphors: Freud's dream state, Darwin's variety, Marx's progress, or the Age of Aquarius. And it has more power than anything else in science at the moment. In fact the computational metaphor may eclipse mathematics as a form of universal notation.”
Kelly recognises that the computational metaphor is a powerful one, not confined to humanity but extending right across the cosmos. This is staggering, echoing Darwin's theory of evolution in its scope but cranking things up a notch: where Darwin focused mostly on life, the computational metaphor takes in the whole universe.
Kelly wrote that “the computational metaphor was already halfway to winning.” Yet there is an air of tension surrounding it.
The idea that we can create intelligent machines which will take over may be laughable whether we live in a computational universe or not. But the singularity may still have a certain kind of reality.
It may ultimately represent an expression of a disparity between the individual and the universal metaphor of computation. Are we really computers, just like the rest of a computational universe?
Even more so than Darwin's metaphor of evolution, it forces us to question our place at the top. Before Darwin we were made in God's image; after Darwin we were the best that life could produce. But with the computational metaphor, where do we stand?
Though the singularity may prove to be a damp squib, it at least highlights this discrepancy. In a world where the singularity is real, the computers don't need us. That is to say that the computational universe doesn't need us. We are no longer the pinnacle of creation.
The singularity may be a form of existential angst against the computational metaphor.
We may indeed end up hurtling towards a singularity of sorts. For every reading of Marx there's someone to remind you about Communist Russia. For all the wonders that Darwin showed us there's always someone to remind us of the Holocaust. And for all of Freud's insight there's always that self-important twat you know.
Noel Sharkey's warnings about the limits of AI should not go unheeded. A misplaced belief in the thinking power of these machines may be the cause of many problems in the future.
But, more interestingly, what are the implications of a computational metaphor? Where will it lead us?
In the end, it may be best that people remember that a metaphor is a way of describing things, and that there can be many ways of describing things.