Issues Magazine

The Singularity Is Coming

By Ben Goertzel

It may seem like science fiction, but some scientists and technologists predict that developments in their fields of expertise will soon overtake them.

Kurzweil’s Vision of the Singularity

Ray Kurzweil is an accomplished inventor and businessman. His list of achievements includes the invention of musical instruments, speech-recognition software, automated readers for the blind, and a whole lot more. He attributes much of his success as an inventor to his ability to forecast the future of technology and thus understand which inventions will best fit into the overall technological and social landscape – and when. But in recent years he’s been focusing much of his attention on a more dramatic sort of prediction: the concept of a “technological singularity”.

The technological singularity, first labelled as such by science fiction writer Vernor Vinge, is a hypothetical future point in time when the rate of progress in science and technology becomes so fast that the human mind can’t possibly keep pace. The result will be a world with little resemblance to the one we know today – a world where humans are fused with computers; where death due to ageing is abolished; where superhumanly intelligent robots wield great power; and where nanofactories manipulate molecules like Lego, dramatically reducing material scarcity and eliminating the practical need for human labour.

While Kurzweil tends toward optimism, the singularity is not necessarily a utopian vision. It holds a potential for great human fulfilment, but there are also darker options. Anyone who has viewed a few science fiction films can easily envision a singularity in which robots take over and humans are obliterated. A singularity might also reveal limitations in the capability of science and technology, ones that we are now too limited in intelligence to foresee.

As a relatively down-to-earth businessman, Kurzweil is well aware that the concept of a technological singularity doesn’t fit well into the average person’s world view. So in his book, The Singularity Is Near, he presents an elaborate and careful argument in favour of his thesis that some sort of technological singularity is going to occur around the middle of this century. He’s drawn a lot of exponential curves showing the rate of advance of various aspects of science and technology, reaching toward infinity in 2040–50. His Singularitarian vision is relentlessly positive, involving humans becoming integrated with various technologies – enhancing our brains and bodies, living as long as we want to, and fusing and networking with powerful artificially intelligent minds.

I count myself among the rapidly increasing group of scientists and technologists who generally agree with Kurzweil’s vision. In fact, I felt this way before I knew who Ray Kurzweil was, simply from my own study of various technologies, interpreted in the context of a world view shaped by reading a lot of science fiction.

I don’t think a singularity is guaranteed, or anything close. Lots of things could happen to prevent it. A world war could knock us back to the Stone Age, either using radical advanced bio- or nano-weapons or just good old fashioned nukes. There could be obstacles to scientific and technological progress that we can’t now foresee. Or human culture could somehow drift away from technological advance.

I think these things are possible, but I think they’re unlikely. The singularity, in my view, is the most likely outcome for humanity.

I don’t have a strong opinion on the accuracy of Kurzweil’s projection of a singularity around 2045, but it feels sensible to me. I’m more comfortable thinking about a range from, say, 2020 to 2080. If some form of technological singularity doesn’t occur in that interval, I’ll be surprised (assuming I’m around to be surprised – I’m 45 years old now, so my survival until 2080 is not that likely in the absence of the same kind of radical technology advancement that Ray Kurzweil and I believe will bring a singularity about).

The Singularity and You

One aspect of the singularity I like to stress is that it’s not something that’s just going to happen to us – it’s something that we’re going to make. From a big-picture perspective, huge advances like the creation of language, or tools, or mathematics, or computers can seem to just happen, with a force of historical inevitability. But they happen because of the choices made by individual people in the course of conducting their lives. We, who are alive today, are going to play a large role in creating the singularity, and in determining what kind of singularity it is.

Those of us who live in the developed world these days, at the start of the 21st century, have a wonderful amount of freedom to choose what kind of work we do and how to spend our time. It’s not the same level of freedom that will likely be afforded once the singularity has drastically minimised material scarcity, but it’s far more than most of our historical predecessors have. So I’d encourage you to do your own research on the singularity and related ideas – and if you find the Singularitarian thinking of Kurzweil, myself and others compelling, ask yourself what you might be able to do to help!

AI and the Singularity

What I am personally doing to help move humanity toward a positive singularity is focused on artificial intelligence (AI) – specifically what I’ve come to call “artificial general intelligence” (AGI), a term I’ll clarify in a moment. The reason I’ve chosen to spend my time working on AI is that, of all the technologies relevant to the singularity, I believe that artificial intelligence is going to play the most critical role.

After all, if scientific and technological advance is going to proceed so fast that humans can’t keep up, who’s going to be driving these advances? Not humans – at least not humans in anything like their present form.

In the Kurzweilian/Vingean prediction, as the singularity approaches, most advances will be made by AI software, or intelligent robots, or humans uploaded into digital substrates, or humans whose brains have been enhanced via neurally implanted computer chips or other similar technologies. Not all of these options match the everyday media portrayal of “artificial intelligence”, but they have one thing in common: in any of these possibilities, discovery and invention are being wholly or largely driven by engineered technology rather than by good old-fashioned human brains.

Exactly what forms the first highly powerful AI will take remains unclear at this stage. Will they be humanoid robots serving various real-world roles like house-cleaner or laboratory scientist? Will they be software programs inhabiting the Internet rather than the everyday human world? Will they be uploaded humans – ordinary human personalities scanned from human brains, re-implemented in digital form inside computers or robots, and then improved in various ways? Will they be cyborgs – part human brain, part “brain chip”?

My own guess is that AI and robotic software is going to progress faster than brain scanning or mind uploading – which is why, in my own research work, I’m focusing mainly on trying to create software programs that can think as well as humans.

AGI Versus Narrow AI

I like to clearly distinguish two kinds of AI – one that I call “narrow AI” and the other that I call “general AI”, “Artificial General Intelligence” or simply AGI.

In the early days of the AI research field, no such distinction existed – the original AI researchers in the 1950s were focused on creating software and robots with the same general thinking power as human beings. But in the 1970s, 80s and 90s, the AI field drifted away from this focus toward the easier problem of making software capable of solving highly specific problems in specialised domains. Most of the research done in this era was focused on designing programs to do very particular things like play chess, predict stock markets, control industrial robot arms and search databases. These sorts of specialised problem-solving applications are fascinating and valuable, yet very different from trying to build a human-level general thinking machine.

Even relatively unintelligent, uneducated humans possess a kind of general problem-solving ability that these narrowly brilliant programs lack: the ability to go into a new kind of situation, gradually orient ourselves and figure out what’s going on, and get incrementally better and better at achieving our goals in the next context.

As paradigm cases of narrow AI, consider IBM’s programs Deep Blue (which beat the human world champion at chess) and Watson (which beat the human world champions at Jeopardy). These programs are wonderful achievements, but generality is not the focus. Each of these programs does one thing and one thing only, and to make it do something else requires reprogramming by humans with a fair bit of general intelligence! These are not AGI programs.

What I mean by an AGI is a system focused on generalisation – on the ability to extend its intelligence beyond any one particular domain. Humans are not infinitely general, but they are far more general than any existing AI system, a situation that I and other AGI researchers are working to change.

For instance, if a person became very good at playing chess, they would still be able to play fairly well even if the rules of chess were changed slightly. No brain surgery would be necessary. But to get Deep Blue to play a slightly modified version of chess you’d need to change its programming at least a little – the computer equivalent of brain surgery. Deep Blue can’t automatically adapt to changing situations and figure out how to achieve its goals therein because it has narrow AI, not AGI.
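To make the contrast concrete, here is a toy sketch in Python – not Deep Blue’s actual code, just an illustrative assumption – of the difference between hard-coding a rule and parameterising it. The “narrow” function has the rook’s movement baked in, so a rule change means rewriting it; the more “general” function accepts the movement rules as data, so a modified chess variant needs no reprogramming at all:

```python
# Toy illustration of narrow vs (slightly more) general design.
# The piece names and rule variants below are made up for this example.

# Narrow: the rook's movement rule is hard-coded into the program.
def narrow_rook_moves(pos):
    r, c = pos
    return [(r, i) for i in range(8) if i != c] + \
           [(i, c) for i in range(8) if i != r]

# More general: movement rules are data handed in at run time,
# so a tweak to the rules requires no "brain surgery" on the code.
def general_moves(pos, directions, board_size=8):
    r, c = pos
    moves = []
    for dr, dc in directions:
        nr, nc = r + dr, c + dc
        while 0 <= nr < board_size and 0 <= nc < board_size:
            moves.append((nr, nc))
            nr, nc = nr + dr, nc + dc
    return moves

# Standard rook directions...
ROOK = [(1, 0), (-1, 0), (0, 1), (0, -1)]
# ...and a hypothetical variant where rooks also slide diagonally:
SUPER_ROOK = ROOK + [(1, 1), (1, -1), (-1, 1), (-1, -1)]

print(len(general_moves((3, 3), ROOK)))        # 14 moves on an empty 8x8 board
print(len(general_moves((3, 3), SUPER_ROOK)))  # 27 moves - variant handled with no rewrite
```

Of course, real general intelligence is vastly more than parameterised rules – the point of the sketch is only that narrow systems break at the boundaries their programmers drew, while a general system must discover and adapt to those boundaries itself.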

Now Is the Time for AGI

Why is the time now ripe for creating AGI? After all, the AI field has been around for 50 years and no one has created a human-level thinking machine yet. Why is 2012 so different from 2002 or 1972?

A number of different factors are currently converging, all of them contributing critical aspects to the AGI endeavour. None of them on their own will be enough to make a human-level AGI possible. But when you put them together, things start to look extremely promising.

The first factor is well-known: advances in computer hardware, with technological capability doubling every two years according to Moore’s law (see p.11). Right now, according to our best understanding of the human brain, there is no computer with as much computing power as the human brain. In a few decades, if the computing hardware industry continues on the same trajectory it’s been on for 50 years, there will be.

And it’s not even clear that we need computers as powerful as the human brain to create human-level AGI. Most AGI research is not aimed at emulating the human brain in detail, but rather at emulating the better aspects of human thought in other ways. In this case, there’s no reason that a human-level AGI requires a computer equal in processing power to the human brain. It may be feasible to create human-level intelligence by making AGI that is specially adapted to the computing hardware available, which is quite different in nature from the human brain.
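The arithmetic behind “a few decades” is simple enough to sketch. The figures below are assumptions for illustration only – estimates of the brain’s capacity vary by orders of magnitude and are hotly disputed – but they show how a steady doubling closes even a thousandfold gap quickly:

```python
import math

# Back-of-envelope extrapolation of Moore's-law-style doubling.
# All three figures are illustrative assumptions, not measurements.
BRAIN_OPS_PER_SEC = 1e16      # one commonly cited (disputed) brain estimate
CURRENT_OPS_PER_SEC = 1e13    # assumed affordable hardware today
DOUBLING_PERIOD_YEARS = 2.0   # the Moore's-law doubling period cited above

def years_until_parity(current, target, doubling_years):
    """Years until `current` capacity, doubling every `doubling_years`
    years, reaches `target` capacity."""
    doublings_needed = math.log2(target / current)
    return doublings_needed * doubling_years

# A 1000x gap takes log2(1000) ~ 10 doublings, i.e. about 20 years.
print(round(years_until_parity(CURRENT_OPS_PER_SEC,
                               BRAIN_OPS_PER_SEC,
                               DOUBLING_PERIOD_YEARS), 1))
```

The exact crossover year shifts with the assumed starting gap and doubling period, which is precisely why projections like Kurzweil’s carry a decade or two of uncertainty.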

The second factor is cognitive science, which has advanced tremendously in recent decades. By combining results and ideas from a host of different experiments and theories, cognitive scientists now have a reasonable understanding of the different aspects of the human mind, how they connect with each other, and how they conspire to create human-level intelligence. We had far less of a clue about these things three or four decades ago. Cognitive science came together as an integrated discipline in the 1980s and it’s grown up a lot since.

Neuroscience plays a role here – due to the advent of various wonderful brain imaging technologies (fMRI, PET, MEG, etc.) we understand the brain a lot better now than we did a few years ago. We’re nowhere near understanding “how the brain works” yet, but combining neuroscience information with knowledge from psychology, computer science, philosophy, linguistics, genetics and other areas, cognitive science is putting together a reasonable understanding of how the mind works.

As a third factor paving the way toward the creation of AGI, computer science has advanced tremendously in recent decades. Put simply, we have much better algorithms now. Today’s algorithms, even if run on the computers of decades ago, would solve many problems dozens or hundreds of times faster than the algorithms of that era.

Finally, as a fourth factor, revolutionary advances in robotics and virtual worlds make it a lot easier to give our AGI systems something to do. These days it’s feasible for a small AGI research team to embody their prototype AGI system in a commercial humanoid robot (like the Nao or Robokind) or a character in an online multiplayer virtual world. Even a decade ago it was vastly more difficult to give an AGI this kind of embodied, social, interactive experience, and thus more difficult to place an AGI in an environment where it can acquire human-like commonsense knowledge via its own experience.

None of these factors guarantees AGI by any means, but they do make it feasible for researchers to put together AGI designs that respect cognitive science and neuroscience, leverage the best of modern computer science and the best modern software and hardware tools, and power agents embodied in interesting environments. Looking at these general factors doesn’t really tell us whether AGI will happen two years from now or 20 years from now, but it does show us rather clearly how various science and technology threads are coming together in a very AGI-friendly fashion.

Much of my own research and development these days focuses on engineering an open-source AGI system called OpenCog, which is based on combining cognitive science principles and computer science algorithms. We’re currently using OpenCog to control an animated character in a video game world, and experimenting with using it to control humanoid robots. I have high hopes for the OpenCog project but, from the general perspective of the singularity, it doesn’t really matter whether it’s OpenCog or some other AGI project that gets to the finish line first. What’s most important is that we’re living at a time when the enabling factors are all in place for teams of serious scientists and engineers to work on creating human-level artificial general intelligence.

From Sputnik to the Singularity

One notion I find interesting, in thinking about the future of AGI, is the possibility of an AGI Sputnik.

Sputnik, the first artificial satellite put into space (launched by the Soviet Union in 1957), was a cool achievement on its own, and also a sort of wake-up call to the rest of the world. People saw Sputnik and they began to ponder the implications of launching objects into space. The US, spurred by the Cold War and an intense struggle for global supremacy with the Soviet Union, developed its own space program in response.

Similarly it may be that, once a certain level of development is reached by any AGI research team, the rest of the world is going to wake up, take notice and effectively exclaim: “Wow, AGI is possible! It may not have done anything amazing yet, it may not have done anything practical yet, but then again Sputnik in itself wasn’t really of much use either. It’s clear as day now that AGI is going to be incredibly amazing sometime in the foreseeable future.”

My prediction is that, once a reasonable percentage of people begin to think that way about AGI, it’s going to become huge in the way that military development or medical research is now. Assuming this is right, it will be one critical step on the path to a singularity as foreseen by Vinge, Kurzweil and others.

My OpenCog colleagues and I are working toward creating a Sputnik-level AGI system, probably taking the form of an OpenCog system controlling a humanoid robot with the mentality of a young human child. But if we don’t succeed, someone else will – and then, after that initial breakthrough, just as with past technologies like space flight, personal computers, the Internet and mobile phones, progress will very rapidly accelerate.
Where this acceleration will lead, we cannot possibly know for sure, any more than an ape or a caveman could have predicted the development of submarines, differential calculus, anime or the Internet. The spectrum of potential social outcomes of advanced AGI development is large, including possibilities both wonderful and terrible.

However, it does seem likely we can exert some guidance. One of the responsibilities we all have, as individuals living in the era when the singularity is plausibly near, is to do our best to nudge the development of advanced technologies in positive directions. In this vein, I have been doing a substantial amount of work on narrow AI systems aimed at biomedical research, and my hope is that this will be a major focus area for early-stage AGI systems.

Imagine AGIs serving as surgeons and inventing radical new life-extension drugs at the same time as they improve their own cognitive software, build themselves new robot bodies and help human scientists develop chips to interface AGIs with human brains. This sort of vision may seem wildly science fictional, but so would the web or the smartphone have seemed merely decades ago.

We live at an extraordinary time in history, and have the chance to participate in creating an amazing new world.