Predicting the Singularity, the Advent of Superintelligence

From thinking about robotics, automation, and artificial intelligence (AI) this week, I’m evolving a picture of the future over the next few years. I think you have to define a super-technological core, so to speak, and understand how the systems of production, communication, and control mesh and interpenetrate across the globe, and how this sets multiple dynamics in motion.

But then there is the “singularity,” whose main popularizer is Ray Kurzweil, current Director of Engineering at Google. Here’s a particularly clear exposition of his view.

There’s a sort of rebuttal by Paul Root Wolpe.

Part of the controversy, as in many arguments, is a problem of definition. Kurzweil emphasizes a “singularity” of machine superintelligence. For him, the singularity is first of all the point at which the processes of the human brain will be well understood and thinking machines will be available that surpass human capabilities in every respect. Wolpe, on the other hand, emphasizes the “event horizon” connotation of the singularity: the point beyond which our technological powers will have become so immense that it is impossible to see what comes next.

And Wolpe’s point about the human brain is probably well taken. Think, for instance, of how decoding the human genome was supposed to unlock the secrets of genetic engineering, only to reveal still more complex systems of proteins and so forth lying beneath it.

And the brain may be much more complicated than the current mechanical models suggest, a view espoused by Roger Penrose, the English mathematical genius. Penrose advocates a quantum theory of consciousness. His point, made first in his book The Emperor’s New Mind, is that machines will never overtake human consciousness because human consciousness is, at the limit, non-algorithmic. Basically, Penrose has been developing the idea that the brain is, in some respect, a quantum computer.

I think there is no question, however, that superintelligence in the sense of fast computation, rapid assimilation of vast amounts of data, and implementation of structures resembling emotion and judgment, combined with the already highly developed physical capabilities of machines, means that we are going to meet some mojo smart machines in the next ten to twenty years, tops.

The dystopian consequences are enormous. Bill Joy, co-founder of Sun Microsystems, famously wrote about why the future does not need us. I think Joy’s singularity is a sort of devilish mirror image of Kurzweil’s: for Joy, the singularity could be a time when nanotechnology, biotechnology, and robotics link together to make human life more or less impossible, or significantly at risk.

There is much more to say and think on this topic, to which I hope to return from time to time.

Meanwhile, I am reminded of Voltaire’s Candide, who, at the end of pursuing the theories of Dr. Pangloss, concludes that “we must cultivate our garden.”