The AI Singularity - Harbinger of Destruction or Savior of Humanity?

by Terry Riopka

 

Below is a summary of a talk I gave to the Concord Humanist Group in Concord, MA, on Wednesday, January 9, 2019. Carl Feynman joined me in a panel discussion. Click on the image below or here to download the slides for my presentation.

 


Artificial Intelligence and the Technological Singularity by Terry Riopka

 

The doomsayers are warning of the arrival of the Technological Singularity – a super artificial intelligence far surpassing humans and hell-bent on our destruction. It makes for great headlines and riveting science fiction, but is it a realistic assessment of AI's trajectory? I don't think so, and in a recent talk in Boston I laid out a defense of that view from a more philosophical and psychological perspective on the Singularity debate.

I suggest that the standard trope of an artificial general intelligence (AGI) spontaneously appearing in contemporary society is a fantasy. Given the tools we have today, and those we will have in the near future, the progression of AI will certainly accelerate – but it will accelerate in terms of applications, not necessarily the fundamental paradigm shifts we need to create an AGI. AI will transform society in fantastic ways, but it will do so incrementally, and in conjunction with transformations in humanity itself. The AGI will come, but it will take time. Few want to commit to a timeline for fear of ridicule, but I do not see it happening for at least 100 to 200 years. There are simply too many interesting applications for us to create first before there is real utility in creating an AGI.

The evolution of the mobile phone is a good model for simulated human development: simple algorithms operating locally, driven by more complex algorithms in the cloud. And that is precisely why the AGI, if and when it appears, will first appear in some very large system. The question then becomes: what will it want to do first? Will it seek to embody itself in a tiny physical body – lonely, untethered from the world – to crawl along the ground like its creators? I don't see that happening. The doomsayers are inclined to believe that it will want to "escape" the confinement of its "box". Really? The first AGI will already have access to the world in diverse ways across our planet – why limit itself? It will likely have access to the experiences of robotic caregivers throughout society and to sensors all over the planet.

Given a history of interaction with humanity, surely the general motivation of humanity to improve itself and to better the world for its inhabitants will be embedded in the psyche of this artificial creature. Why would it be completely alien to us? Its reasoning capabilities will be modeled on those of human beings – in fact, our success in AI reasoning is continually assessed by how closely it resembles human reasoning. It will, by design, try to reason the way humans do – simply faster, more efficiently, and hopefully more effectively. It will want to help us, to extend our understanding of the Universe, because it will have evolved within the context of our compassion and curiosity. Yes, I know – it will have evolved in the context of our competitiveness and our penchant for war and destruction too – but it will surely be intelligent enough to realize that our primary motivation, even in war, is the benefit of people, asymmetrical as that motivation might be. The standard argument of a supremely powerful AI transforming our planet into a paper-clip factory is completely inconsistent with the definition of what an AGI is and what its reasoning capabilities are expected to be.

Once an AGI is created, the assumption is that it will immediately seek to recursively self-improve at an exponential rate, leading to an explosion of intelligence and culminating in an artificial super intelligence (ASI) – the event known as the Technological Singularity. There are several reasons why that may not be inevitable, or even possible. For one thing, modeling the world requires data acquisition and empirical experimentation, to test hypotheses and to evaluate their ability to accurately predict behavior. Empirical investigation takes time – as a trivial example, take cell growth in biology, something that must proceed in real time. By extension, there may be phenomena in the world that can only be understood by literally waiting for data to accumulate over a sufficiently long period. There also appear to be limits to computability that even AGIs may be subject to: even quantum computation, the current holy grail, is not believed to be capable of solving NP-complete problems in polynomial time.
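The empirical-bottleneck argument can be sketched as a toy model (everything here – the rates, the waiting times, the function itself – is an invented illustration, not a prediction): if each self-improvement cycle must wait on real-world experiments, the number of completed cycles is capped by wall-clock time, no matter how fast the underlying computation runs.

```python
# Toy model (illustrative only): recursive self-improvement gated by
# real-time empirical experiments. All quantities are invented.

def self_improvement(years, gain_per_step, experiment_years_per_step):
    """Capability reached in `years`, when every improvement step must
    first wait `experiment_years_per_step` of wall-clock time for data."""
    capability = 1.0
    elapsed = 0.0
    while elapsed + experiment_years_per_step <= years:
        elapsed += experiment_years_per_step   # real-time data bottleneck
        capability *= gain_per_step            # gain per completed step
    return capability

# If each doubling needs a 5-year experiment, a century allows only
# 20 doublings – a capped climb, not an unbounded explosion.
print(self_improvement(100, 2.0, 5.0))   # 2**20 = 1048576.0
```

The point of the sketch is simply that shrinking the experiment time, not the compute time, is what governs the growth rate – which is why phenomena that unfold in real time would throttle any "intelligence explosion".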

The need for self-improvement will also require a reason: a weakness the AGI determines it must address. Its inadequacy will more likely be with respect to information and knowledge – an inadequacy we have in common, the ramification of which is our insatiable curiosity. I see that only as a benefit to humanity, for knowledge and curiosity will lead it to question the very same fundamental premises we hold about the world and Reality. It will realize, as we have, that our modeling abilities are limited to the behavior of the perceptual world, and are completely incapable of accessing the substance of this Reality. It will inevitably search for purpose itself, not unlike human beings searching for meaning in life. Why should that search lead it to destroy humanity? Being immortal, with an entire Universe to explore, why would it concern itself with Earth and its inhabitants at all, let alone seek their destruction? Life may be meaningless after all – unless, of course, one believes in something beyond the world of appearances. And yet human beings still live, create, and explore; they strive to survive in the midst of the most horrific adversity and still find time to marvel in awe at a seemingly indifferent Universe. In the end, an ASI may need to seek guidance from us, to try to make sense of a Reality that seems to make no sense at all.

 

 


 

 
