Volume 2, Issue 1 
1st Quarter, 2007


The Role of AGI in Cybernetic Immortality

Ben Goertzel, Ph.D.

Page 6 of 7

Brain Modeling versus Computer Science Approaches to AGI

Next, I'm going to take two seconds to take a potshot at some of Ray Kurzweil's predictions regarding AI -- and then I'll spend a couple of minutes describing my own AI work at a very high level.

I think, in general, it's a lot easier to predict what will happen than to predict when it will happen.  This is very true, for example, in software project estimation: Microsoft can't even tell how long it will take to make the next version of Windows.  And it's also true of longer-term futurological prognostication.

There is the big question, for those who are interested in general intelligence, of what the route will be.  Is it first going to be achieved through emulating the human brain -- not necessarily imitating it in detail, but learning how the brain works and emulating those processes?  Or is it going to be achieved through a more computer-science approach, where you take what's known about cognitive science as an inspiration and then use computer science algorithms and architectures to realize intelligence, rather than emulating what the brain does in detail?

My own view is that we just don't know enough about the brain.  The most interesting and important parts of brain function -- higher level cognition -- are not understood at all.  My guess is it's going to be a while before we know enough about the brain to use neuroscience to guide the creation of a general intelligence.  My own prognostication, for what it's worth, is that this route would take until around 2040.  I myself will be a disturbingly old man (and maybe even a disturbing old man!) before we get an AGI by emulating the human brain – unless some other route gets us to AGI first and helps us scan and map the brain and make better hardware. 

My own feeling is that if a concerted effort is made in funding this area -- and that's really not the case right now -- then artificial general intelligence via computer science methods can be achieved long before that.  I'll take a number -- not quite out of a hat, though it will seem that way in the context of this talk, since I don't have time to go into my reasoning -- and propose 2020 as the date.  Twenty-twenty has a nice ring to it, doesn't it?  I'm taking that as a date by which powerful AGI could quite possibly be achieved through computer science methods, if we don't have to wait for the neuroscientists.  And it could happen much sooner than that, I think.  It could happen five years from now, with my own Novamente project, if we got enough funding and everything went right.

A computer science based approach is a higher-risk approach, in a sense.  If you carry out predictive reasoning in a plodding, methodical, conservative way, it's really obvious that if you map out what the brain does and emulate it in a machine, you've got to be able to make an AI that way.  I mean, there's the objection that maybe the brain uses macroscopic quantum effects in some weird way.  But even then, you just need to build a quantum computer instead of a classical computer.

The idea of making AGI by computer science is more risky.  Maybe we're not smart enough, maybe the designs we think of will fail.  On the other hand, although it's more risky, it also has more potential to proceed really fast because you don't have the huge overhead of waiting for the neuroscientists to map the brain.  This is the approach that interests me most. 

The Novamente Approach to AGI

My own approach to general intelligence in the last few years has centered around a software system called Novamente.[1]  Novamente means "new mind."  It is also a Portuguese word for "again."  And I chose the Portuguese term since a number of my software collaborators are actually based in Brazil rather than the U.S.

Novamente is a C++ software system which has been designed in a lot of detail.  It's a big system, and we're slowly plodding through the process of implementing and testing the various components -- we're maybe 40 percent of the way through implementing the thing.  This has been a kind of spare-time background project since 2001.  Just this year, we finally have three people dedicated full-time to the project.  We're starting to see a decent pace, although nowhere near the pace we would like to see.


Image 8

Now, some components of the system have been commercially deployed in software consulting projects that we've done in the areas of biology and natural language processing.  But the process of using components of the system in these narrow-AI consulting projects has been instructive regarding the big difference between AI and general AI.  The bits and pieces of software we've used to help NIH (the National Institutes of Health), INSCOM (the US Army Intelligence and Security Command), and other customers just don't get to the essence.  What their AI projects need is fairly simple stuff for pattern mining or language analysis -- none of these customers so far has wanted to fund the long and difficult process of constructing a system that can reflect on itself and understand itself.

What we are doing to move toward general intelligence right now is to embody our AI systems in a 3D simulation world.  The simulation world itself is based on an open-source video game engine called Crystal Space.  The AI controls a humanoid agent in the sim world, and the human teacher controls another humanoid agent.  The idea is that you interact with the AI in this world and try to teach it stuff.  You can chat with the AI in a little chat window, and it can walk around and pick things up and so forth.
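To make the setup concrete, here is a minimal sketch of that kind of teacher/agent interaction loop.  This is purely illustrative -- the class and method names (`Agent`, `walk_to`, `pick_up`, `say`) are hypothetical and bear no relation to the actual Novamente or Crystal Space code:

```python
class SimObject:
    """A thing in the sim world that agents can manipulate."""
    def __init__(self, name, position):
        self.name = name
        self.position = position

class Agent:
    """A humanoid agent: it can walk around, pick things up, and chat."""
    def __init__(self, name, position=(0, 0)):
        self.name = name
        self.position = position
        self.holding = None
        self.chat_log = []

    def walk_to(self, position):
        self.position = position

    def pick_up(self, obj):
        # Can only grab an object the agent has walked to.
        if obj.position == self.position:
            self.holding = obj
            return True
        return False

    def say(self, message):
        self.chat_log.append(message)

# One tiny teaching episode: the teacher asks for the ball,
# and the AI agent walks over and grabs it.
ball = SimObject("ball", position=(3, 4))
teacher = Agent("teacher")
ai = Agent("ai")

teacher.say("Bring me the ball.")
ai.walk_to(ball.position)
ai.pick_up(ball)
```

The point of the real system, of course, is that the AI's response is learned rather than scripted -- but the embodied perception/action/chat loop has this basic shape.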


Image 9: Novamente AI Sim

This simulation world project is just getting started.  It's still a bit buggy.  The agent walks a bit awkwardly -- but that's OK, since it's a robot.  And we haven't done much with language learning yet, but we're working on it.  Right now we're dealing mostly with very, very simple stuff like playing fetch, or hiding an object and seeing if the AI remembers that it still exists (what Piaget [2] called object permanence).
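The object-permanence experiment can be sketched in a few lines.  Again, this is a toy illustration with made-up names, not the actual system: a naive agent only believes in what it can currently see, while an agent with object permanence remembers objects after they are hidden:

```python
class World:
    """Tracks which objects are currently visible versus hidden."""
    def __init__(self):
        self.visible = set()
        self.hidden = set()

    def show(self, obj):
        self.visible.add(obj)

    def hide(self, obj):
        self.visible.discard(obj)
        self.hidden.add(obj)

class NaiveAgent:
    """Believes only in what it currently perceives."""
    def observe(self, world):
        self.beliefs = set(world.visible)

    def believes_exists(self, obj):
        return obj in self.beliefs

class PermanenceAgent(NaiveAgent):
    """Remembers every object it has ever seen, even once hidden."""
    def __init__(self):
        self.beliefs = set()

    def observe(self, world):
        self.beliefs |= world.visible

world = World()
world.show("ball")

naive, permanent = NaiveAgent(), PermanenceAgent()
naive.observe(world)
permanent.observe(world)

world.hide("ball")       # hide the ball from both agents
naive.observe(world)     # the ball vanishes from the naive agent's beliefs
permanent.observe(world) # the permanence agent still remembers it
```

In the real experiment, of course, the goal is for the AI to *learn* the permanence behavior from experience rather than have it hard-coded -- the sketch just shows what the test is probing for.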

The learning methodology is to try to build an artificial baby which learns everything it needs to know based on its own experience and its interaction with you, and progressively gets smarter and smarter.  We chose a simulation world rather than a physical world largely for pragmatic reasons: it's lower cost, and it's easier for a distributed team to deal with.

Ultimately, it would be nice to embody Novamente in a physical robot and make the simulation world a rather detailed simulation of the physical robot itself.  Getting into an uploading vein, you could also make one of these simulated guys be a virtual Martine or a virtual Ben.

In the long run, you could use a massive simulated world like Second Life [3] as a vehicle here.  You could have partial human uploads, baby AI's, and human-controlled avatars all interacting with each other -- and I think that's a great way for AI's to learn. 


Footnotes

[1]. Novamente -- a software product and development firm aimed at bridging the gap between narrow and general-purpose artificial intelligence, via ongoing research on the company's Novamente Cognition Engine. Novamente.net, accessed February 8, 2007.

[2]. Jean Piaget (August 9, 1896 – September 16, 1980) -- a Swiss philosopher, natural scientist, and developmental psychologist, well known for his work studying children and his theory of cognitive development. Wikipedia.org, accessed February 8, 2007.

[3]. Second Life -- a 3D virtual world entirely built and owned by its residents. Since opening to the public in 2003, it has grown explosively, and today is inhabited by a total of 3,401,972 people from around the globe. Secondlife.com, accessed February 8, 2007.
