Friday, December 07, 2007

Reducing AGI complexity: copy only high level brain design

In my previous post, Complexity and incremental AGI design, I argued that complexity has a very serious impact on AGI development.
If we want to improve our chances of successfully implementing AGI, we need to cut complexity as much as possible.
In this post I want to touch on the topic of copying the human brain's design while developing AGI.
The human brain's structure is so complex that it is almost impossible to describe in detail how exactly the brain works.
Richard Loosemore explains why this is the case:
Imagine that we got a bunch of computers and connected them with a network that allowed each one to talk to (say) the ten nearest machines.

Imagine that each one is running a very simple program: it keeps a handful of local parameters (U, V, W, X, Y) and it updates the values of its own parameters according to what the neighboring machines are doing with their parameters.

How does it do the updating? Well, imagine some really messy and bizarre algorithm that involves looking at the neighbors' values, then using them to cross reference each other, and introduce delays and gradients and stuff.

On the face of it, you might think that the result will be that the U V W X Y values just show a random sequence of fluctuations.

Well, we know two things about such a system.

1) Experience tells us that even though some systems like that are just random mush, there are some (a noticeably large number in fact) that have overall behavior that shows 'regularities'. For example, much to our surprise we might see waves in the U values. And every time two waves hit each other, a vortex is created for exactly 20 minutes, then it stops. I am making this up, but that is the kind of thing that could happen.

2) The algorithm is so messy that we cannot do any math to analyze and predict the behavior of the system. All we can do is say that we have absolutely no techniques that will allow us to make mathematical progress on the problem today, and we do not know if at ANY time in future history there will be a mathematics that will cope with this system.

What this means is that the waves and vortices we observed cannot be "explained" in the normal way. We see them happening, but we do not know why they do. The bizarre algorithm is the "low level mechanism" and the waves and vortices are the "high level behavior", and when I say there is a "Global-Local Disconnect" in this system, all I mean is that we are completely stuck when it comes to explaining the high level in terms of the low level.

Believe me, it is childishly easy to write down equations/algorithms for a system like this that are so profoundly intractable that no mathematician would even think of touching them. You have to trust me on this. Call your local Math department at Harvard or somewhere, and check with them if you like.

As soon as the equations involve funky little dependencies such as:

"Pick two neighbors at random, then pick two parameters at random from each of these, and for the next day try to make one of my parameters (chosen at random, again) follow the average of those two as they were exactly 20 minutes ago, EXCEPT when neighbors 5 and 7 both show the same value of the V parameter, in which case drop this algorithm for the rest of the day and instead follow the substitute algorithm B...."

Now, this set of computers would be a wicked example of a complex system, even while the biggest supercomputer in the world, following a nice, well behaved algorithm, would not be complex at all.

The summary of this is as follows: there are some systems in which the interactions of the components are such that we must effectively declare that NO THEORY exists that would enable us to predict certain global regularities observed in these systems.
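
To make the flavor of this concrete, here is a minimal Python sketch of the kind of system Loosemore describes (my own illustration, not code from his post): a ring of nodes, each holding parameters U, V, W, X, Y, updated by an arbitrary local rule that occasionally switches to a substitute rule. Every constant and rule below is made up; the point is only that the global behavior has to be observed by running the system, not derived from the update rule.

```python
import random

N = 100                       # number of nodes
PARAMS = ["U", "V", "W", "X", "Y"]

# Each node starts with random values for its five parameters.
nodes = [{p: random.random() for p in PARAMS} for _ in range(N)]

def neighbors(i, k=10):
    """The k nearest nodes on a ring (a stand-in for 'the ten nearest machines')."""
    return [(i + d) % N for d in range(-k // 2, k // 2 + 1) if d != 0]

def step(nodes):
    new_nodes = []
    for i, node in enumerate(nodes):
        nbrs = neighbors(i)
        updated = dict(node)
        # "Funky little dependency": pick two neighbors and two parameters at
        # random, and make one of my parameters follow their average...
        a, b = random.sample(nbrs, 2)
        pa, pb = random.choice(PARAMS), random.choice(PARAMS)
        target = random.choice(PARAMS)
        updated[target] = 0.5 * (nodes[a][pa] + nodes[b][pb])
        # ...EXCEPT when two particular neighbors roughly agree on V, in which
        # case drop that rule and follow an arbitrary substitute "algorithm B".
        if abs(nodes[nbrs[0]]["V"] - nodes[nbrs[-1]]["V"]) < 0.01:
            updated[target] = max(node.values())
        new_nodes.append(updated)
    return new_nodes

for _ in range(1000):
    nodes = step(nodes)
# Any waves or vortices in the U values would have to be found by watching the
# run; nothing above lets us predict them analytically.
```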


So, if the low level brain design is incredibly complex, how do we copy it?

The answer is: "we don't copy the low level brain design."
The low level design is not critical for AGI. Instead we observe high level brain patterns and try to implement them on top of our own, more understandable, low level design.
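
As a toy illustration of what "implement the high level pattern on a simpler substrate" might look like (a hypothetical example of mine, not something proposed in the post): suppose the high level behavior we observe is associative recall, where activating one concept makes related concepts more available. Rather than reproducing the neural machinery that produces it, we can encode the pattern directly on a transparent data structure.

```python
from collections import defaultdict

# Hypothetical sketch: the observed high level pattern (spreading activation)
# implemented directly on a simple, understandable substrate: a weighted graph.
associations = defaultdict(dict)          # concept -> {related concept: strength}

def associate(a, b, strength=1.0):
    associations[a][b] = strength
    associations[b][a] = strength

def recall(concept, depth=2, decay=0.5):
    """Spread activation outward from `concept`, weakening with each hop."""
    activation = {concept: 1.0}
    frontier = [concept]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for nbr, weight in associations[node].items():
                a = activation[node] * weight * decay
                if a > activation.get(nbr, 0.0):
                    activation[nbr] = a
                    next_frontier.append(nbr)
        frontier = next_frontier
    return sorted(activation.items(), key=lambda kv: -kv[1])

associate("dog", "bark")
associate("dog", "cat")
associate("cat", "milk")
print(recall("dog"))   # dog, then bark and cat, then milk, by activation strength
```

The substrate here is trivial on purpose: the claim is only that the behavior we care about lives at this high level, not in the particular low level machinery that happens to produce it in the brain.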



Complexity and incremental AGI design

Why is it so hard to build Artificial General Intelligence (AGI)?
It seems we have almost everything we need: great hardware, a mature software development industry, the Internet, Google, lots of successful narrow AI projects ... but AGI is still too hard to crack.

The major reason is the overall complexity of building AGI.

Richard Loosemore writes about it:
Do we suspect that complexity is involved in intelligence? I could present lots of reasoning here, but instead I will resort to quoting Ben Goertzel: "There is no doubt that complexity, in the sense typically used in dynamical-systems-theory, presents a major issue for AGI systems"
Can I take it as understood that this is accepted, and move on?
So, yes, there is evidence that complexity is involved.


Richard also explains how exactly complexity affects system development:
when you examine the way that complexity has an effect on systems, you find that it can have very quiet, subtle effects that do not jump right out at you and say "HERE I AM!", but they just lurk in the background and make it quietly impossible for you to get the system up above a certain level of functioning. To be more specific: when you really allow the symbol-building mechanisms, and the learning mechanisms, and the inference-control mechanisms to do their thing in a full scale system, the effects of tiny bits of complexity in the underlying design CAN have a huge impact. One particular design choice, for example, could mean the difference between a system that looks like it ought to work, but when you set it running autonomously it gradually drifts into imbecility without there being any clear reason.


There is a good technique for dealing with complex systems: increase complexity gradually and carefully test every step.
That's why I think it's so important to build testable narrow AI systems prior to building AGI.
We already have many narrow artificial intelligence systems, but we need more. And we need them to become more advanced, up to the point where they become AGI.
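
As a small, hypothetical sketch of what "increase complexity gradually and carefully test every step" can look like in practice (the component names are mine, not a proposed AGI architecture): each new, narrow capability is added behind a regression suite, and nothing goes in unless everything that already worked still passes.

```python
import unittest

class Tokenizer:
    """First, deliberately narrow component."""
    def tokens(self, text):
        return text.lower().split()

class KeywordClassifier:
    """Second component, added only after the Tokenizer tests were green."""
    def __init__(self, tokenizer, keywords):
        self.tokenizer = tokenizer
        self.keywords = set(keywords)

    def matches(self, text):
        return bool(self.keywords & set(self.tokenizer.tokens(text)))

class RegressionSuite(unittest.TestCase):
    def test_tokenizer(self):
        self.assertEqual(Tokenizer().tokens("Hello World"), ["hello", "world"])

    def test_classifier(self):
        clf = KeywordClassifier(Tokenizer(), ["world"])
        self.assertTrue(clf.matches("Hello World"))
        self.assertFalse(clf.matches("Goodbye"))

if __name__ == "__main__":
    unittest.main()   # every added layer has to keep this whole suite green
```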
