Tuesday, February 20, 2007

Larry Page talks about AI

Google's Page urges scientists to market themselves
Google co-founder Larry Page has a theory: your DNA is about 600 megabytes compressed, making it smaller than any modern operating system like Linux or Windows.
"We have some people at Google (who) are really trying to build artificial intelligence and to do it on a large scale," Page said to a packed Hilton ballroom of scientists. "It's not as far off as people think."

I agree with Larry Page: human DNA is relatively small.
Besides, not all human DNA is in charge of the brain; I'd guess that only something like 10% of the whole DNA is related to brain development.
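Page's "600 megabytes" figure can be sanity-checked with back-of-envelope arithmetic. This is only a sketch: the ~3.1 billion base-pair count is the standard estimate for the human genome, 2 bits per base follows from the four-letter alphabet (A, C, G, T), and the 25% compression reduction is an assumed placeholder, not a measured ratio.

```python
# Sanity check of Page's "~600 MB" figure for the human genome.
# Assumptions: ~3.1e9 base pairs; 4 possible bases -> 2 bits per base.
base_pairs = 3.1e9
bits_per_base = 2

raw_bytes = base_pairs * bits_per_base / 8
raw_mb = raw_bytes / 1e6
print(f"Uncompressed genome: ~{raw_mb:.0f} MB")  # ~775 MB

# Genomic sequence is highly repetitive, so it compresses well;
# even a modest, hypothetical 25% reduction lands near Page's figure.
compressed_mb = raw_mb * 0.75
print(f"Compressed (assumed 25% reduction): ~{compressed_mb:.0f} MB")
```

So the raw encoding is under a gigabyte, and any reasonable compression brings it into the few-hundred-megabyte range Page cites.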

I wrote about that over 3 years ago:
The time has come to develop a Strong Artificial Intelligence system
A strong AI project is quite a complex software project. However, even more complex systems have been implemented in the past. Many software projects are more complex than human DNA (note that human DNA contains far more than just the genetic code for intelligence).

What Page is talking about is inaccurate both biologically and technologically.

That in itself is not surprising; what is shocking is that the majority of the media/blogosphere seems to be accepting what he says implicitly. Frankly speaking, his argument is a joke.
Harish TM, what exactly is your point?

That Larry Page is too pessimistic or too optimistic?

Let's consider your two arguments separately.

1) You claim that strong AI doesn't need as much computational power as the inefficient human brain does.

Assuming that our strong AI algorithm would be more efficient than the human intelligence algorithm -- you are right.

But most probably Google will want this strong AI to work as fast as possible. Maybe thousands of times faster than a human.

In this case, lots of computational power would be very useful.

2) Your DNA discussion seems to have no point at all.
So what if DNA code is very hard to understand?
There are lots of programs that are very hard to understand (and they were written by humans, not by mutations and natural selection).

Larry Page doesn't claim that his engineers are going to decode DNA. He's just trying to estimate the overall complexity of the DNA "program."

From my perspective his estimate is appropriate and even slightly pessimistic.
He didn't take into account the fact that most of human DNA has little to do with intelligence.
Another optimist about Google AI:
Google and Artificial Intelligence

Dennis, what you've missed in Harish's argument is that just knowing the sequence of DNA is not enough to predict how it works as a "program."

Conservative estimates for the number of genes that humans contain are about 30-40 thousand. Obviously not all of them code for proteins. So let's make an even more conservative estimate and say only 20,000 proteins are produced and exist in a cell. So here's the problem: you need to figure out what each of these 20,000 proteins is doing, how they interact with each other, and so on. And when you know that, you only know what is happening inside one cell. What you'd next need to figure out is how each of the trillion or so cells that make up your central nervous system interact with each other and produce what constitutes intelligence.

All of this is just a long-winded way of me saying that strong AI may or may not be possible, but using the size of the human genome as an argument for the possibility of strong AI is a flawed argument at best, or plain deceptive at worst.
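The scale Harish describes can be sketched numerically. Taking the comment's own assumed figure of 20,000 proteins in a cell, even counting only potential pairwise interactions gives a number four orders of magnitude larger than the gene count (a purely combinatorial illustration, not a biological measurement):

```python
from math import comb

# Illustrative only: how many potential pairwise interactions exist
# among the 20,000 proteins assumed in the comment above?
proteins = 20_000
pairwise = comb(proteins, 2)  # n * (n - 1) / 2
print(f"Potential pairwise interactions: {pairwise:,}")  # 199,990,000
```

Roughly 200 million candidate pairs from 20,000 parts — which is the point of the argument: the sequence length alone says little about the complexity of the interactions it gives rise to.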
We don't have to figure out how proteins work inside a cell in order to design AI.

We don't even have to know how the human brain works in order to design AI.

It's helpful to know how the brain works, but it's not necessary for designing AI.

Knowing the size of DNA helps to estimate the overall complexity of a strong AI system.

If you know other ways to estimate the complexity of a strong AI system (which hasn't been created yet) - please let me know.
Nope, I don't know how to estimate the complexity of an AI, but I can tell you that Page's argument is wrong. Here's what he says:

1. Human DNA is simple.
2. Human DNA "programs" human brains.
3. Humans are intelligent.
4. Therefore, AI is simple.

The flaw here lies in the second step. DNA is not the only factor that creates a living system. In addition to DNA size, you need to factor in the interactions between the proteins produced by that DNA, since the DNA does not say how those proteins interact with each other, and since it is the interactions between proteins that bring about the functioning of a cell, and so on.

Using the metaphor of a "program" for DNA is wrong, and it's that assumption that I'm contesting.

Strong AI may or may not be simple. All I'm saying is you can't use the "simplicity" of the human genome to prove it.
> since the DNA does not say how those proteins interact with each other


It seems that you are challenging the fundamentals of genetics :-)

Let me ask you this: how do all people manage to have two legs and two ears if DNA does not define the interactions between proteins?

My point is that brain (and body) structure is predefined by DNA.
Whatever information is stored in the brain is defined mostly by the environment, but the structure is defined by DNA.

When we build strong AI, we want to hardcode the AI's structure and then fill it with information from the environment.

The complexity of our code should be comparable to the complexity of whatever defines human brain structure.

According to genetics, human brain structure is defined by DNA and DNA only (with very few exceptions, such as physical trauma).

Therefore Larry Page is right when he claims that strong AI should have complexity comparable to DNA complexity.
It seems that you are challenging the fundamentals of genetics :-)

Nope. I'm a trained microbiologist, so trust me when I say I know what I'm talking about! :)

Let me ask you this: how do all people manage to have two legs and two ears if DNA does not define the interactions between proteins?

Let me answer that with another question. Given a human genome, how would you create a new human being? Can I put it into a frog egg and expect it to work?

DNA works as a description or a recipe. Just as you cannot say what a cake will taste like by reading the recipe, reading the code of DNA does not tell you everything about how the protein formed from it will work. In fact, it says NOTHING at all about the three-dimensional structure of the protein. Sure, we can make SOME predictions when we know the linear sequence of amino acids, but we cannot be 100% sure. If this were the case, then all of human biology would have ended when we sequenced the human genome. The fact that we are nowhere near a complete understanding should serve to remind people that this is manifestly not the case!

Secondly, knowing the three-dimensional structure of a protein doesn't give you a complete understanding of its functions. A protein functions in a complete physical environment; change that environment and it could function in a completely different way! Prions are just one example of this, but even in a normal cell, "moonlighting" proteins are by far the norm rather than the exception. If we knew everything a protein did just by its linear sequence, why are hundreds of thousands of cell biologists working their asses off in labs, studying different model systems and cell lines, trying to figure out what different proteins do?

I'm not claiming that the human genome would be easy to decode.

A compiled version of Windows XP is extremely hard to decode into useful source code.

But just as it's not exceptionally challenging to reproduce another operating system, it's not exceptionally challenging to reproduce intelligence.
But just as it's not exceptionally challenging to reproduce another operating system, it's not exceptionally challenging to reproduce intelligence

Cool. We're back to where we started. At no point in time was I debating whether it's easy or hard to reproduce intelligence. To quote from my first comment:

...strong AI may or may not be possible, but using the size of the human genome as an argument for the possibility of strong AI is a flawed argument...

This is all I'm saying. I'll defer to your expertise on AI and how easy it may be, but the fact that DNA is easy or hard to understand is not an argument for how easy or hard AI is. That was my issue with what Larry Page said too. As a biologist I took exception to the position of "genetic determinism" that is implied in his statement, that's all.
You are confusing "complexity" with "hard to understand".

These terms are different (despite some correlation between them).

It's possible that a moderately complex system would be virtually impossible to understand.

It's also possible that a very complex system would be quite understandable.

Larry Page implies that DNA complexity is within the range of the complexity of modern man-made systems. But Page doesn't claim whether it would be easy to decode DNA or not.
I think his claim about strong AI is kind of misleading. If computational power were that important for creating AI, present-day supercomputers would be a lot more intelligent than they are now. I think our intelligence is based on clever information processing and our very efficient way of storing information in a complex relational way.

Processing more data is helpful for advances in AI, but the real breakthroughs have to be made in the fields of information processing and information storage.
mister T,

"Smart algorithms and computational power" are more important for AI development than "very complex smart algorithms".

Do you see Larry Page's point?