Thursday, March 17, 2005

Limited AI, weak AI, strong AI

Jiri,

> your AI reminds me of an old Czech fairy-tale where a dog and a cat
> wanted to bake a really tasty cake ;-), so they mixed all kinds of
> food they liked to eat and baked it... Of course the result wasn't
> quite what they expected ;-).

That's not the case.
:-)

I know a lot of stuff, and I carefully selected features for strong AI.
I rejected far more features than I included.
And I rejected them because I think these features are useless for true AI, even though they are useful for weak AI.

> I think you should start to play with something a bit less challenging
> what would help you to see the problem with your AI.

Totally agree.
As I said --- I'm working on limited AI, which is simultaneously:
1) Weak AI.
2) A few steps toward strong AI.

There are many weak AI applications. Some weak AIs are steps toward strong AI; most weak AIs contribute almost nothing to strong AI.
That's why I need to choose the limited AI functionality carefully.

Your suggestion below may become a good example of such limited AI, given a proper system structure.

But I probably won't work on it in the near future, because it doesn't make much business sense.
======= Jiri's idea =======
How about developing a story generator. User would say something like:
I want an n-pages long story about [a topic], genre [a genre].
Then you could use Google etc. (to save some coding) and try to
generate a story by connecting some often-connected strings.
Users could provide the first sentence or two as an initial story trigger.
I do not think you would generate a regular 5-page story when using
just your statistical approach. I think it would be a pretty odd
mix of strings with a pointless storyline = something far from the
quality of an average man-made story.
===========================
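For what it's worth, the "connect often-connected strings" approach can be sketched in a few lines. This is a toy bigram model, not a real design; the corpus file name and the seed sentence are made up for the example:
-----
# Toy sketch of "connecting often-connected strings": a bigram model
# trained on a plain-text corpus (the file name is hypothetical).
import random
from collections import defaultdict

def train_bigrams(text):
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)  # remember which words follow which
    return model

def generate(model, seed, length=100):
    words = seed.split()
    for _ in range(length):
        followers = model.get(words[-1])
        if not followers:
            break  # dead end: no known continuation
        words.append(random.choice(followers))  # frequent followers win more often
    return " ".join(words)

model = train_bigrams(open("corpus.txt").read())
print(generate(model, "Once upon a time"))
-----
As Jiri predicts, the output of such a toy would be an odd mix of strings with a pointless storyline --- which is exactly why a proper system structure matters.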

Sunday, March 13, 2005

Lojban vs programming languages vs natural language

Ben, this idea is wrong:
-----
Lojban is far more similar to natural languages in both intent, semantics and syntax than to any of the programming languages.
-----

Actually, Lojban is closer to programming languages than to natural languages.
The structure of Lojban and of programming languages is predefined.
The structure of a natural language is not predefined; it is defined by examples of that language's use. This is the key difference between Lojban and natural language.

Since the structure of natural language is not predefined, you cannot put the language structure into NL parser code. Instead you need to implement a system which learns the rules of a natural language from a massive amount of examples in that language.

You are trying to code natural language rules into a text parser, aren't you?
That's why you can parse Lojban and programming languages in theory, but you cannot properly parse any natural language even in theory.


If you want to parse natural language properly, you need to predefine as few rules as possible.
I think that a natural language parser only has to be able to recognize words and phrases.
That's all the NL text parser has to be able to do.

All other mechanisms of natural language understanding should be implemented outside the text parser itself.
These mechanisms are:
- A word dictionary and a phrase dictionary (to serve as a link between natural language (words, phrases) and internal memory (concepts)).
- Relations between concepts, and mechanisms which keep these relations up to date.
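A minimal sketch of this division of labor (the dictionary entries and concept IDs are made up for illustration): the parser only recognizes words and phrases and hands back concepts; everything else lives outside of it.
-----
# The parser's only job: recognize words and phrases, and map them to
# concepts via dictionaries. Dictionary contents are made up.
PHRASES = {"life after death": 301}                # phrase dictionary -> concept ID
WORDS = {"mary": 101, "believes": 102, "in": 103}  # word dictionary

def parse(text):
    tokens = text.lower().split()
    concepts, i = [], 0
    while i < len(tokens):
        for n in (3, 2):  # try the longest phrase at this position first
            phrase = " ".join(tokens[i:i + n])
            if phrase in PHRASES:
                concepts.append(PHRASES[phrase])
                i += n
                break
        else:
            concepts.append(WORDS.get(tokens[i]))  # None = unknown word
            i += 1
    return concepts

print(parse("Mary believes in life after death"))  # [101, 102, 103, 301]
-----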

Lojban

Ben,

I think that it's a mistake to teach AI any language other than a
natural language.

Lojban is certainly not a natural language (because it was never really
tested across the variety of real-life communication purposes).

The reasons why strong AI has to be taught a natural language, not Lojban:
1) If AI understands natural language (NL), then it's a good sign that
the core AI design is correct and quite close to optimal.
If AI cannot learn NL, then it's a sign that the core AI design is wrong.
If AI can learn Lojban --- it proves nothing from a strong AI standpoint.
There are a lot of VB, Pascal, C#, and C++ compilers already. So what?

2) NL understanding has immediate practical value.
Understanding of Lojban has no practical value.

3) The NL text base is huge.
The Lojban text base is tiny.

4) Society is a "must" component of intelligence.
A huge number of people speak/write/read NL.
Almost nobody speaks Lojban.

Bottom line:
If you spend time/money on designing/teaching AI to understand Lojban ---
it would just be a waste of your resources. It has neither strategic nor tactical use.

Friday, March 11, 2005

Logic

Jiri, you misunderstand what logic is about.
Logic is not something 100% correct. Logic is a process of building conclusions based on highly probable information (facts and relations between these facts).
By "highly probable" I mean over 90% probability.
Since logic does not operate on 100% correct information, it generates both correct and incorrect answers. In order to find out whether a logical conclusion is correct, we need to test it. That's why experiment is necessary before we can rely on a logical conclusion.
Let's consider an example of logic process:
A) Mary goes to the church.
B) People who go to church believe in God.
C) Mary believes in God.
D) People who believe in God believe in life after death.
E) Mary believes in life after death.

Let's try to understand how reliable this logical conclusion could be.
Let's assume that every step has 95% probability.
Then the total probability would be 0.95 * 0.95 * 0.95 * 0.95 * 0.95 ≈ 0.77 = 77%.
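The same arithmetic in code form shows how quickly confidence decays with chain length:
-----
# Confidence of a logical chain where every step is 95% reliable.
for steps in (1, 2, 5, 10, 20):
    print(steps, round(0.95 ** steps, 2))
# prints: 1 0.95, 2 0.9, 5 0.77, 10 0.6, 20 0.36
-----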

Actually:
1) We may have wrong knowledge that Mary goes to the church (we could be confusing Mary with someone else, or Mary might have stopped going to church).
2) Not all people who go to church believe in God.
3) We could make logical mistake assuming that (A & B) result in C.
4) Not all people who believe in God believe in life after death.
5) We could make logical mistake assuming that (C & D) result in E.

Conclusion #1:
Since logic is not reliable, long logical conclusions can typically be less probable than even unreliable observations.
For instance, if Mary's husband and Mary's mother both mentioned that Mary doesn't believe in life after death, then we'd better rely on their words more than on our 5-step logical conclusion.

Conclusion #2:
Since multi-step logic is unreliable, multi-step logic is not a "must" component of intelligence. Therefore logic implementation can be skipped in the first strong AI prototypes.
Limited AI can function very well without multi-step logic.

Friday, March 04, 2005

Background knowledge --- how much data do we need?

Jiri> And try to understand that when testing AI (by letting it solve
Jiri> particular problem(s)), you do not need the huge amount of data you
Jiri> keep talking about. Let's say the relevant stuff takes 10 KB (and it
Jiri> can take MUCH less in many cases). You can provide 100 KB of data
Jiri> (including the relevant stuff) and you can perform lots of testing.
Jiri> The solution may even be included in the question (like "What's the
Jiri> speed of a car which is moving 50 miles per hour?"). There is
Jiri> absolutely no excuse for a strong AI to miss the right answer in those
Jiri> cases.

Do you mean that 100 KB of data as background knowledge is enough for strong AI?
Are you kidding?

By the age of 1 year, a human baby has parsed at least terabytes of information, and keeps at least many megabytes of information in his/her memory.

Do you think a 1-year-old human baby has strong AI with all this knowledge?


Yes, artificial intelligence could have an advantage over natural intelligence. AI can be intelligent with a smaller amount of info.
But not with 100 KB anyway.
100 KB is almost nothing for General Intelligence.

From Limited AI to Strong AI

Jiri> OK, you have a bunch of pages which appear to be relevant.
Jiri> What's the next step towards your strong AI?

Next steps would be:
1) Implementation of hardcoded goals.
2) Implementation of experiment feature.
3) Natural Text writing.
4) ...

How many types of relations should strong AI support?

Dennis>> why 4 types of relations are better than one type of relations?

Jiri> Because it better supports human-like thinking. Our mind is working
Jiri> with multiple types of relations on the level where reasoning applies.

Our mind works with far more than 4 types of relations.
That's why it's not a good idea to implement 4 types of relations: on the one hand it's too complex, on the other hand it's still not enough.
A better approach would be to use one relation type which is able to represent all the other types of relations (see the sketch below).
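A sketch of what I mean (all names here are illustrative): every relation is just a weighted link between two concepts, and a "type" of relation is itself a concept that other concepts can link through.
-----
# One universal relation: a weighted link between two concepts.
# A typed relation like Mary --(believes-in)--> God is represented by
# linking through a "believes-in" concept. Names are illustrative.
from collections import defaultdict

relations = defaultdict(float)  # (concept_a, concept_b) -> weight

def relate(a, b, weight=1.0):
    relations[(a, b)] += weight  # repeated evidence strengthens the link

relate("mary", "believes-in")
relate("believes-in", "god")
relate("mary", "god", 0.5)  # weaker direct association
print(dict(relations))
-----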

Thursday, March 03, 2005

Learning common sense from simple Natural Text parsing

Jiri,

>> 1) Could you please give me an example of two words which are used near
>> each other, but do not have cause-effect relations?

> I'll give you 6. I'm in a metro train right now and there is a big
> message right in front of me, saying: "PLEASE DO NOT LEAN ON DOORS"
> What cause(s) and effect(s) do you see within that statement?


Let's imagine that a strong AI is in the middle of a reasoning process.
In order to do general reasoning, AI needs to have background knowledge (common sense). That's what CyCorp is trying to achieve.
Now let's consider what kind of background knowledge can be extracted from the statement "PLEASE DO NOT LEAN ON DOORS".
(Obviously this knowledge extraction should happen outside of actual decision-making time, because a huge amount of text has to be parsed and our test statement is just one of many millions of statements.)

OK, here's what we know from the test statement:
- If you think about "lean" --- think about "doors" as one of the options.
- If you think about "doors" --- think about "lean" as one of the options.
- If you say "do not" --- think about saying "please" too.
- If you say "do" --- think about saying "please" too.
- "Doors" is a possible cause for "not lean".
- "Doors" is a possible cause for "lean".
- You "lean" "on" something.
- If you think about "on" --- think about "doors" as one of the options.

You can extract more useful information from this sentence.
Even "Please" -> "Doors" and "Doors" -> "Please" have some sense. Not much though. :-)
A statistical approach would help to find which relations are more important than others.
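Here's a minimal sketch of that statistical rating (illustrative only): count how often words occur near each other; the counts become the weights used to rank relations.
-----
# Count co-occurring word pairs; counts act as relation weights.
from collections import Counter
from itertools import combinations

def extract_relations(sentence, counts):
    words = sentence.lower().split()
    for a, b in combinations(words, 2):
        counts[(a, b)] += 1  # every nearby pair is a candidate relation

counts = Counter()
extract_relations("please do not lean on doors", counts)
# After millions of statements, frequent pairs like ("lean", "doors")
# would outweigh weak ones like ("please", "doors").
print(counts.most_common(3))
-----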

Do you see my point now?

When it's time to make an actual decision, AI would have a common sense database which provides a large, but not endless, amount of choices to consider.
All these choices would be pre-rated. That would help to prioritize consideration of these choices.



Now let's consider whether the structure of the main memory should be adjusted in order to transform limited AI into strong AI.
I don't see any reason to change the memory structure in order to make such a transition.
Additional mechanisms for updating cause-effect relations would be introduced, such as experiment, advanced reading, and "thought experiment". But all these new mechanisms would still use the same main memory.

Tuesday, March 01, 2005

Simple AI as a necessary prototype for complex AI

Jiri,

1) Goals defined by operator are even more dangerous.
2) You can load data from CYC, but this data wouldn't become knowledge. Therefore it wouldn't be learning. And it wouldn't be useful.
Goals are still necessary for learning. Only goals give sense to learning.

3) Why would a long question cause a "no answer found" result? Quite the contrary --- the longer the question, the more links to possible answers can be found.

4)
>> Bottom line: "Generalization is not core AI feature".

> It's not a must for AI, but it's a pretty important feature.
> It's a must for Strong AI. AI is very limited without that.

- I have ideas about how to implement the generalization feature.
Would you like to discuss these ideas?
- I think that it's not a good idea to implement generalization in the first AI prototype.
Do you think that generalization should be implemented in the first AI prototype?


5)
> "Ability to logically explain the logic" is just useful for invalid-idea
> debugging.
> So I recommend to (plan to) support the feature.

All features are useful. The problem is --- when we put too many features into a software project --- it just dies.
That's why it's important to prioritize the features correctly.

Do you think that logic should be implemented in the first AI prototype?

50 years of trying to put logic into the first AI prototype proved that it's not a very good idea.



6) Reasoning tracking
> It's much easier to track "reasons for all the (sub)decisions"
> for OO-based AI.

No, it's not easier to track reasoning in AI than in a natural intelligent system.
Evolution could have coded such an ability. But evolution didn't implement 100% tracking of reasoning.
There are very essential reasons for avoiding 100% reasoning tracking.
Such tracking simply makes an intelligent system more complex, slower, and therefore very awkward.
And an intelligent system is a very fragile system even without such a "tracking improvement".

Bottom line: The first AI prototype doesn't need to track the process of its own reasoning. Only reasoning outcomes should be tracked.


7) AIML
> Your AI works more-less in the AIML manner. It might be fun to play
> with, but it's a dead end for serious AI research.
> AIML = "Artificial Intelligence Markup Language", used by Alice and
> other famous bots.

Does AIML have the ability to relate every concept to every other concept?
Do these relations have weights?
Does one word correspond to one concept?
Is learning process automated in Alice?
Is forgetting feature implemented in Alice?


8)
>>If I need 1-digit precision, then my AI just needs to remember a few hundred
>>combinations
> searching for stored instances instead of doing real
> calculation is a tremendous inefficiency for a PC based AI.

Calculation is faster than search, but only if you already know that calculation is necessary. How would you know that calculation is necessary when you parse text?
The only way is to find what you have in your memory. So you can just find the answer there.

But yes, sometimes the required calculations are not that easy. In this case the best approach would be to extract approximate results from the main memory and make precise calculations through math functions.
And again, this math-functions integration is not a top-priority feature. Such a feature is necessary for technical tasks, not for basic activity.
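A toy sketch of this memory-first approach (the memory contents are made up): look the answer up among remembered combinations first, and fall back to a math function only when memory has no answer.
-----
# Memory-first arithmetic: a few hundred remembered combinations,
# with a real calculation as the fallback for technical tasks.
memory = {(a, b): a + b for a in range(10) for b in range(10)}

def answer(a, b):
    if (a, b) in memory:
        return memory[(a, b)]  # remembered combination: plain lookup
    return a + b               # rare case: do the actual calculation

print(answer(3, 4))     # found in memory
print(answer(123, 45))  # falls back to calculation
-----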


>> Intelligence is possible without ability to count