Sunday, January 07, 2007

Should Strong AI have its own goals?

Short answer: Yes and No.
Long answer: Strong AI can add and modify millions of soft-coded goals. At the same time, Strong AI shouldn't be able to change its own super goals.
Why?

Here are the reasons:

1) In its normal working cycle, strong AI modifies soft-coded goals in compliance with embedded super goals. If strong AI had the ability to modify its super goals, it would modify (or terminate) the super goals instead of achieving them.
Example:
Without the ability to modify the super goal "survive", the computer will try to protect itself: it will think about power supply, safety, and so on.
With the ability to modify super goals, the computer would simply terminate the goal "survive" and create the goal "do nothing" instead, just because it is the easiest goal to achieve. Such a "do nothing" goal would result in the death of this computer. (A code sketch of this separation follows the list of reasons.)


2) If Strong AI can change its super goals, then Strong AI will work for itself instead of working for its creator. Strong AI's behavior would eventually become uncontrollable by the AI creator / operator.

3) The ability to reprogram its own super goals makes a computer behave like a drug addict.
Example:
The computer could create a new super goal for itself: "listen to music", "roll the dice", "calculate the digits of pi", or "do nothing". It would result in Strong AI doing useless stuff or simply doing nothing. The end point: uselessness to society, and death.
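To make the separation concrete, here is a minimal sketch in Python. All names here are hypothetical; the post itself prescribes no data structures, and Python cannot truly stop a determined program from touching its own internals. The point is only the shape of the design: super goals get no write path in the normal working cycle.

```python
class Agent:
    def __init__(self, super_goals):
        # Super goals are fixed at construction time: stored in an
        # immutable tuple and never reassigned afterwards.
        self._super_goals = tuple(super_goals)  # hard-coded
        self.sub_goals = []                     # soft-coded, freely editable

    @property
    def super_goals(self):
        # Read-only view: no setter is defined, so the agent cannot
        # terminate "survive" and swap in "do nothing" (reason 1 above).
        return self._super_goals

    def add_sub_goal(self, goal):
        # Sub-goals may be added, modified, or dropped at will,
        # but only in service of the fixed super goals.
        self.sub_goals.append(goal)


agent = Agent(super_goals=["survive", "serve the operator"])
agent.add_sub_goal("check power supply")  # fine: serves "survive"
# agent.super_goals = ("do nothing",)     # AttributeError: no setter
```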

Comments:
In some of my previous work I actually went over a principle for why Strong AI cannot change its own goals: if every action of the AI is in accordance with its purpose, there is no way that an action can change the purpose.

Take us humans, for example. No one wants to admit it, but humans live for pleasure. When we eat, our mind has learned to give us pleasure, so we will continue to eat in the future. Satisfaction is another type of pleasure. As a result, whatever action we take is an action for pleasure now or pleasure later. I even went a step further to prove that pleasure later is pleasure now, because of hope or contentment. If you really think about it, whatever we do is always for pleasure. We cannot change that.

Pleasure drives the human to survival. Likewise with the machine. It has a purpose, and it cannot change that, because every action it takes springs from the purpose.
 
Harrison,

how do you define the meaning of the term "goal"?

Do you distinguish between super-goals and temporary sub-goals?
 
Super-goals are goals that are generally hard-coded into the robot. Sub-goals, then, are a result of the super-goals and the sum of all the experiences a robot has. Of course I do mean super-goals when I say that a robot cannot change its goals.

I myself define a 'goal' as an instance of reality which exists in thought, and which the entity wishes to make real. Of course, then there is the goal to obtain goals, and that makes a circular reference (see Gödel, Escher, Bach for some great examples). My theory is that an intelligent program must be coerced into making actions which are consistent with its purpose. And that ability is how I define intelligence.
 
Harrison,

It seems that your definition of the term "goal" includes both super-goals and sub-goals.

So, according to your own definition, your statement that "Strong AI cannot change its own goals" is not correct, because Strong AI can and should change its own sub-goals in order to function properly in a changing environment.
 
My definition of goal fits both sub-goals and super-goals. However, super-goals are supposed to be hard-coded into the entity, and actions are then performed according to them.

Sub-goals should be able to be changed; sorry if I didn't say that clearly. Sub-goals are actually the result of super-goals combined with the experience of the entity. As experience changes, so should sub-goals (i.e., after the goals are fulfilled and the entity takes note). A toy sketch of that relationship follows.
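Here is one way to picture it in Python. The function and the rules inside it are made-up examples, not anything from an actual implementation: the fixed super-goals plus accumulated experience are the only inputs, and the sub-goals are recomputed from them.

```python
def derive_sub_goals(super_goals, experience):
    """Recompute sub-goals from fixed super-goals plus experience.

    `experience` is a set of observed facts; the rules below are
    toy examples of how experience reshapes sub-goals.
    """
    sub_goals = []
    if "survive" in super_goals:
        if "battery low" in experience:
            sub_goals.append("find a power outlet")
        else:
            sub_goals.append("monitor the battery")
    return sub_goals


# As experience changes, so do the sub-goals; the super-goals never do.
print(derive_sub_goals(("survive",), {"battery low"}))  # ['find a power outlet']
print(derive_sub_goals(("survive",), set()))            # ['monitor the battery']
```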
 
May I ask you one question: how close do you think we are to Strong AI?

My project revolves around some theories that I developed back in my MSc year, which may go against some of the traditional AI principles. My colleagues and I hold that there have been few advances in the field of AI within the last 10 years. Sure, there have been 'semantic webs' and 'neural networks', but I feel research is really going in the wrong direction.

Like that 'chatbot' we were talking about. How many intelligent chatbots can we find today? ALICE? HAL? Neither of them can beat a six-year-old at conversation.

We may be violating convention here, but my model of Strong AI involves a store (memory), a mill (processing), and an input and an output. It does not matter whether that I/O is cameras and motor joints or a teletype. I am reducing the problem, and the principle behind it should still be the same. In code, the loop I have in mind is roughly the sketch below.
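(A hypothetical Python sketch; the names are mine for illustration and not from any actual implementation.)

```python
def run(store, mill, read_input, write_output):
    """Babbage-style loop: store (memory), mill (processing), plus I/O.

    `read_input` and `write_output` could wrap a camera and motor
    joints, or a teletype; the loop does not care which.
    """
    while True:
        percept = read_input()   # input
        store.append(percept)    # memory
        action = mill(store)     # processing
        write_output(action)     # output
```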

Would you say so?
 
Harrison,

In your AI model, what would be the data structure of sub-goals?

What would be the data structure of super-goals?
 
I expect strong AI to be developed in 10 to 70 years.

About chatbots: have you considered Google as a chatbot?

You type something - Google replies. You type something again - Google replies.
Any topic is ok.

The Google chatbot still misses many important strong AI features, but it's definitely a huge step toward strong AI.

You "input/output, memory, processor" approach is nothing new. Almost all devices are built that way (including computer, TV, radio, clock, car, etc).
 
A computer can simulate a human body, including the brain, which is made of particles, can't it? It's kind of a debatable issue. Please let me know your opinions.

I guess the difference is, I think, that the human it will be simulating won't have goals or free will, whatever we call it.

The AI should have goals as long as we don't mind it having those goals, i.e. goals that are not harmful to humans.

The following examples are from Roger Penrose's book "Shadows of the Mind".

The example consists of a chess board with a wall of pawns from both sides (all pawns alive, interlocked in a zigzag) separating the two camps; White has only its king, while the other side has plenty of other pieces.
The best chess-playing program of the time, Deep Thought (playing White), broke the pawn wall and lost miserably against the human.

I think that, as a program, Deep Thought did not understand the purpose of the pawn wall it had itself accidentally created. So an AI without goals would be as dumb as a computer.

Consider another situation, where a little kid looks at an incomplete picture.
The kid goes and picks up a sketch pen of the proper color and paints it in.

How would an AI system be able to perform this set of actions if the goal of completing the picture were not behind it, together with a previous association between that goal and the actions that complete it?

I feel that goals are where actions spring from, and that intelligence, as Harrison said, is being able to associate actions (in a sequence) with goals, and probably vice versa. A toy sketch of that association follows.
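One way to read that as code (a hypothetical Python sketch; the comment proposes no concrete mechanism): intelligence here is reduced to a learned mapping from goals to action sequences.

```python
# Learned associations: goal -> sequence of actions (toy data).
associations = {
    "complete the picture": [
        "find the missing region",
        "pick a sketch pen of the proper color",
        "paint the region",
    ],
}

def act_toward(goal):
    # Without a goal and a stored association, there is no
    # principled way to select this particular action sequence.
    for action in associations.get(goal, []):
        print("doing:", action)

act_toward("complete the picture")
```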
 
Anonymous,

any goal can be harmful.

But it's OK if a goal is more useful than harmful.
 