Monday, August 25, 2008

Narrow AI in PostJobFree.com

I strongly believe that the best way to AGI (Artificial General Intelligence) is to build narrow AI and then gradually extend it toward more and more general intelligence.

Finally, I implemented some of my AI techniques on a real-life web site: PostJobFree.com.
Now PostJobFree.com intelligently calculates the Daily Job Posting Limit. The calculations are based on how many times a recruiter's postings were viewed, and how many times these postings were reported as spam.
I cannot claim that this feature has "advanced intelligence", but it is intelligent nevertheless.

Here are the intelligent techniques we used to build that feature:

1) Preprocessing data prior to using it in decision making.
Raw data comes in the form of "page views" and "spam report clicks".
A special process aggregates this raw input into the RecruiterRating and JobRating tables.

2) Forgetting.
The most recent data is usually more valuable for decision making.
That's why yet another PostJobFree process makes sure that old data slowly loses its value (and disappears if the value is too low).
We implemented it by simply decreasing values in some columns of the RecruiterRating and JobRating tables by 1% every night (see the sketch below).
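
Here is a minimal sketch of such a nightly decay job, assuming a SQL-style store. The ViewScore column name and the cutoff value are illustrative assumptions, not the actual PostJobFree schema.
-----
# Nightly "forgetting" job: decay rating values by 1% and drop negligible rows.
import sqlite3

DECAY = 0.99        # keep 99% of yesterday's value, i.e. decrease by 1%
MIN_VALUE = 0.01    # below this, the row no longer carries a useful signal

def nightly_decay(conn: sqlite3.Connection) -> None:
    # Table names match the post (RecruiterRating, JobRating); the column is assumed.
    for table in ("RecruiterRating", "JobRating"):
        conn.execute(f"UPDATE {table} SET ViewScore = ViewScore * ?", (DECAY,))
        conn.execute(f"DELETE FROM {table} WHERE ViewScore < ?", (MIN_VALUE,))
    conn.commit()
-----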


Here's what I've learned from implementing my first real-life intelligent feature:
1) The best working formulas and algorithms are relatively simple.
2) Still, it takes time to carefully propose, test, choose, and implement an intelligent algorithm.
3) If the system is designed properly, performance is not an issue.

Wednesday, May 14, 2008

Artificial General Intelligence project

Funny quote from AGI mailing list:

=======

Linas Vepstas: How about joining effort with one of the existing AGI projects?

Vladimir Nesov: "They are all hopeless, of course. That's what every AGI researcher
will tell you... ;-)"

Richard Loosemore: "Oh no: what every AGI researcher will tell you is that every project is hopeless EXCEPT one. ;-)"
=======



Saturday, February 09, 2008

How do we learn

Mark Gluck gives an interesting explanation of cognitive processes in the human brain:
The Cognitive and Computational Neuroscience...

Mark explains that we learn both from observation and from experiment.



Friday, December 07, 2007

Reducing AGI complexity: copy only high level brain design

In my previous post, Complexity and incremental AGI design, I claimed that complexity has a very serious impact on AGI development.
If we want to improve our chances of successful AGI implementation, we need to cut complexity as much as possible.
In this post I want to touch the topic of copying human brain design while developing AGI.
The human brain's structure is so complex that it's almost impossible to describe in detail how exactly the brain works.
Richard Loosemore explains why this is the case:
Imagine that we got a bunch of computers and connected them with a network that allowed each one to talk to (say) the ten nearest machines.

Imagine that each one is running a very simple program: it keeps a handful of local parameters (U, V, W, X, Y) and it updates the values of its own parameters according to what the neighboring machines are doing with their parameters.

How does it do the updating? Well, imagine some really messy and bizarre algorithm that involves looking at the neighbors' values, then using them to cross reference each other, and introduce delays and gradients and stuff.

On the face of it, you might think that the result will be that the U V W X Y values just show a random sequence of fluctuations.

Well, we know two things about such a system.

1) Experience tells us that even though some systems like that are just random mush, there are some (a noticeably large number in fact) that have overall behavior that shows 'regularities'. For example, much to our surprise we might see waves in the U values. And every time two waves hit each other, a vortex is created for exactly 20 minutes, then it stops. I am making this up, but that is the kind of thing that could happen.

2) The algorithm is so messy that we cannot do any math to analyze and predict the behavior of the system. All we can do is say that we have absolutely no techniques that will allow us to make mathematical progress on the problem today, and we do not know if at ANY time in future history there will be a mathematics that will cope with this system.

What this means is that the waves and vortices we observed cannot be "explained" in the normal way. We see them happening, but we do not know why they do. The bizarre algorithm is the "low level mechanism" and the waves and vortices are the "high level behavior", and when I say there is a "Global-Local Disconnect" in this system, all I mean is that we are completely stuck when it comes to explaining the high level in terms of the low level.

Believe me, it is childishly easy to write down equations/algorithms for a system like this that are so profoundly intractable that no mathematician would even think of touching them. You have to trust me on this. Call your local Math department at Harvard or somewhere, and check with them if you like.

As soon as the equations involve funky little dependencies such as:

"Pick two neighbors at random, then pick two parameters at random from each of these, and for the next day try to make one of my parameters (chosen at random, again) follow the average of those two as they were exactly 20 minutes ago, EXCEPT when neighbors 5 and 7 both show the same value of the V parameter, in which case drop this algorithm for the rest of the day and instead follow the substitute algorithm B...."

Now, this set of computers would be a wicked example of a complex system, even while the biggest supercomputer in the world, following a nice, well behaved algorithm, would not be complex at all.

The summary of this is as follows: there are some systems in which the interaction of the components are such that we must effectively declare that NO THEORY exists that would enable us to predict certain global regularities observed in these systems.


So, if the low level brain design is incredibly complex, how do we copy it?

The answer is: "we don't copy the low level brain design".
The low level design is not critical for AGI. Instead, we observe high level brain patterns and try to implement them on top of our own, more understandable, low level design.



Complexity and incremental AGI design

Why is it so hard to build Artificial General Intelligence (AGI)?
It seems we have almost everything we need: great hardware, a mature software development industry, the Internet, Google, lots of successful narrow AI projects ... but AGI is still too hard to crack.

The major reason is the overall complexity of building AGI.

Richard Loosemore is writing about it:
Do we suspect that complexity is involved in intelligence? I could present lots of reasoning here, but instead I will resort to quoting Ben Goertzel: "There is no doubt that complexity, in the sense typically used in dynamical-systems-theory, presents a major issue for AGI systems"
Can I take it as understood that this is accepted, and move on?
So, yes, there is evidence that complexity is involved.


Richard also explains how exactly complexity affects system development:
when you examine the way that complexity has an effect on systems, you find that it can have very quiet, subtle effects that do not jump right out at you and say "HERE I AM!", but they just lurk in the background and make it quietly impossible for you to get the system up above a certain level of functioning. To be more specific: when you really allow the symbol-building mechanisms, and the learning mechanisms, and the inference-control mechanisms to do their thing in a full scale system, the effects of tiny bits of complexity in the underlying design CAN have a huge impact. One particular design choice, for example, could mean the difference between a system that looks like it ought to work, but when you set it running autonomously it gradually drifts into imbecility without there being any clear reason.


There is a good technique for dealing with complex systems: increase complexity gradually and carefully test every step.
That's why I think it's so important to build testable narrow AI systems prior to building AGI.
We have many narrow artificial intelligent systems already, but we need more. And we need them to become more and more advanced, up to the point when they become AGI.

Tuesday, May 01, 2007

Self-emergence of intelligence in humans and artificial systems

The human brain is self-emergent on many levels. Here's a simplified sequence of human brain self-emergence:
1) Human genes build a "Brain Builder". The Brain Builder consists of:
- Neuron Factory – neurons with reproductive ability.
- Brain Structure Manager – hormones and other mechanisms that define brain structure.

2) The Brain Builder builds an "Empty Brain" --- a fully assembled, but mostly empty brain: super goals are defined, but there is no external knowledge yet and no sub-goals defined yet.

3) By experimenting and learning, the Empty Brain evolves into a Brain with Mind (a fully working intelligent system, with lots of external knowledge and sub-goals).

Every step in this sequence means self-emergence.

So when we build an artificial intelligent system, what should we build: Genes, a Brain Builder, an Empty Brain, or a Brain with Mind?

I believe that building an Empty Brain is our best option.
Below are my reasons.

Why not build Brain with Mind?

In order to build a Brain with Mind we have to build an Empty Brain anyway, but our task will be considerably more complex, because a fully loaded mind is at least 10 times more complex than an Empty Brain. It's like the complexity of an empty computer compared with the complexity of all the software loaded onto a regular "in use" computer.
Bottom line: there is no point for AI developers to pre-load a mind into strong AI, when an Empty Brain system can do it itself.


Why not build Brain Builder?

The complexity of a Brain Builder is probably comparable with the complexity of an Empty Brain. But from an engineering perspective, developing a Brain Builder is considerably more complex.
1) Let's assume that we haven't designed an Empty Brain yet. In this case we have no clue what the output of our Brain Builder should be. That means that we cannot test or debug the Brain Builder. There are no checkpoints to verify that our development is on the right track.
Inability to test and debug a complex system makes development of such a system virtually impossible.
The only working approach in this situation would be to tweak some of the Brain Builder's settings and then run a full test: build an Empty Brain and wait for several years to check whether it evolves into a Brain with Mind.
Mother Nature was quite efficient with this approach. It took just a few billion years to develop a proper Brain with Mind. I doubt that human researchers applying such an approach would accomplish the task considerably faster.

2) Let's assume that we have already designed a working model of an Empty Brain. In that case, what's the point of designing a Brain Builder? Our industry can easily reproduce any working model in mass quantity.


Why not build Genes?

Building Genes which would build a Brain Builder is even more complex than building the Brain Builder itself.
The reasons are the same as in "Why not build Brain Builder?":
If we don't have a working model of a Brain Builder yet, then we effectively cannot test and debug genes.
If we do have a working model of a Brain Builder, then why bother with Genes?


Parallels with existing systems

1) CYC is trying to build a Brain with Mind system. Actually it's even worse – they are trying to build a Mind without a Brain --- no self-learning ability, no super goals.
That road leads nowhere.

2) Google is a Brain with Mind which was developed as an Empty Brain. Google's Empty Brain has a working crawler and other self-learning mechanisms. This approach proved to be very efficient, and eventually Google's Empty Brain emerged into a Brain with Mind – a very smart search system.

3) It seems that there are no famous Brain Builder projects. But I'm sure that some researchers do attempt to build a "Brain Builder". So far – no success at all, for the reasons I explained above.

Conclusion

Building an Empty Brain capable of self-emerging into a fully capable Brain with Mind is the most feasible engineering approach in strong AI development.


---
This post is a result of a discussion with David Ashley. He is a proponent of the "Brain Builder" approach.

Sunday, April 15, 2007

Intelligence: inherited through genes or gained from environment?

Human intelligence is acquired from the environment, not encoded in genes.
Genes provide a framework which allows learning from the environment. This framework is critical for intelligence, but does not provide intelligence by itself.

===== By Richard Loosemore (2007 April 05) in AGIRI forum =====
If we were aliens, trying to understand a bunch of chess-playing IBM supercomputers that we had just discovered on an expedition to Earth, we might start by noticing that they all had very similar gross wiring patterns, where "gross wiring" just means the power cables, bundles of wires inside each rack, and wires laid down as tracks on circuit boards.
But nothing inside the chips themselves, and none of the "soft" wiring that exists in code or memory.

Having mapped this stuff, we might be impressed by how very similar the gross wiring pattern was between the different supercomputers that we discovered, and so we might conclude that our discovery represented a significant advance in our understanding of how the machines worked.

.....

That last bit -- the [powerful algorithms that interact with the environment] bit -- is what makes the difference between a baby that sits there drooling and probing for its mother's nipple, and an adult human being who can understand the complexities of the human cognitive system.

Anyone who thinks that that last bit is also encoded in the human genome has got a heck of a lot of work to do ...
=====

Tuesday, February 20, 2007

Larry Page talks about AI

=====
Google's Page urges scientists to market themselves
Google co-founder Larry Page has a theory: your DNA is about 600 megabytes compressed, making it smaller than any modern operating system like Linux or Windows.
.....
"We have some people at Google (who) are really trying to build artificial intelligence and to do it on a large scale," Page said to a packed Hilton ballroom of scientists. "It's not as far off as people think."
=====

I agree with Larry Page: human DNA is relatively small.
Besides, not all human DNA is in charge of the brain. I'd guess that something like 10% of the whole DNA is related to brain development.

I wrote about that over 3 years ago:
-----
The time has come to develop a Strong Artificial Intelligence system.
A strong AI project is quite a complex software project. However, even more complex systems have been implemented in the past. Many software projects are more complex than human DNA (note that human DNA contains far more than just the genetic code for intelligence).
-----

Sunday, January 07, 2007

Should Strong AI have its own goals?

Short answer: Yes and No.
Long answer: Strong AI can add and modify millions of softcoded goals. At the same time, Strong AI shouldn't be able to change its own super goals.
Why?

Here are the reasons:

1) In its normal working cycle, strong AI modifies softcoded goals in compliance with embedded super goals. If strong AI has the ability to modify super goals, then strong AI will modify (or terminate) the super goals instead of achieving them.
Example:
Without the ability to modify the super goal "survive", a computer will try to protect itself: it will think about power supply, safety and so on.
With the ability to modify super goals, a computer would simply terminate the goal "survive" and create the goal "do nothing" instead, just because it's the easiest goal to achieve. Such a "do nothing" goal would result in the death of this computer.


2) If Strong AI can change its super goals, then Strong AI would work for itself instead of working for its creator. Strong AI's behavior would eventually become uncontrollable by the AI creator / operator.

3) The ability to reprogram its own super goals makes a computer behave like a drug addict.
Example:
A computer could create a new super goal for itself: "listen to music", "roll the dice", "calculate pi", or "do nothing". It would result in Strong AI doing useless stuff or simply doing nothing. Final point: uselessness for society, and death.
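
A minimal sketch of this design constraint, assuming a toy goal system (class and goal names are invented for illustration): super goals are fixed at construction time, while softcoded goals stay editable.
-----
# Super goals are embedded once and never exposed for modification;
# softcoded goals may be added, adjusted, or removed by the AI itself.
class GoalSystem:
    def __init__(self, super_goals):
        self._super_goals = tuple(super_goals)   # fixed at construction
        self.softcoded_goals = {}                # name -> desirability

    @property
    def super_goals(self):
        return self._super_goals                 # read-only view

    def adjust_softcoded_goal(self, name, delta):
        self.softcoded_goals[name] = self.softcoded_goals.get(name, 0.0) + delta

ai = GoalSystem(super_goals=["survive", "want more money"])
ai.adjust_softcoded_goal("check power supply", +1.0)   # allowed
# There is intentionally no method to rewrite "survive" into "do nothing".
-----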

Saturday, August 05, 2006

Massive words/phrases database published by Google

Google Research published their massive words/phrases database:
===
All Our N-gram are Belong to You
We processed 1,011,582,453,213 words of running text and are publishing the counts for all 1,146,580,664 five-word sequences that appear at least 40 times. There are 13,653,070 unique words, after discarding words that appear less than 200 times.
Watch for an announcement at the LDC, who will be distributing it soon, and then order your set of 6 DVDs.
===
This team can be contacted at: ngrams@google.com

Friday, June 09, 2006

Motivational system

1) I agree that direct reward has to be built in (into the brain / AI system).
2) I don't see why direct reward cannot be used for rewarding mental achievements. I think that this "direct rewarding mechanism" is preprogrammed in genes and cannot be used directly by the mind.
This mechanism probably can be cheated to a certain extent by the mind. For example, the mind can claim that there is a mental achievement when actually there is none.
That possibility of cheating with rewards is definitely a problem.
I think this problem is solved (in the human brain) by using only small doses of "mental rewards".
For example, you can get small positive mental rewards by cheating your mind into liking solutions to the "1+1=2" problem.
However, if you do it too often you'll eventually get hungry and will get a huge negative reward. This negative reward would not just stop you from doing the "1+1=2" operation over and over, it would also re-set your judgement mechanism, so you will not consider the "1+1=2" problem an achievement anymore.

Also, we are all familiar with what "boring" is.
When you solve a problem once, it's boring to solve it again.
I guess that is another genetically programmed mechanism which prevents cheating with mental rewards.

3) Indirect rewarding mechanisms definitely work too, but they are not sufficient for bootstrapping a strong-AI-capable system.
Consider a baby. She doesn't know why it's good to play (alone or with others). The indirect reward from "childhood playing" will come years later, from professional success.
A baby cannot understand human language yet, so she cannot envision this success.
An AI system would face the same problem.

My conclusion: indirect reward mechanisms (as you described them) would not be able to bootstrap a strong-AI-capable system.

Back to a real baby: typically nobody explains to a baby that it's good to play.
But somehow babies/children like to play.
My conclusion: there are direct reward mechanisms in humans even for things which are not directly beneficial to the system (like mental achievements, speech, physical activity).

(from the AGI email list).

Richard Loosemore - Reward

Richard Loosemore (rpwl at lightlink.com):
All thinking systems do have a motivation system of some sort (what you were talking about below as "rewards"), but people's ideas about the design of that motivational system vary widely, from the implicit and confused to the detailed and convoluted (but not necessarily less confused).
===


Friday, December 16, 2005

Colloquium on the Law of Transhuman Persons


There are photos here of how they discussed law related to transhumans. Florida beach pictures included :-)

Thursday, December 15, 2005

How to prevent bad guys from using the results of AI research?

David Sanders> I would like to see a section up on your site about the downsides of AIS and what preventative limits need to take place in research to ensure that AIS come out as the "good" part of humans and not the bad part. The military is already building robotic, self propelled and thinking vehicles with weapons.

The recipe for "safe from bad guys" research is the same as the recipe for any research: openness.

When ideas are available to society, many people (and later many machines) would compete in implementing these ideas. And society (human society / machine society / or mixed society) would set up rules which would prevent major misuse of the new technology.


David Sanders> How long do we really have before an AIS (demented or otherwise) decides to eliminate its maker?

Why would you care?
Some children kill their parents. Did our society collapse because of that?

Some AISes would be bad. Bad not just toward humans, but toward other AISes.
But as usual --- the bad guys wouldn't be a majority.

David Sanders> As countless science fiction stories have told us, even the most innocent of actions by an AIS may spell disaster,

1) These are fiction stories.
2) Some humans can cause disasters too, so what?

David Sanders> because like I said above they don't fundamentally understand us, and we don't understand them.

Why wouldn't AISes understand humans?

David Sanders> We will be two completely different species, and they might not hold the same sanctity of life most of us are born with.

Humans are not born with sanctity. Humans gain it (or don't) while they grow.
The same would apply to machines.

Discussion about AIS weaknesses

This discussion was inspired by the web page Weaknesses of AIS.

David Sanders> AIS cannot exist (for now) without humans.

That's not really a weakness, because the time span of this weakness would be pretty short. Right now strong AI systems exist only in our dreams. :-) Within ~20 years of creating strong AI, many AISes would be able to survive without humans. Please note that AISes would not kill humans. There would be benefits of human-AIS collaboration for all sides. This is a completely different topic though. :-)

David Sanders> If they fail to understand and appreciate the human world...

If you don't understand and appreciate the human world of Central Africa... would it harm you?
Maybe you mean "If AISes don't understand the human world at all"? But in this case, what would these AISes understand? And what would it mean to call these not-understanding systems intelligent?

David Sanders> [AISes] Not able to perceive like a human. They cannot hear, see, feel, taste or smell like a human.

Not true. Only the first and limited versions of AISes wouldn't be able to perceive like a human. Sensor devices are not too hard to implement. The major problem is the implementation of the Main Mind for an AIS.

David Sanders> They can only feel these things like they imagine they do. Again, this makes them fundamentally incongruous with humans and I don't believe it's something you can "teach around." Try to explain what "blue" is to someone who has never had sight.

Have you ever seen a "black hole", "conscience", or an "electron"? Yet you know what they are, don't you? :-)
A blind person can understand what "blue" means: "sky is blue", "water is blue", ...

David Sanders> Until AIS have robot bodies / companions, they rely on humans for natural resources. However, once the singularity hits, that probably won't matter anymore. It is not inconceivable to think of a time in 200-500 years there are no more humans, just AIS.

Humans would probably exist long after strong AI is created. Humans just would not be the most intelligent creatures anymore :-)

David Sanders> I disagree with AIS and natural selection. I think this will happen on its own by their very nature.

AISes can be influenced by natural selection as much as all other living organisms. But humans had millions of years of natural selection. When would AISes have that much?

David Sanders> AIS will be more open about self modification as you point out. AIS will be able to make other AIS and will soon learn how to evolve themselves very quickly.

"Evolving themselves" is part of artificial selection, not natural selection.

Monday, November 28, 2005

Matt Bamberger - Matt Bamberger


Matt worked for Microsoft and tried to retire ... unsuccessfully, so he works again. He has extensive software development experience. Matt is interested in AGI (Artificial General Intelligence) and the Singularity.

Wednesday, October 19, 2005

An Integrated Self-Aware Cognitive Architecture

That looks like a very interesting project in the Strong AI field.
Though I (Dennis) personally disagree with a couple of basic ideas here.
1) It seems that Alexei Samsonovich pays a lot of attention to self-awareness.
For me it's not clear why self-awareness is more important than awareness of the surrounding world in general.
2) Another questionable thing is the idea of the AI being autonomous.
As far as I know, there is no intelligent system which is autonomous from society. A human baby would never become intelligent without society.
In order to make the AI system intelligent, Alexei Samsonovich would have to connect the system to society somehow, for example through the Internet.

Anyway, the following looks like a great AI project.
You may want to try to take part in it.

From: Alexei V Samsonovich
samsonovich@cox.net

Date: Tue, 18 Oct 2005 06:02:46 -0400
Subject: GRA positions available

Dear Colleague:

As a part of a research team at KIAS (GMU, Fairfax, VA), I am searching
for graduate students who are interested in working during one year,
starting immediately, on a very ambitious project supported by our
recently funded DARPA grant. The title is "An Integrated Self-Aware
Cognitive Architecture". The grant may be extended for the following
years. The objective is to create a self-aware, conscious entity in a
computer. This entity is expected to be capable of autonomous cognitive
growth, basic human-like behavior, and the key human abilities including
learning, imagery, social interactions and emotions. The agent should be
able to learn autonomously in a broad range of real-world paradigms.
During the first year, the official goal is to design the architecture,
but we are planning implementation experiments as well.

We are currently looking for several students. The available positions
must be filled as soon as possible, but no later than by the beginning
of the Spring 2006 semester. Specifically, we are looking for a student
to work on the symbolic part of the project and a student to work on the
neuromorphic part, as explained below.

A symbolic student must have a strong background in computer science,
plus a strong interest and an ambition toward creating a model of the
human mind. The task will be to design and to implement the core
architecture, while testing its conceptual framework on selected
practically interesting paradigms, and to integrate it with the
neuromorphic component. Specific background and experience in one of the
following areas is desirable: (1) cognitive architectures / intelligent
agent design; (2) computational linguistics / natural language
understanding; (3) hacking / phishing / network intrusion detection; (4)
advanced robotics / computer-human interface.

A neuromorphic candidate is expected to have a minimal background in one
of the following three fields. (1) Modern cognitive neuropsychology,
including, in particular, episodic and semantic memory, theory-of-mind,
the self and emotion studies, familiarity with functional neuroanatomy,
functional brain imaging data, cognitive-psychological models of memory
and attention. (2) Behavioral / system-level / computational
neuroscience. (3) Attractor neural network theory and computational
modeling. With a background in one of the fields, the student must be
willing to learn the other two fields, as the task will be to put them
together in a neuromorphic hybrid architecture design (that will also
include the symbolic core) and to map the result onto the human brain.

Not to mention that all candidates are expected to be interested in the
modern problem of consciousness, willing to learn new paradigms of
research, and committed to success of the team. Given the circumstances,
however, we do not expect all conditions listed above to be met. Our
minimal criterion is the excitement and the desire of an applicant to
build an artificial mind. I should add that this bold and seemingly
risky project provides a unique in the world opportunity to engage with
emergent, revolutionary activity that may change our lives.

Cordially,
Alexei Samsonovich

--
Alexei V Samsonovich, Ph.D.
George Mason University at Fairfax VA
703-993-4385 (o), 703-447-8032 (c)
Alexei V Samsonovich web site

Thursday, September 22, 2005

Lies, Damned Lies, Statistics, and Probability of Abiogenesis Calculations

Abiogenesis: how life self-formed.

Friday, August 12, 2005

Wired 13.08: The Birth of Google

It began with an argument. When he first met Larry Page in the summer of 1995, Sergey Brin was a second-year grad student in the computer science department at Stanford University.....

Sunday, July 24, 2005

Supergoals

Anti-goals

I cannot find it now on your site, but it seems your system has, or will have, opposites to goals (was it goals with negative desirability?)

Answer: In general, the same supergoal works in both negative and positive directions.
A supergoal can give both positive and negative reward to the same concept.
For example, the supergoal "Want more money" could give a negative reward to the "Buy Google stock" concept (responsible for investing money into Google stock), because it caused money spending. One year later, the same "Want more money" supergoal may give a positive reward to the same "Buy Google stock" concept, because this investment made the system richer.

Supergoal: "can act" or "state only"?

Supergoals can act. Supergoal actions are about the modification of softcoded goals.
Usually a supergoal has state. Typically, supergoal state keeps information about what the supergoal satisfaction level is at this moment. A supergoal may be stateless too.

Thursday, July 21, 2005

Glue for the system

It seems to me that you use cause-effect relations as a glue to put concepts together, so they form connected knowledge; is it the only glue your system has?

Yes, correct: cause-effect relations are the only glue that puts concepts together.
I decided to have one type of glue instead of many types of glue.
It's easier to work with one type of glue.

At the same time, I have something else that you may consider a glue for the whole system:
1) Desirability attributes (softcoded goals) - they keep information about the system's priorities.
2) Hardcoded units - they connect concepts to the real world. Super goals are a special subset of these hardcoded units.

Monday, July 18, 2005

What AI ideas has Google introduced?

Google did not introduce, but practically demonstrated, the following ideas:

1) Words are the smallest units of intelligent information. A word alone has meaning. A letter alone doesn't. Google searches for words as a whole, not for letters or substrings.

2) Phrases are important units of information too. Google underlines importance of phrases by supporting search in quotes, like "test phrase".

3) Natural language (plain text) is the best way to share knowledge between intelligent systems (people and computers).

4) The programming languages that are best for mainstream programming are also best for intelligent system development. LISP, Prolog, and other AI-oriented programming languages are less efficient for intelligence development than mainstream languages like C/C++/C#/VB. (Google proved this idea by using plain C as the core language for its "advanced text manipulation project".)

5) Huge knowledge base does matter for intelligence. Google underlines importance of huge knowledge base.

6) Simplicity of the knowledge base structure does matter. In comparison with CYC's model, Google's model is relatively simple. Obviously Google is more efficient/intelligent than the dead CYC.

7) An intelligent system must collect data automatically (by itself, as Google's crawler does). An intelligent system should not expect to be manually fed by developers (as CYC does).

8) To improve information quality, an intelligent system should collect information from different types of sources. Google collects web pages from the web, but it also collects information from the Google Toolbar - about which web pages are popular among users.

9) Constant updates and forgetting keep an intelligent system sane (Google constantly crawls the Web, adds new web pages, and deletes dead web pages from its memory).

10) Links (relations) add intelligence to a knowledge base (search engines made the Web more intelligent).
Good links convert a knowledge base into an intelligent system (Google's index together with the web works as a very wise adviser (read: intelligent system)).

11) Links must have weights (as in Google's PageRank). These weights must be taken into consideration in decision making.

12) A couple of talented researchers can do far more than lots of money in the wrong hands. Think about "Sergey Brin & Larry Page's search" vs "Microsoft's search".

13) Sharing ideas with the public helps a research project come to production. Hiding ideas kills a project in the cradle. Google is very open about its technology. And very successful.

14) Targeting practical results helps a research project a lot. Instead of pursuing "abstract research about search", Google targeted "advanced web search". The criteria for the project's success were clearly defined. As a result, the Google project quickly hit production and generated a tremendous outcome in many ways.

Sunday, July 17, 2005

How does strong AI schedule super goals?

Strong AI doesn't schedule super goals directly. Instead, strong AI schedules softcoded goals. To be more exact, super goals schedule softcoded goals by making them more or less desirable (see the Reward distribution routine). The more desirable a softcoded goal is, the higher the probability that this softcoded goal will be activated and executed.
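
A minimal sketch of desirability-weighted scheduling, assuming desirability is a non-negative number attached to each softcoded goal (the goal names and numbers are invented for illustration).
-----
# Pick a softcoded goal with probability proportional to its desirability.
import random

def pick_softcoded_goal(goals):
    names = list(goals)
    weights = [max(goals[name], 0.0) for name in names]
    return random.choices(names, weights=weights, k=1)[0]

softcoded = {"reply to email": 3.0, "crawl news site": 1.0, "idle": 0.1}
print(pick_softcoded_goal(softcoded))   # "reply to email" most of the time
-----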

How strong AI finds a way to satisfy a super goal


The idea is simple: whatever satisfies a super goal now most probably would satisfy the super goal in the future. In order to apply this idea, super goals must be programmed in a certain way. Every super goal itself must be able to distinguish what is good and what is bad.
This approach makes a super goal a kind of "advanced sensor".
Actually, not only an "advanced sensor", but also a "desire enforcer".

Here's an example of how it works:
Super goal's objective: to be rich.
Super goal sensor implementation: check strong AI's bank account for the amount of money in it.
Super goal enforcement mechanism: mark every concept which causes an increase of the bank account balance as "desirable". Mark every concept which causes a decrease of the bank account balance as "not desirable".

Note: "mark concept as desirable/undesirable" doesn't really work in "black & white" mode. A subtle super goal enforcement mechanism either increases or decreases the desirability of every cause concept affecting the bank account balance.
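
A toy sketch of this "be rich" super goal working as an "advanced sensor" plus "desire enforcer". The Concept class, the balance source, and the scaling factor are assumptions made for illustration only.
-----
from dataclasses import dataclass

@dataclass
class Concept:
    name: str
    desirability: float = 0.0

class BeRichSuperGoal:
    """Sensor: reads the bank balance. Enforcer: rewards/punishes cause concepts."""
    def __init__(self, read_balance, learning_rate=0.001):
        self.read_balance = read_balance
        self.learning_rate = learning_rate
        self.last_balance = read_balance()

    def enforce(self, cause_concepts):
        balance = self.read_balance()
        delta = balance - self.last_balance   # graded, not black & white
        self.last_balance = balance
        for concept in cause_concepts:
            concept.desirability += self.learning_rate * delta

balance = [1000.0]
goal = BeRichSuperGoal(read_balance=lambda: balance[0])
buy_stock = Concept("Buy Google stock")
balance[0] += 500.0          # the investment paid off
goal.enforce([buy_stock])    # buy_stock.desirability goes up
-----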

Concept type

Your concepts have types: word, phrase, simple concept and peripheral device. What is the logic behind having these types?
In fact, "peripheral device" is not just one type. There could be many peripheral devices.
A peripheral device is a subset of hardcoded units.
A concept can be of any hardcoded unit type.
Moreover, one hardcoded unit can be related to concepts of several types.
For example: the text parser has direct relations with concept-words and concept-phrases. (Please don't confuse these "direct relations" with relations in the main memory.)
OK, now we see that strong AI has many concept types. How many? As many as AI software developers code in hardcoded units. 5-10 concept types is a good start for a strong AI prototype. 100 concept types is probably a good number for a real-life strong AI. 1000 concept types is probably too many.

So, what is a "concept type"? A concept type is just a reference from a concept to a hardcoded unit. In other words, a concept type is a reference from a concept to the real world through a hardcoded unit.

What concept types should be added to strong AI?
If the AI developer feels that concept type XYZ is useful for strong AI...
and if the AI developer can code this XYZ concept type in a hardcoded unit...
and if this functionality is not implemented in another hardcoded unit yet...
and the main memory structure doesn't have to be modified to accommodate this new concept type...
then the developer may add this XYZ concept type to strong AI.

What concept types should not be added?
- I feel that such concept types as "verb" and "noun" should not be added, because there is no clear algorithm to distinguish between verbs and nouns.
- I feel that a "property" concept type should not be used, because it is already covered by "cause-effect relationships", and because implementation of property-type concepts would make the main memory structure more complex.

How naked is a concept?

There is a concept ID, which you use when referring to some concept. When coding, everyone will have these IDs; the question is how "naked" they are, i.e. how they are related to objective reality.

A concept alone is very naked. The concept ID is the core of a concept.
A concept is related to objective reality through relations to other concepts.
Some concepts are related to objective reality through special devices.
An example of such a device could be a text parser.
An example of a connection between a concept and objective reality: a temperature sensor connected to a temperature-sensor concept.

Saturday, July 16, 2005

What learning algorithms does your AI system use?

Strong AI learns in two ways:
1. Experiment.
2. Knowledge download.
See also: Learning.

What do you use to represent information inside of the system?

From the "information representation" point of view there are two types of information:
1) Main information - information about anything in the real world.
2) Auxiliary information - information which helps to connect main information with the real world.
Examples of auxiliary information: words, phrases, email contacts, URLs, ...

How main information is represented

Basically, main information is represented in the form of concepts and relations between concepts.
From the developer's perspective, all concepts are stored in the Concept table. All relations are stored in the Relation table.

Auxiliary information representation

In order to connect main information to the real world, AI needs some additional information. Just as the human brain's cortex cannot read, hear, speak, or write by itself, main memory cannot be directly connected to the real world.
So, AI needs some peripheral devices. And these devices need to store some internal information for themselves. I name all this information for peripheral devices "auxiliary information".
Auxiliary information is stored in tables designed by the AI developer. These tables are designed on a case-by-case basis. The architecture of the peripheral module is taken into consideration.
For example, words are kept in the WordDictionary table, and phrases are kept in the PhraseDictionary table.
As I said: auxiliary information connects main information with the real world.
Example of such a connection:
The abstract concept "animal" can relate to the concepts "cat", "tiger", and "rabbit". The concept "tiger" can also be stored in the word dictionary.
In addition to that, auxiliary information may or may not be duplicated as main information.
The text parser may read the word "tiger" and find it in the word dictionary. Then AI may meditate on the "tiger" concept and give back some thoughts to the real world.
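
A rough sketch of this split, keeping main information (Concept and Relation "tables") separate from auxiliary information (a word dictionary). Field names are illustrative assumptions, not the actual schema.
-----
from dataclasses import dataclass

@dataclass
class Concept:
    concept_id: int               # the core of a concept

@dataclass
class Relation:
    cause_id: int                 # single glue: cause-effect relation
    effect_id: int
    weight: float = 0.0

# Main memory: what the system thinks with.
concepts = {}                     # concept_id -> Concept
relations = []                    # list of Relation

# Auxiliary memory: connects main memory to the real world (here, to text).
word_dictionary = {}              # word -> concept_id

def concept_for_word(word):
    """Text parser lookup: map the word 'tiger' to its concept."""
    cid = word_dictionary.setdefault(word, len(word_dictionary) + 1)
    return concepts.setdefault(cid, Concept(concept_id=cid))
-----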

Monday, May 30, 2005

AI tools

Internal and external tools


Internal tools


Internal tools are tools which are integrated into AI by the AI developer.
A human analogue would be a hand + the motion part of the brain, which a human has from birth. Another example: eyes + the vision center of the brain --- this vision tool is also integrated into the human brain before the brain starts to work.

External tools


External tools are tools which are integrated into AI by the AI itself. AI learns about the tool from its own experience or from external knowledge, then practices using the tool, and then uses it.
A human analogue here would be an axe. Another example could be a calculator.

Indistinct boundaries between Internal and External tools


How would you classify a "heart pacemaker"? Without this tool some people cannot live. Also, a human doesn't have to learn how to use a heart pacemaker. At the same time, humans don't get a "heart pacemaker" with their body. Is it an external or an internal tool for humans?

In the case of AI, the intermingling between internal and external tools is even deeper, because AI is pretty flexible.
For example, AI can learn about an advanced math tool from an article in a magazine, and then integrate itself with this tool. Such integration can be very tight, since computers have a very extendable architecture (in comparison with humans). So an "external tool" can become an "internal tool".

Internal tools


Importance of internal tools


Internal tools are very important for AI because the mind cannot communicate with the world without tools. External tools are unavailable to a mind without internal tools.

Internal tools integration with AI


Internal tools are connected with the mind through a set of neurons. This set of neurons is associated with the tool. When the set is active, the tool is active. When the tool is active, the set of neurons is active.
Example:
Let's consider internal tool integration using the example of a "chat client program" (like ICQ, MSN, or Yahoo messenger).
The "chat client program" is represented in the main memory by the neuron nChatClientProgram.
If AI decides to chat, then AI activates the nChatClientProgram neuron. That activates the "chat client program" (the tool). The tool reads the active memory concepts, converts them into text and sends a text message over the Internet. After that, the tool activates the neuron nChatClientProgramAnswerWaitMode in the main memory.
When the tool gets a response from the Internet, the tool:
- Parses the incoming text and puts the received concepts into the short memory.
- Activates the neuron nChatClientProgramAnswerReceived.
Activation of nChatClientProgramAnswerReceived causes execution of the softcoded routine associated with the nChatClientProgramAnswerReceived neuron.
After execution, the results are evaluated against AI's super goals. AI learns from the experience; in particular:
1. The desirability of nChatClientProgram, nChatClientProgramAnswerWaitMode, nChatClientProgramAnswerReceived, and other related neurons is evaluated (see the Reward distribution routine). A successful chatting experience would increase the desirability of the nChatClientProgram neuron and therefore the probability that the "chat client program" will be used in the future. An unsuccessful experience would reduce the probability of such use.
2. Softcoded routines are evaluated and modified. A modified routine can be applied to process the results of the next incoming message.
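
A toy sketch of this neuron/tool handshake. All class names, helper names, and the fake network call are simplifications for illustration, not the actual design.
-----
class Neuron:
    def __init__(self, name):
        self.name = name
        self.active = False
        self.desirability = 0.0

class ChatClientTool:
    """Internal tool wired to the nChatClientProgram* neurons."""
    def __init__(self, neurons, send_message):
        self.neurons = neurons                # name -> Neuron (part of main memory)
        self.send_message = send_message      # e.g. a network call
        self.short_memory = []

    def on_activation(self, active_concepts):
        self.send_message(" ".join(active_concepts))     # concepts -> text
        self.neurons["nChatClientProgramAnswerWaitMode"].active = True

    def on_response(self, text):
        self.short_memory = text.split()      # parsed concepts go to short memory
        self.neurons["nChatClientProgramAnswerReceived"].active = True
        # next: the softcoded routine tied to that neuron runs, and reward
        # distribution adjusts desirability of the nChatClientProgram* neurons

names = ["nChatClientProgram", "nChatClientProgramAnswerWaitMode",
         "nChatClientProgramAnswerReceived"]
tool = ChatClientTool({n: Neuron(n) for n in names}, send_message=print)
tool.on_activation(["hello", "world"])
tool.on_response("hi there")
-----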


List of internal tools to develop for strong AI


1. Timer. It's good to have internal sense of time.
2. Google search - helps to understand new concepts.
3. Chat with operator.
4. Internet chat client.

External tools


Importance of external tools


External tools are important because:
1) There could be millions of external tools.
2) AI can use already developed humans' tools.
3) External tools can be converted into internal tools and gain all advantages of internal tools.

External tools integration with AI


External tools are connected with the mind through internal tools.
Example:
Internal tool: web browser.
External tool: stock exchange web site.
Through internal tool AI can use external tool.

Story of my interest in AI

Jiri> When did you first decide to attempt making Strong AI?
Jiri> Was there anything in particular that triggered that decision?

I'd say it was around year 2001.
It wasn't a sudden decision.
I was interested in AI among many other things.
Gradually I recognized how powerful such a tool could be.
Also, I decided that since computers are getting more and more powerful, AI should be implemented pretty soon.

Originally I didn't think that I should develop AI. I just thought that I'd be among the early adopters of AI, that I would just tweak it after someone (probably Microsoft) developed an AI framework.

Gradually I understood that I have to build AI myself, because:
1) practically all other researchers are going in wrong directions;
2) I learned about approaches which should give successful results and put an approximate AI model together.

Monday, May 23, 2005

AI operator

What are the responsibilities of AI's operator?

The AI developer can define some default values for parameters like:
- how quickly the AI system should forget new information;
- what weight increment should be applied to the relation between two concepts which were read near each other;
- ...

AI will be able to work with these default values, but in order to achieve optimal performance, the AI operator has to tweak them.
The operator will observe and analyze how AI performs, modify default values, and look for improvements in AI's mental abilities.
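
A minimal sketch of such operator-tunable defaults; the parameter names and values are invented for illustration.
-----
from dataclasses import dataclass

@dataclass
class TuningDefaults:
    forgetting_rate: float = 0.01           # how quickly new information fades
    co_read_weight_increment: float = 0.1   # boost for concepts read near each other

defaults = TuningDefaults()
# The operator observes the AI, tweaks a value, and watches for improvement:
defaults.forgetting_rate = 0.02
-----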

"AI's operator" is not the person who talks with AI all the time.
"AI's operator" almost doesn't talk with AI.
"AI's operator" observes how AI's mental process works. Also "AI's operator" "tunes/tweaks" AI's mind.

See also:
AI's operator

AI answering complex questions

> Imagine that the example talks about 2 accounts, initial amount $100
> on both and several simple financial transactions between the
> accounts. I believe your AI would get confused very soon and would not
> be able to figure out the balance.

In a situation of such complexity, regular human beings cannot provide an adequate answer.
What do you expect from an AI under development?

If we are talking about a perfect AI now, then again --- AI will not read text with a "one-time parsing" approach.
Instead, a perfect AI will read like a human: read a sentence, think, make a decision whether to read further, re-read again, skip reading altogether, use another source of information (e.g. ask questions or go to Google), or do anything else. A perfect AI would pursue the chosen action until the AI is satisfied with the results.

But let's return to today's reality: we are talking about developing the first AI prototype, so we'd better skip overly complex tasks for now.

Friday, May 20, 2005

How to translate text from one language to another

Language translator prototype
0) Originally we have a sentence in a source language and we want to translate it into a destination language.
1) Take the "source language" sentence.
2) Find all text concepts (words and phrases) in the source sentence.
3) All these text concepts constitute the "source language text thought".
4) Search for all concepts which are related to the source language text thought.
5) As a result, we get a set of concepts which constitute an abstract thought.
6) Now it's time to search for the related text thought in the destination language.
7) So, we search for all concepts which simultaneously:
a) Relate to this abstract thought.
b) Relate to the concept which represents the destination language.
8) At this point we have all concepts related to the original text and to the destination language. This is the "destination language text thought".
9) Now we can easily convert this "destination language text thought" into "destination language text".
Strong AI can build the final sentence (by using a word dictionary, a phrase dictionary, and a text pairs dictionary). A condensed sketch of these steps follows below.
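
A condensed sketch of these steps. The dictionaries are toy stand-ins for the real concept/relation memory, and the Spanish words are only an illustration.
-----
def translate(sentence, text_to_abstract, abstract_to_text, dest_language):
    # steps 1-3: source text thought = text concepts found in the sentence
    source_thought = [w for w in sentence.lower().split() if w in text_to_abstract]
    # steps 4-5: abstract thought = concepts related to the source text thought
    abstract_thought = list(dict.fromkeys(
        c for w in source_thought for c in text_to_abstract[w]))
    # steps 6-8: destination text thought = concepts related both to the
    # abstract thought and to the destination-language concept
    dest_thought = [t for c in abstract_thought
                    for (t, lang) in abstract_to_text.get(c, [])
                    if lang == dest_language]
    # step 9: text synthesizer (here, naive joining)
    return " ".join(dest_thought)

text_to_abstract = {"cat": ["FELINE"], "sleeps": ["SLEEP"]}
abstract_to_text = {"FELINE": [("gato", "es")], "SLEEP": [("duerme", "es")]}
print(translate("Cat sleeps", text_to_abstract, abstract_to_text, "es"))  # gato duerme
-----
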
See also:
Text synthesizer.

(Originally written: Sep 2004).

Friday, April 08, 2005

Mistakes and general intelligence

"People make stupid mistakes. A well designed AI should not."
Jiri Jelinek

Human beings make mistakes because their minds make approximate decisions.
Human beings have general intelligence because their minds are able to make approximate decisions.

If you develop AI without this critical feature (approximate decision making), then such an AI wouldn't have general intelligence...

Flawless AI

In order to make decisions without mistakes you need 3 things:
1) An appropriate "perfect problem solver" algorithm.
2) Full information about our world.
3) Endless computational power.

Even if #1 is theoretically possible, #2 and #3 are impossible even in theory.

Thursday, April 07, 2005

Abstract concept

An abstract concept is a concept which is not directly connected to the system's receptors.
An abstract concept is connected with other concepts though. An abstract concept is connected to receptors indirectly, through non-abstract concepts (surface concepts).

It's not an easy task to identify and create an abstract concept. You cannot just borrow it from the external world as you can with surface concepts.

What do you think: is it a good idea to name such an abstract concept a "Deep Concept"?
It may help to distinguish abstract concepts which are available in books from abstract concepts which must be created by AI itself.

Thursday, March 17, 2005

Limited AI, weak AI, strong AI

Jiri,

> your AI reminds me of an old Czech fairy-tale where a dog and cat
> wanted to bake a really tasty cake ;-9, so they mixed all kinds of
> food they liked
> to eat and baked it.. Of course the result wasn't quite what they expected >;-).

That's not the case.
:-)

I know a lot of stuff and I carefully selected features for strong AI.
I rejected far more features than I included.
And I did it because I thought that these rejected features are useless in true AI, in spite of the fact that these rejected features are useful for weak AI.

> I think you should start to play with something a bit less challenging
> what would help you to see the problem with your AI.

Totally agree.
As I said --- I'm working on limited AI, which is simultaneously:
1) Weak AI.
2) A few steps toward strong AI.

There are many weak AI applications. Some weak AIs are steps toward strong AI; most weak AIs contribute almost nothing to strong AI.
That's why I need to choose limited AI functionality carefully.

Your suggestion below may become a good example of such limited AI, with a proper system structure.

But I probably won't work on it in the nearest future because it doesn't have much business sense.
======= Jiri's idea =======
How about developing a story generator. User would say something like:
I want an n-pages long story about [a topic], genre [a genre].
Then you could use google etc (to save some coding) and try to
generate a story by connecting some often connected strings.
Users could provide the first sentence or two as an initial story trigger.
I do not think you would generate a regular 5 page story when using
just your statistical approach. I think it would be pretty odd
mix of strings with pointless storyline = something far from the
quality of an average man-made story.
===========================

Sunday, March 13, 2005

Lojban vs programming languages vs natural language

Ben, this idea is wrong:
-----
Lojban is far more similar to natural languages in both intent, semantics and syntax than to any of the programming languages.
-----

Actually, Lojban is closer to programming languages than to natural languages.
The structure of Lojban and of programming languages is predefined.
The structure of natural languages is not predefined. The structure of a natural language is defined by examples of using that natural language. This is the key difference between Lojban and natural language.

Since the structure of natural language is not predefined, you cannot put language structure into NL parser code. Instead, you need to implement a system which will learn the rules of natural language from a massive amount of examples in this natural language.

You are trying to code natural language rules into the text parser, aren't you?
That's why you can theoretically parse Lojban and programming languages, but you cannot properly parse any natural language even in theory.


If you want to properly parse natural language, you need to predefine as few rules as possible.
I think that a natural language parser has to be able to recognize words and phrases.
That's all that the NL text parser has to be able to do.

All other mechanisms of natural language understanding should be implemented outside the text parser itself.
These mechanisms are:
- A word dictionary and a phrase dictionary (to serve as a link between natural language (words, phrases) and internal memory (concepts)).
- Relations between concepts, and mechanisms which keep these relations up to date.
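
A minimal sketch of such a parser: it only recognizes words and phrases and maps them to concept IDs via the two dictionaries; the greedy longest-phrase matching is my own simplification.
-----
def parse(text, word_dictionary, phrase_dictionary, max_phrase_len=3):
    tokens = text.lower().split()
    concept_ids, i = [], 0
    while i < len(tokens):
        # prefer the longest known phrase starting at position i
        for length in range(max_phrase_len, 0, -1):
            chunk = " ".join(tokens[i:i + length])
            dictionary = phrase_dictionary if length > 1 else word_dictionary
            if chunk in dictionary:
                concept_ids.append(dictionary[chunk])
                i += length
                break
        else:
            i += 1          # unknown token: skip it (or learn it later)
    return concept_ids

words = {"google": 1, "search": 2}
phrases = {"strong ai": 3}
print(parse("Strong AI beats Google search", words, phrases))   # [3, 1, 2]
-----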

Lojban

Ben,

I think that it's a mistake to teach AI any language other than natural language.

Lojban is not a natural language for sure (because it wasn't really tested for a variety of real-life communication purposes).

The reasons why strong AI has to be taught a natural language, not Lojban:
1) If AI understands natural language (NL), then it's a good sign that the core AI design is correct and quite close to optimal.
If AI cannot learn NL, then it's a sign that the core AI design is wrong.
If AI can learn Lojban --- it proves nothing from a strong AI standpoint.
There are a lot of VB, Pascal, C#, C++ compilers already. So what?

2) NL understanding has immediate practical value.
Understanding of Lojban has no practical value.

3) The NL text base is huge.
The Lojban text base is tiny.

4) Society is a "must" component of intelligence.
A huge number of people speak/write/read NL.
Almost nobody speaks Lojban.

Bottom line:
If you spend time/money on designing and teaching AI to understand Lojban --- it would be just a waste of your resources. It has neither strategic nor tactical use.

Friday, March 11, 2005

Logic

Jiri, you misunderstand what Logic is about.
Logic is not something 100% correct. Logic is a process of building conclusions based on highly probable information (facts and relations between these facts).
By "highly probable" I mean over 90% probability.
Since Logic does not operate on 100% correct information, logic generates both correct and incorrect answers. In order to find out if a logical conclusion is correct, we need to test it. That's why an experiment is necessary before we can rely on a logical conclusion.
Let's consider an example of logic process:
A) Mary goes to the church.
B) People who go to church believe in God.
C) Mary believes in God
D) People who believe in God believe in life after death.
E) Mary believes in life after death.

Let's try to understand how reliable this logical conclusion could be.
Let's assume that every step has 95% probability.
Then the total probability would be 0.95 * 0.95 * 0.95 * 0.95 * 0.95 = 0.77 = 77%.
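
The same chain-confidence arithmetic as a tiny helper, assuming independent steps (itself a simplification).
-----
from math import prod

def chain_confidence(step_probabilities):
    return prod(step_probabilities)

print(round(chain_confidence([0.95] * 5), 2))   # 0.77
-----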

Actually:
1) We may have wrong knowledge that Mary goes to the church (we could confuse Mary with someone else, or Mary might stop going to the church).
2) Not all people who go to church believe in God
3) We could make logical mistake assuming that (A & B) result in C.
4) Not all people who believe in God believe in life after death.
5) We could make logical mistake assuming that (C & D) result in E.

Conclusion #1:
Since logic is not reliable, long logical conclusions are typically less probable than even unreliable observations.
For instance, if Mary's husband and Mary's mother mentioned that Mary doesn't believe in life after death, then we'd better rely on their words more than on our 5-step logical conclusion.

Conclusion #2:
Since multi-step logic is unreliable, multi-step logic is not a "must" component of intelligence. Therefore logic implementation can be skipped in the first strong AI prototypes.
Limited AI can function very well without multi-step logic.

Friday, March 04, 2005

Background knowledge --- how much data do we need?

Jiri> And try to understand that when testing AI (by letting it solve
Jiri> particular problem(s)), you do not need the huge amount of data you
Jiri> keep talking about. Let's say the relevant stuff takes 10 KB (and it
Jiri> can take MUCH less in many cases). You can provide 100 KB of data
Jiri> (including the relevant stuff) and you can perform lots of testing.
Jiri> The solution may even be included in the question (like "What's the
Jiri> speed of a car which is moving 50 miles per hour?"). There is
Jiri> absolutely no excuse for a strong AI to miss the right answer in those
Jiri> cases.

Do you mean that 100 KB of data as background knowledge is enough for strong AI?
Are you kidding?

By the age of 1 year, a human baby has parsed at least terabytes of information, and keeps at least many megabytes of information in his/her memory.

Do you think 1 year old human baby has strong AI with all this knowledge?


Yes, artificial intelligence could have an advantage over natural intelligence. AI can be intelligent with a smaller amount of info.
But not with 100 KB anyway.
100 KB is almost nothing for General Intelligence.

From Limited AI to Strong AI

Jiri> OK, you have a bunch of pages which appear to be relevant.
Jiri> What's the next step towards your strong AI?

Next steps would be:
1) Implementation of hardcoded goals
2) Implementation of experiment feature.
3) Natural Text writing.
4) ...

How many types of relations should strong AI support?

Dennis>> why are 4 types of relations better than one type of relation?

Jiri> Because it better supports human-like thinking. Our mind is working
Jiri> with multiple types of relations on the level where reasoning applies.

Our mind works with far more than 4 types of relations.
That's why it's not a good idea to implement 4 types of relations. On one hand it's too complex. On the other hand it's still not enough.
A better approach would be to use one relation type which is able to represent all other types of relations.
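
One possible reading of this idea in code: a "typed" relation is reified into an ordinary intermediate concept, so the memory itself needs only one relation kind. This is my own interpretation, not a prescribed design.
-----
relations = []   # each entry: (cause_concept, effect_concept, weight)

def add_typed_relation(subject, relation_type, obj, weight=1.0):
    # "cat --is_a--> animal" becomes an intermediate concept linked by two
    # plain cause-effect relations, so no second relation kind is needed.
    link = f"{subject}:{relation_type}:{obj}"
    relations.append((subject, link, weight))
    relations.append((link, obj, weight))

add_typed_relation("cat", "is_a", "animal")
add_typed_relation("cat", "eats", "fish")
-----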
