A Practical Project in Self-Modifying AI
For the practical aspect of the SIAI Research Program, we intend to
take the MOSES probabilistic evolutionary learning system, developed
by Dr. Moshe Looks in his 2006 PhD work at Washington University and
available in the public domain, and deploy it self-referentially, in a
manner that allows MOSES to improve its own learning methodology.
MOSES is currently implemented in C++, and is configured to learn
software programs that are expressed in a simple language called
Combo. Deploying MOSES self-referentially will require the
re-implementation of MOSES in Combo, and then the improvement of
several aspects of MOSES's internal learning algorithms.
Hitherto MOSES has proved useful for data mining, biological data
analysis, and the control of simple embodied agents in virtual worlds.
In a current project, Novamente LLC and Electric Sheep Company are
using it to control a simple virtual agent acting in Second Life.
Learning to improve MOSES will be the most difficult task yet posed to
MOSES, but also the most interesting.
Applying MOSES self-referentially will give us a fascinating concrete
example of self-modifying AI software – far short of human-level
general intelligence initially, but nevertheless with many lessons to
teach us about the more ambitious self-modifying AIs that may be
possible.
...
In recent years, theoretical computer scientists such as Marcus
Hutter and Juergen Schmidhuber have developed a rigorous mathematical
theory of artificial general intelligence (AGI). While this work is
revolutionary, it has its limitations. Most of its conclusions apply
only to AI systems that use a truly massive amount of computational
resources – more than we could ever assemble in physical reality.
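To give a flavor of where that computational blow-up comes from, here is a
sketch (from memory, so the notation may differ slightly from Hutter's book)
of the action-selection rule of Hutter's AIXI agent, the centerpiece of this
theory. The agent chooses its next action a_k by an expectimax over all
possible future observation/reward sequences, weighting each by a
Solomonoff-style sum over every program q that reproduces the history on a
universal Turing machine U:

```latex
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
\;\bigl[\, r_k + \cdots + r_m \,\bigr]
\sum_{q \,:\; U(q,\, a_{1:m}) \,=\, o_{1:m} r_{1:m}} 2^{-\ell(q)}
```

The inner sum ranges over all programs q (weighted by length \ell(q)), which
is exactly why the theory's conclusions apply only to agents with physically
unrealizable resources, and why a scaled-down, resource-bounded version of
the theory is needed.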
What needs to be done, in order to create a mathematical theory that
is useful for studying the self-modifying AI systems we will build in
the future, is to scale Hutter and Schmidhuber's theory down to deal
with AI systems involving more plausible amounts of computational
resources. This is far from an easy task, but it is a concrete
mathematical task, and we have specific conjectures regarding how to
approach it. The self-referential MOSES implementation, mentioned
above, may serve as an important test case here: if a scaled-down
mathematical theory of AGI is any good, it should be able to tell us
something about self-referential MOSES.
---------------------------------------------------------------------------------
There are also various videos on the SIAI site that explain this with
moving pictures. ;-)
http://www.singinst.org/media/
Ben Goertzel's talk to Google (search it out) also explains MOSES and
the Novamente AI system in a little more detail. To dumb it way down,
the idea is to combine evolutionary search with more conventional
statistical inference and get them to "trim each other's combinatorial
explosions", narrowing down the search space to something more
manageable and computable.
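To make that idea a bit more concrete, here is a toy sketch (illustrative
only; the real MOSES is a far more sophisticated C++ system that evolves
program trees with Bayesian model building) of the general pattern of
coupling evolutionary sampling to a learned statistical model. This is a
minimal UMDA-style estimation-of-distribution algorithm on a bitstring: the
statistical model (per-bit marginal frequencies of the fittest individuals)
narrows where the evolutionary search samples next, instead of mutating
blindly.

```python
import random

def umda_onemax(n_bits=20, pop_size=100, n_select=30, generations=40, seed=0):
    """Toy estimation-of-distribution loop: evolutionary sampling guided
    by a learned statistical model (univariate bit marginals)."""
    rng = random.Random(seed)
    # Start with an uninformative model: each bit is 1 with probability 0.5.
    probs = [0.5] * n_bits
    best = None
    for _ in range(generations):
        # Sample a population from the current probabilistic model.
        pop = [[1 if rng.random() < p else 0 for p in probs]
               for _ in range(pop_size)]
        # Fitness here is simply the number of 1-bits (the "onemax" toy task).
        pop.sort(key=sum, reverse=True)
        if best is None or sum(pop[0]) > sum(best):
            best = pop[0]
        # Refit the model to the elite: per-bit marginal frequencies.
        elite = pop[:n_select]
        probs = [sum(ind[i] for ind in elite) / n_select
                 for i in range(n_bits)]
        # Clamp the marginals so no bit gets frozen permanently.
        probs = [min(max(p, 0.05), 0.95) for p in probs]
    return best

best = umda_onemax()
print(sum(best))  # should be close to the optimum, n_bits = 20
```

The statistical step (fitting the marginals) prunes the exponential search
space that pure mutation-and-selection would wander through, while the
sampling step keeps enough diversity for the statistics to stay informative:
each trims the other's combinatorial explosion, which is the intuition
behind MOSES's combination of evolutionary search and probabilistic
inference.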
...
---------------------------------------------------------------------------------
--> so, to study (TODO):
- the MOSES model
- self-improving models