Monday, August 3, 2015

Why this approach?

In this post I want to explain the rationale behind the approach I have chosen.


Common sense

I believe that AGI should have so-called common sense. It's great that I can type the phrase "I threw a rock through a window" into an automatic translator and get an accurate translation. However, I cannot ask a follow-up question: "What happened to the glass?" The translator has no internal representation of what a window is, that it contains glass, or that glass is fragile, so it cannot process this situation the way any human with imagination can.
Another question: "My neighbour claims he keeps a whale in his house, should I believe him?" Answering such a question requires quite a lot of common-sense knowledge about sizes, about the needs of ocean mammals and about what a house may contain. And the negative answer is not 100% certain: I can imagine that the neighbour is a multi-millionaire who really does have a big enough pool in the house and has always dreamed of owning the biggest pet in the world.
As far as I know, even semantic networks cannot cope with such questions, but if you know of such an implementation, please post a link to a page where we can test that AI engine.
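
To make it concrete what kind of knowledge such a question draws on, here is a toy sketch in Python; every fact, number and name in it is my own illustrative assumption, not a real knowledge base:

# Toy illustration of the common-sense facts behind the whale question.
# All numbers and rules are illustrative assumptions, not a real knowledge base.
FACTS = {
    "blue_whale": {"length_m": 25, "needs": ["salt water", "a huge pool"]},
    "typical_room": {"length_m": 5},
}

def whale_in_house_plausible(owner_is_multimillionaire=False):
    whale = FACTS["blue_whale"]
    room = FACTS["typical_room"]
    if whale["length_m"] > room["length_m"]:
        # The default answer is "probably not", but it is not 100% certain:
        # an exceptional house (a millionaire's private pool) can defeat the default.
        return "plausible, if the house is exceptional" if owner_is_multimillionaire else "very unlikely"
    return "plausible"

print(whale_in_house_plausible())       # very unlikely
print(whale_in_house_plausible(True))   # plausible, if the house is exceptional

Even this crude sketch has to encode sizes, typical rooms and exceptions explicitly, which is exactly the kind of internal representation the translator above lacks.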

Virtual worlds

I think that it is much easier for AI to gain common sense in virtual worlds, for example those of computer games. They are much simpler. Such a complex notion as human health is expressed there by a single integer; death is that number dropping to 0. Objects of one kind are identical. Plato, with his search for ideal objects rather than mere shadows in the cave, would be delighted.
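
A minimal sketch of that simplification, assuming a roguelike-style game (all names and values below are made up for illustration):

from dataclasses import dataclass

@dataclass
class Player:
    health: int = 100         # the whole notion of "human health" is one integer
    dungeon_level: int = 1

    @property
    def dead(self):
        return self.health <= 0   # death is simply this number reaching 0

@dataclass
class Potion:
    heal: int = 20             # every potion of this kind is identical - Plato's ideal object

def drink(player, potion):
    player.health = player.health + potion.heal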
If AGI is ever to learn in the real world, doing so in virtual ones should be much, much easier. So I propose a Balrog test: an AI that has not been designed for any particular computer game (it is given only a specification of the aim: descend dungeon levels and avoid quitting the game) should be able to go through Moria and kill the Balrog. Moreover, it should build up internal rules of how the virtual world works. By contrast, with machine learning you may breed an AI capable of playing Atari games or even Go, but you cannot ask it in any way, for example, how the other side should play. Such AIs are like simple animals which follow their instincts and perform well in their environment. They won't become philosophers. A spider spins a complicated web, but it doesn't understand what it does or how the web works. It won't analyse the movement of insects or work out how to catch them.

AGI that understands


There are even simpler tasks on the development path of such an AGI that understands: 
  1. reasoning with full knowledge of the world instead of being given only input from senses
  2. board or card games, which are even simpler as they require no computer to process the environment's behaviour.
  3. intelligence tests of various kinds 
In addition, I expect this AGI to learn fast, from a single example. If you want to build a Skynet :) it needs to learn from a single battle, not thousands of them. Neural networks have a vast area of application, but they require so much repetition. Take this very interesting Chinese verbal IQ test solver, which learned from a large corpus of words. It learned, like a human, through many repetitions, but would this approach be useful for deciphering an ancient language from just a few recovered texts? Can a NN catch a rule from a single numeric IQ test, or does it require the same rule to be repeated in 10 cases? Maybe it can, but I haven't found any paper or example of such an implementation.
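
To show the kind of one-example rule catching I have in mind, here is a naive sketch: instead of fitting thousands of examples, it searches a tiny space of candidate rules and keeps whichever one explains the single sequence it is given. The rule space is my own toy assumption, not a real IQ-test solver.

# Naive sketch: catch a rule from a single numeric IQ-style sequence by
# searching a tiny hypothesis space. The rule space is a toy assumption.
def induce_rule(seq):
    candidates = [
        ("add constant",      seq[1] - seq[0],                    lambda x, d: x + d),
        ("multiply constant", seq[1] / seq[0] if seq[0] else None, lambda x, d: x * d),
    ]
    for name, d, op in candidates:
        if d is None:
            continue
        if all(abs(op(a, d) - b) < 1e-9 for a, b in zip(seq, seq[1:])):
            return name, d, op(seq[-1], d)    # rule, its parameter, predicted next item
    return None

print(induce_rule([2, 4, 8, 16]))    # ('multiply constant', 2.0, 32.0)
print(induce_rule([3, 7, 11, 15]))   # ('add constant', 4, 19)

This is obviously trivial, but it makes the point: a rule-based learner commits to an explicit rule after a single example, while a statistical learner needs the rule repeated many times before it emerges.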
