From evgen@camd1.kkpcr.re.kr Fri Oct 26 11:51:31 2001
Newsgroups: comp.ai.philosophy
Subject: Re: Free will
From: evgen@camd1.kkpcr.re.kr (Evgenij Barsukov)
Date: Fri, 26 Oct 2001 02:51:31 GMT
--------
Regarding message from Thu, 25 Oct 2001 23:40:23 +0100 by "r.l"


>The issue of free will is upon me within phil of AI in uni'. Altho' I profess
>to being a newbie within the subject, I believe we are 'conditioned' by our
>environment and therefore do not possess absolute free will.
>
>Would anybody care to refine or dispute this, as I am keen to understand this
>issue. Please bear in mind my novice status!

Hold on, here it comes! :-)

There can be lots of talk about "free will" in general, but let's concentrate on
its application to AI. That means I am going to discuss the application of free
will _in the process of solving a problem_, or, more precisely, _in the process
of optimizing a set of outputs so as to maximally satisfy conditions given by a
set of inputs_.

It is easy to see that, given the area of application defined above, I throw
away the understanding of free will as the ability to do "whatever you like"
regardless of any outside restrictions. Such an ability is easily realized by a
good quantum-mechanical random number generator, but it is not interesting from
the AI point of view. In fact, people and other living creatures do not have
free will of this sort either, because if they do certain things, it hurts.

Understanding free will within the given boundaries corresponds rather to the
ability to solve problems (whose conditions are given "from outside" and are not
subject to the free will) without following a predefined algorithm. This "not
following an algorithm" has one simple consequence which makes "free will" a
good thing, for both AI and human society: any predefined algorithm can be used
efficiently only on a fixed set of problems. For example, if you have a
labyrinth in which all the turns leading to the goal are right turns, an
algorithm saying "always turn right" will be best. But once conditions change
and some of the correct turns become left turns, the number of misses will
increase. Ultimately, a labyrinth in which all the correct turns are left turns
cannot be passed by the fixed "turn right" algorithm at all: it will try the
same wrong thing again and again.
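
As a toy sketch of this failure mode (Python; the maze encoding is a made-up
assumption, not anything standard):

    # A maze is encoded, hypothetically, as the list of turns that
    # actually lead to the goal.
    def fixed_right_policy(maze):
        # The fixed algorithm: always turn right, with no way to adapt.
        for i, correct_turn in enumerate(maze):
            if correct_turn != "R":
                return "stuck forever at junction %d" % i
        return "goal reached"

    print(fixed_right_policy(["R", "R", "R"]))  # goal reached
    print(fixed_right_policy(["L", "L", "L"]))  # stuck forever at junction 0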

Here comes learning. You could make an algorithm that learns from the first
algorithm's mistakes. For example, you can program "if turning right fails, turn
left next time". If this correction algorithm fails, you can devise an algorithm
correcting the correction algorithm, and so on: you can create an infinite
hierarchy of algorithms correcting each other, and still there will be a problem
which they fail to solve (the proof is similar to that of Turing's halting
problem, by diagonalization). That is why learning is different from free will:
learning also follows an algorithm. Free will is supposed to be able to follow
no algorithm at all, and yet solve the problem, i.e. satisfy the external
conditions.

So now it is a little clearer what we expect from free will. The subject of
free will in AI (I'd call it a "free will engine") should act in correspondence
with the external conditions to be satisfied (e.g. the rules of the game), but
without correspondence to any predefined rules of its own behavior. How do we
design such an engine?
Here we can remember the "absolute free will" realized as physical randomness,
which we discarded earlier. It has at least one property needed for the object
of our desire: it has no predefined rules of its own behavior. Now, how do we
combine this good property with dependence on external conditions?

If we add random choices at certain steps of the algorithm, and let the
algorithm learn from the results of the random tries by setting weights for the
next random choices, we might get a good algorithm for a certain class of
problems (in fact, that is how neural networks work), but it would still have
the same restriction as the general hierarchical learning algorithm: there will
be problems for which the structure of the algorithm itself is not appropriate.
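
A minimal sketch of that idea (Python; the weight update rule is my own
illustrative assumption). The random choice is weighted and the weights learn,
but the surrounding structure of the algorithm is fixed and cannot change:

    import random

    weights = {"L": 1.0, "R": 1.0}   # learned preferences, one choice point

    def run_junction(correct):
        # The structure is fixed: one weighted random choice, one update.
        turn = random.choices(list(weights), weights=weights.values())[0]
        weights[turn] *= 1.5 if turn == correct else 0.7
        return turn == correct

    for _ in range(100):
        run_junction("L")            # suppose left is now the correct turn
    print(weights)                   # biased toward "L"; only the weights
                                     # changed, never the structure itself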

Obviously, what we should do is add random choices to the process of making the
algorithm itself! The extreme realization of this idea I call a "crazy AI". It
is made as follows. Any actor has a fixed set of actions which it can execute;
this is true of all existing machines and living creatures. A computer, for
example, can do only a few things - adding, subtracting, shifting registers and
so on, a few hundred instructions. Any algorithm is a predefined sequence of
such instructions. Now, our crazy AI is allowed to execute any of the available
actions in random sequence, until the external condition is satisfied. Then the
answer is returned as the sequence of actions which led to the satisfaction of
the conditions. Obviously, given infinite time, any problem which _can_ be
solved with the given set of actions will be solved by the crazy AI.
Is this the free will we wanted? Actually yes, it is a limiting case of it. It
satisfies all our conditions, but only in the limit of infinite time.
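
A minimal sketch of a crazy AI (Python; the primitive action set and the goal
condition are illustrative assumptions, and a step cap stands in for "infinite
time"):

    import random

    # A tiny set of primitive actions on a single integer register.
    ACTIONS = {"add1": lambda x: x + 1, "sub1": lambda x: x - 1}

    def crazy_ai(condition, start=0, max_steps=1000000):
        # Execute random actions until the external condition is
        # satisfied; return the action sequence that led there.
        x, trace = start, []
        for _ in range(max_steps):       # stands in for infinite time
            if condition(x):
                return trace
            name = random.choice(list(ACTIONS))
            x = ACTIONS[name](x)
            trace.append(name)
        return None                      # ran out of (finite) time

    # The external condition, the "rules of the game": reach exactly 10.
    print(crazy_ai(lambda x: x == 10))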

Now, the ultimate goal is to create an engine which has the properties of the
crazy AI (free will) but is also able to learn, i.e. on every next try the same
problem is solved in a shorter time. If you look at this requirement closely,
you realize that these two abilities are mutually exclusive. Any "learning"
restricts the freedom of the "free will" by applying some kind of preference to
certain sequences of actions. The stronger the preference probabilistically
(like "make an addition with 99% probability"), the more algorithmic the
behavior will look. This makes the process faster, yet less general. What
happens if the problem has changed, and for the correct solution you need to
make a subtraction first? The engine will fail 99 times out of 100 until it
finds the right solution. But (!) as long as all steps of the algorithm are
probabilistic (never 100%), it will still find the solution, at worst after
infinite time! So what we learn is:

1) learning for any particular problem makes the solution of a different
problem slower;
2) a crazy AI which learns by increasing the probabilities of the actions that
have been successful satisfies the condition of having free will in finite
time! (A sketch follows below.)
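
A minimal sketch of point 2) (Python; the floor value and the reinforcement
factor are my own illustrative assumptions). The probabilities of successful
actions grow, but a floor keeps every probability strictly positive, so any
action sequence remains possible:

    import random

    FLOOR = 0.01     # no action's probability may ever fall to zero

    def normalize(probs):
        probs = {a: max(p, FLOOR) for a, p in probs.items()}
        total = sum(probs.values())
        return {a: p / total for a, p in probs.items()}

    def try_once(probs, condition, start=0, max_steps=100000):
        x, trace = start, []
        for _ in range(max_steps):
            if condition(x):
                return trace
            a = random.choices(list(probs), weights=probs.values())[0]
            x = x + 1 if a == "add" else x - 1
            trace.append(a)
        return None

    probs = normalize({"add": 1.0, "sub": 1.0})
    for _ in range(20):                  # repeated tries, same problem
        trace = try_once(probs, lambda x: x == 5)
        if trace:
            for a in trace:              # reinforce what worked
                probs[a] *= 1.05
            probs = normalize(probs)
    print(probs)   # biased toward "add", yet "sub" stays possible forever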

But stop - who decides how to change the probabilities of the successful
actions? Do we have to write an algorithm for assigning the probabilities? No,
the workaround is the same hierarchization we tried with traditional
algorithms. We let another learning crazy AI do the job of weight setting. And
for this one, the weights will be set by yet another... and the last element in
this chain will be ... ?

There we have a difficult choice. The last one could be either a non-learning
random "crazy AI" or a fixed algorithm. It is easy to see that in the first
case the whole system becomes unstable and will not learn anything - the whole
weighting would be totally changed at every step. In the second case, however,
we fix the "ability to change" of the whole system. But - what a nice surprise!
- this affects only the _speed_ at which the system adapts to a new type of
problem, not its ability to solve any problem after (in the worst case)
infinite time. That is because between our fixed algorithm and the problem
there is a layer of at least one random algorithm - which guarantees that the
system cannot get stuck in an infinite loop, and so the problem will always be
solved.
Interestingly, a similar "weighting" method is used by ants. They remember good
path choices by the smell every ant leaves on its path. The more ants went
there, the more smell remains, and the higher the probability that another ant
goes there too. The "fixed weighting algorithm" in this case is the speed of
evaporation of the smell. It is defined by outside nature, is neither adapted
nor controlled by the ants, and yet they always solve their problems.
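
In code, the ants' scheme might look like this (Python; the path names, the
deposit amounts and the evaporation rate are illustrative assumptions):

    import random

    EVAPORATION = 0.9    # the fixed "weighting algorithm", set by nature
    pheromone = {"short_path": 1.0, "long_path": 1.0}

    def ant_trip():
        path = random.choices(list(pheromone),
                              weights=pheromone.values())[0]
        # A shorter path means the ant is back sooner and re-marks it
        # more often - modeled here as a larger deposit per trip.
        pheromone[path] += 2.0 if path == "short_path" else 1.0
        for p in pheromone:              # nature evaporates all smells
            pheromone[p] *= EVAPORATION

    for _ in range(200):
        ant_trip()
    print(pheromone)     # the short path accumulates more smell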

The positive difference from a hierarchy of traditional learning algorithms is
that any (!) number of hierarchy elements always finds the solution to any
problem (in the worst case, in infinite time), whereas for traditional
algorithms even an infinite number of hierarchy members would _always_ have
unsolvable problems. That means that growing the "crazy hierarchy" improves the
efficiency of learning without impairing generality. All members of this
hierarchy have the property of free will and represent a class of "free will
engines" with different levels of learning efficiency.
The hierarchy of learning crazy AIs with an infinite number of elements would
be the most general "free will engine" possible - a crazy equivalent of
Turing's universal machine, free, however, from the halting problem, but also
from any responsibility whatsoever for the question "when is the job done".
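
To make the construction concrete, here is a two-level sketch (Python; every
name and parameter is my own illustrative assumption). Level 1 is a
weighted-random crazy AI; level 2 makes random changes to level 1's weights
and keeps the ones that help; the top of the chain is a fixed decay rule,
playing the role of the ants' evaporation rate:

    import random

    DECAY, FLOOR = 0.99, 0.01    # the fixed, topmost "algorithm"

    def solve(weights, condition, max_steps=100000):
        # Level 1: weighted random action search; returns steps used.
        x = 0
        for step in range(max_steps):
            if condition(x):
                return step
            a = random.choices(["add", "sub"], weights=weights)[0]
            x += 1 if a == "add" else -1
        return max_steps

    def goal(x):
        return x == 5

    weights = [1.0, 1.0]
    best = solve(weights, goal)
    for _ in range(50):
        # Level 2: a *random* change to the weights, not a planned one.
        trial = list(weights)
        trial[random.randrange(2)] *= random.uniform(0.5, 2.0)
        steps = solve(trial, goal)
        if steps <= best:            # keep changes that did not slow us
            best, weights = steps, trial
        # The fixed top of the chain: decay plus a floor, so no weight
        # ever reaches zero and every problem remains solvable.
        weights = [max(w * DECAY, FLOOR) for w in weights]

    print(weights, best)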

Regards,
Evgenij

