05 March, 2011

I, Algorithm


(or Artificial Intelligence with Probabilities)

This article in New Scientist (2797) by Anil Ananthaswamy describes how the old (and now dead) Artificial Intelligence based on Formal Logic and Neural Networks has been revamped by the inclusion of Noise and Probabilities. It is, I’m afraid, not a new and great step forward, but an old “solution” to the unanswerable problem, “How do you improve upon a purely formal and pluralistic, and hence totally unchanging, artificial system, which is intended to deliver some sort of machine-based intelligence?”

So, instead of strict determinism only, you merely need to add a bit of random chance, and then deal in the probabilities of various alternative outcomes. To put this new system into the language of the participants, these new systems of Artificial Intelligence “add uncertainty to Formal Logic – in order to reason in a noisy and chaotic World”. It is a proposed “new” application of the same standpoint as was used in the Copenhagen Interpretation of Quantum Theory almost a century before. But the real world is NOT basically deterministic PLUS “noise”! It is holistic! And to attempt to analyse it pluralistically is doomed to failure. So the trick, as usual, is to continue with that old methodology, but to heighten the “flavour” with the added “spice” of Random Noise and the coherence of Statistical Methods – using averages and probabilities on top of a still wholly deterministic basis.

Now, echoing the revolution that occurred in Sub-Atomic Physics may appear to be an important development, and it may well allow better predictions in this sphere, as it did in Physics, but in BOTH areas it certainly does NOT deliver the Truth! In this particular instance it seems to apply very well in the area of infectious diseases, but we have to be clear why it works there, and also, most importantly, why it isn’t the general solution that it is claimed to be.

It works when many factors are acting simultaneously, and with roughly equal weights. In such circumstances many alternative diagnoses are available, and hence various distinct results are possible. The important question is, “What is the correct diagnosis in a given particular case?”

Now Neural Networks had delivered a system that could be modified to more closely match the real weightings of various alternative situations, but they were crude to say the least: absolutely NO indication was given of why and how these changes were effected. It was merely data without a cause!
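To make that complaint concrete, here is a minimal sketch (ordinary Python, with an invented two-input task and made-up numbers, not anything taken from the article) of the kind of weight adjustment such a network performs. The rule nudges the weights until the outputs fit the data, but at no point does it produce any account of why those particular weights should be the right ones:

# Minimal perceptron-style weight update (illustrative only).
# The two-input task and all numbers are invented for this sketch.

inputs  = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
targets = [0, 0, 0, 1]           # a simple AND-like task
weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

for epoch in range(20):
    for (x1, x2), t in zip(inputs, targets):
        output = 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
        error = t - output
        # The weights are simply nudged in proportion to the error:
        weights[0] += rate * error * x1
        weights[1] += rate * error * x2
        bias += rate * error

print(weights, bias)   # figures that fit the data, with no account of WHY they do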
Now this new version of AI returns to such ideas, but adds the 1764 ideas of Bayes, embodied in the Theorem which carries his name, which states that:

the conditional probability of P given Q
is equal to
the conditional probability of Q given P, multiplied by the prior probability of P and divided by the prior probability of Q, that is P(P|Q) = P(Q|P) × P(P) / P(Q) - [Bayes Theorem]

And this provided, for the first time, a basis for reasoning about Causes and Effects, not only in the usual direction but backwards too (that is, diagnostically).
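To make that backwards, diagnostic direction concrete, here is a minimal worked example (ordinary Python; the disease and test figures are invented for illustration, not taken from the article). Knowing how often a test comes up positive given the disease, Bayes’ Theorem lets you run the inference the other way and ask how probable the disease is given a positive test:

# Bayes' Theorem run "backwards": from P(positive | disease) to P(disease | positive).
# All figures are invented for illustration.

p_disease = 0.01                # prior: 1% of the population has the condition
p_pos_given_disease = 0.95      # how often the test fires when the disease is present
p_pos_given_healthy = 0.05      # false-positive rate

# Total probability of a positive test (the denominator in Bayes' Theorem)
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# The diagnostic direction: the cause inferred from the effect
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(round(p_disease_given_pos, 3))   # about 0.16 - far lower than the 0.95 one might naively expect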

The constructed systems were so-called Bayesian Networks, where the variables were initially purely random, that is of equal weight, BUT thereafter dependent on every other involved variable. Tweak the value of one and you alter the probability distribution of all of the others. Now, this, on the face of it, appears to be very close to Holism, but it has a clearly fictitious starting point, where all are equally probable. The “saving grace” was then that if you knew some of the variables you could infer the probabilities of the other contributions. Now, when you think about it, it doesn’t seem likely. Starting from a wholly fictitious position, why should the inclusion of some reliable data move ALL of the probabilities in the right direction? Clearly such systems and associated methods would have to be very close to Iterative Numerical Methods, and hence dependent on a convergent starting point for a useful outcome. And, as with such numerical methods, these too needed to be refined and improved until they began to become much more reliable than prior methods.
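As a purely illustrative sketch (ordinary Python; the three-variable network and all of its numbers are invented here, not taken from the article), this is the kind of thing such a Bayesian Network does: every variable is tied to the others through the joint distribution, so observing the value of one of them shifts the inferred probabilities of the rest, and knowing some variables lets you infer the others by summing over the unknowns:

# A toy three-variable Bayesian network: Rain -> WetGrass <- Sprinkler.
# All conditional probabilities are invented for the sketch.
from itertools import product

def p_rain(r):      return 0.2 if r else 0.8
def p_sprinkler(s): return 0.3 if s else 0.7
def p_wet(w, r, s):
    p = {(True, True): 0.99, (True, False): 0.8,
         (False, True): 0.9, (False, False): 0.05}[(r, s)]
    return p if w else 1 - p

def joint(r, s, w):
    return p_rain(r) * p_sprinkler(s) * p_wet(w, r, s)

def prob(query, evidence):
    # P(query | evidence) by brute-force enumeration over the joint distribution.
    num = den = 0.0
    for r, s, w in product([True, False], repeat=3):
        world = {"rain": r, "sprinkler": s, "wet": w}
        if all(world[k] == v for k, v in evidence.items()):
            p = joint(r, s, w)
            den += p
            if all(world[k] == v for k, v in query.items()):
                num += p
    return num / den

# With no evidence, rain is unlikely...
print(round(prob({"rain": True}, {}), 3))               # 0.2
# ...but observe ("tweak") one variable and the others all shift:
print(round(prob({"rain": True}, {"wet": True}), 3))    # about 0.41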

Even so, it is clear that such methods are full of dangers. How do you know whether you are considering all the necessary factors? Gradually researchers began to produce models in certain areas which were much more reliable. The key was to build them so that new data could be regularly included, which modified the included probability distributions.

But, as it did not deal with answering the question, “Why?”, but only the question, “How?”, it was still dependent on the old methodology, even if it was overlaid with Bayesian add-ons.

Indeed, to facilitate such programs, new languages began to be developed, specially designed to help construct such self-modifying models.
To give some idea of their powers AND limitations, it is worth listing the principles on which they were based (a small illustrative sketch of all three in action follows the list).

1. Equal likeliness of all contributing factors must be the starting point.
2. Algorithms must be very general.
3. New data must be straightforwardly included to update the probabilities.
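Here is that sketch: ordinary Python rather than any of the dedicated languages referred to above, with an invented set of candidate diagnoses and invented symptom likelihoods, and with the usual simplifying assumption that the symptoms are independent given the diagnosis. Each numbered principle is marked where it appears:

# Illustrative only: the three principles above in miniature.
# The candidate diagnoses and their symptom likelihoods are invented,
# and symptoms are assumed independent given the diagnosis.

likelihood = {
    "flu":     {"fever": 0.9, "cough": 0.8, "rash": 0.1},
    "measles": {"fever": 0.8, "cough": 0.3, "rash": 0.9},
    "allergy": {"fever": 0.1, "cough": 0.6, "rash": 0.4},
}

# Principle 1: start with all contributing factors equally likely.
posterior = {d: 1.0 / len(likelihood) for d in likelihood}

# Principle 2: one very general update rule (Bayes' Theorem), whatever the data.
def update(posterior, symptom):
    weighted = {d: posterior[d] * likelihood[d][symptom] for d in posterior}
    total = sum(weighted.values())
    return {d: w / total for d, w in weighted.items()}

# Principle 3: new data is folded in as it arrives, re-weighting the probabilities.
for observed in ["fever", "rash"]:
    posterior = update(posterior, observed)
    print(observed, {d: round(p, 2) for d, p in posterior.items()})

Notice that nothing in such a run explains WHY one diagnosis comes to dominate: the figures are simply re-weighted, which is exactly the limitation described below.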

Now, this is clearly the ONLY way that the usual pluralistic conceptions and analyses can be used in a holistic World. The basis is still Formal Logic, but real measured data can modify an initial model in which everything affects everything else, but as to how they do it, there are NO revelations. The ever-new data merely adjusts less and less arbitrary figures, and, by this alone, the model improves. The model learns nothing concrete about relations, but improves as a predictor, based on regularly updated data.

Nevertheless, there could be no guarantees. It is a pragmatic method of improvement and NOT a scientific one.

Also, experience has shown that the gathering of new data can be altogether too narrow, and the seriousness with which it is collected much too slight, for the methods always to be depended upon. Behind the robot diagnostic program, a very experienced “doctor” would certainly come in handy!
There is also the problem of “current ideas” guiding the actions of the data collectors, and hence “tending” to confirm those current ideas. You cannot discover a new cause, if you are not measuring for it, can you? The method is NOT a genuine holistic one!

And the most important omission has to be that Time and Trajectory are not part of the schemas. Miller’s famous Experiment was indeed holistic, and produced amino acids from a modelled holistic system, but it too lacked Time and Trajectory information. This author’s (Jim Schofield’s) redesign of Miller’s Experiment has the same core set-up as the original, but surrounded by a time-triggered set of diagnostic sub-experiments, regularly sampling what was present at crucial positions throughout the set-up and throughout the whole time that it was running. The results would then have to be laid out on a series of related timelines, showing WHAT was present and WHEN. The relationships over time and place would then be available, and sequences and even cycles of processes could be revealed and interpreted.

The half-cocked nature of this latest version of AI, based on Neural Networks but involving Bayesian principles, though it will produce ever better simulation-type computer programs, is still immovably grounded on pluralist principles, and so will be limited in its applications, and, most important of all, will REDUCE the amount of real analysis and explanation to the Lowest Common Denominator of “the computer says that…..”
