05 March, 2011

The Life Factory


(Natural Selection before Life?)

Perhaps surprisingly, scientists have finally returned to Miller’s famous experiment on the Origin of Life on Earth, but with the purpose of going beyond the limited achievements of that effort so many years ago (1952). In an article in New Scientist (2797), Katherine Sanderson presented the ideas of Lee Cronin of the University of Glasgow, which put a new slant on that experiment. Along with the rest of the NASA-led sheep, he is persuaded that Life did NOT originate in such circumstances as were the basis for Miller’s Experiment, but in much more surprising places, such as the “black smokers” at the bottom of the oceans, or at one of the many other unlikely places (which could even be found elsewhere in the Solar System, and even more distantly in the Universe, and hence justify the funding that NASA needs “to investigate”).

Now Cronin’s other new point is that there must have been a whole series of developments in the chemistry involved (in our case organic chemistry, though not necessarily so in other parts of the Universe) prior to Life. And in this he is certainly correct!

Of course, the actual mechanism for selection and development, or even “evolution”, in these non-living things could not be Darwin’s Natural Selection, for the processes involved in that are predicated upon Life already being in existence, and upon competition between living organisms.
So some very different form of selection and consequent development must have occurred, based upon an entirely different mechanism, to take the “organic broth” to a position in which all the necessary processes that would later be incorporated into Life itself were made available.
NOTE: BUT both he, and almost all others investigating this field, assume that Life was the direct result of the presence of such processes, which almost automatically shifted over into this New Form. But this is NOT the only conception of what actually happened. Indeed, the main alternative has Life emerging out of a precipitated catastrophe: the dissolution of a prior stability.

So, taking his conception of pre-Life selection AND his assumption of a direct transition into Life, he believes that he has a way of investigating such pre-Life developments. AND, significantly, that they could happen anywhere, and NOT just on Earth. [It begins to sound even more conducive to NASA’s conceptions, does it not?]

Cronin et al do indeed recognise an unavoidable pre-Life development period, in which, long before we could call it Life, there were processes “competing” for the same resources, and thus producing a strong selective effect on a sufficiently diverse initial mix of processes to lead to the dominance of certain sequences of systems of processes. Indeed, though his method is to establish such processes as generally available by experiment with his Polyoxometalates, the idea has already been developed theoretically for Organic Chemistry by this author (J. Schofield) in his paper Truly Natural Selection (2009), published the following year in SHAPE Journal on the Internet. But Cronin’s experiment expects what he calls autonomous developments to occur right there in his apparatus, and considers that the only extras required to take things to significant levels will be external adjustments to various available parameters, and this is, I’m afraid, doomed to failure.
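The selective effect described above can be made concrete with a deliberately toy sketch (nothing in it is Cronin’s chemistry, nor the detailed argument of Truly Natural Selection; the “processes”, their efficiencies and the shared resource pool are all invented purely for illustration): several self-maintaining processes draw on one limited resource, and even a small advantage in efficiency is enough for one of them to come to dominate the mix.

# Toy illustration only: abstract "processes" compete for a single shared
# resource pool. The efficiencies are invented numbers, not real chemistry.
processes = {          # name -> fraction of captured resource converted
    "cycle_A": 0.50,   # into more of that same process
    "cycle_B": 0.55,
    "cycle_C": 0.60,
}
population = {name: 100.0 for name in processes}

for _ in range(200):
    resource = 1000.0                                  # fresh resource each step
    total = sum(population.values())
    for name, efficiency in processes.items():
        share = resource * population[name] / total    # captured in proportion
        growth = efficiency * share                    # converted into "more process"
        decay = 0.5 * population[name]                 # constant dissociation
        population[name] = max(population[name] + growth - decay, 0.0)

print({name: round(p, 1) for name, p in population.items()})
# The slightly more efficient cycle_C ends up dominating the mix, even though
# all three started from identical populations: selection without any Life.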

Cronin’s expectation is doomed because he assumes a continuous, incremental series of steps travelling uninterruptedly through to Life itself, and that is never how such things actually develop. Such revolutionary New Levels never appear surreptitiously and automatically, but ONLY via what are generally termed Revolutions, or, more technically, Emergences.

Now such Events do indeed happen throughout the history of Reality, and they are always the absolute opposite of continuous and incremental changes into the New. On the contrary, they are invariably initiated by a wholesale collapse of the till-then established Stability, as Second Law of Thermodynamics-type dissociative processes grow at an increasing rate, until they pass a crucial threshold and precipitate a cataclysmic avalanche of dissociations. This catastrophe seems to be sending things careering back towards an inevitable oblivion.
But it doesn’t do that!

Research into such Events has shown that ONLY via such an almost total dismantling of the prior stability can the available processes begin rapidly to form new systems, unhindered by the strong conservative forces which had maintained the prior Level’s continuing stability. Only when those conservative processes are finally gone can unhindered competition begin to form systems, which can ultimately be resolved into a single dominant system, finally established as the new Level. Life was no automatic transformation, but a successful Revolution, made possible by a prior, and almost total, collapse of the preceding stability. Only when the old Level is dead can constructive (opposite to the Second Law) developments actually succeed.

Without any idea of the trajectories within an Emergence, NO experiment could ever be conceived of (never mind constructed) to facilitate these necessary Events. Cronin will produce only a confirmation that selection is possible, but the whole dynamic essential for a revolutionary overturn will NOT be present, and as with Miller’s magnificent attempt, it will not lead to real gains on the Origin of Life ON EARTH!

NOTE: This author’s (J. Schofield) design for a new Miller’s Experiment is already available via the SHAPE Journal’s Blog on the Internet.

I, Algorithm


(or Artificial Intelligence with Probabilities)

This article in New Scientist (2797) by Anil Ananthaswamy describes how the old (and now dead) Artificial Intelligence based on Formal Logic and Neural Networks has been revamped by the inclusion of Noise and Probabilities. It is, I’m afraid, not a new and great step forward, but an old “solution” to the unanswerable problem, “How do you improve upon a purely formal and pluralistic, and hence totally unchanging, artificial system, which is intended to deliver some sort of machine-based intelligence?”

So, instead of strict determinism only, you merely need to add a bit of random chance, and then deal in the probabilities of various alternative outcomes. To put this new system into the language of the participants, these new systems of Artificial Intelligence “add uncertainty to Formal Logic – in order to reason in a noisy and chaotic World”. It is a proposed “new” application of the same standpoint as was used in the Copenhagen Interpretation of Quantum Theory almost a century before. But the real world is NOT basically deterministic PLUS “noise”! It is holistic! And to attempt to analyse it pluralistically is doomed to failure. So the trick, as usual, is to continue with that old methodology, but to heighten the “flavour” with the added “spice” of Random Noise and the coherence of Statistical Methods – using averages and probabilities on top of a still wholly deterministic basis.
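What “adding noise to a deterministic basis” amounts to in practice can be shown with a minimal sketch (the rule, its threshold and the noise level are invented purely for illustration): a fixed formal rule is simply run many times with a random perturbation added, and its outputs are then reported as probabilities rather than as a single certain answer.

import random

random.seed(0)

def formal_rule(x):
    """A purely deterministic formal 'rule', invented just for illustration."""
    return 1 if x > 0.5 else 0

def noisy_rule(x, noise=0.2):
    """The same rule, with random 'noise' added to its input."""
    return formal_rule(x + random.gauss(0.0, noise))

# Run the noisy version many times and report probabilities of each outcome,
# instead of the single answer the deterministic rule would give.
x = 0.45
trials = 10_000
ones = sum(noisy_rule(x) for _ in range(trials))
print(f"deterministic answer: {formal_rule(x)}")
print(f"with noise: P(1) ≈ {ones / trials:.2f}, P(0) ≈ {1 - ones / trials:.2f}")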

Now, echoing the revolution that occurred in Sub-Atomic Physics may appear to be an important development, and it may, as in Physics, allow better predictions in this sphere, but in BOTH areas it certainly does NOT deliver the Truth! In this particular instance it seems to apply very well in the area of infectious diseases, but we have to be clear why it works there, and also, most importantly, why it isn’t the general solution that it is claimed to be.

It works when many factors are acting simultaneously, and with roughly equal weights. In such circumstances many alternative diagnoses are available, and hence various distinct results are possible. The important question is, “What is the correct diagnosis in a given particular case?”

Now, Neural Networks had delivered a system that could be modified to more closely match the real weightings of various alternative situations, but they were crude to say the least: absolutely NO indication of why and how these changes were effected was ever revealed. It was merely data without a cause!
Now this new version of AI returns to such ideas, but adds the 1764 ideas of Bayes, embodied in the Theorem which carries his name:

the probability of P given Q can be calculated from the probability of Q given P:

Prob(P | Q) = Prob(Q | P) × Prob(P) / Prob(Q) - [Bayes’ Theorem]

And this provided, for the first time, a basis for reasoning about Causes and Effects not only in the usual direction but backwards too (that is, diagnostically).
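A minimal worked example of that backwards, diagnostic use (all the numbers are invented for illustration, and are not taken from the article): given how often a cause produces a symptom, the Theorem is run in reverse to say how probable the cause is once the symptom has actually been observed.

# Worked example of the "backwards" (diagnostic) use of Bayes' Theorem.
# Every number below is invented purely for illustration.
p_disease = 0.01                  # prior: P(cause)
p_symptom_given_disease = 0.90    # "forwards": P(effect | cause)
p_symptom_given_healthy = 0.05    # how often the symptom appears anyway

# Total probability of seeing the symptom at all
p_symptom = (p_symptom_given_disease * p_disease
             + p_symptom_given_healthy * (1 - p_disease))

# Bayes' Theorem run "backwards", from effect to cause:
p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom
print(f"P(disease | symptom) = {p_disease_given_symptom:.3f}")   # about 0.154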

The constructed systems were so-called Bayesian Networks, in which the variables were initially purely random, that is, of equal weight, BUT thereafter dependent on every other involved variable. Tweak the value of one and you alter the probability distribution of all of the others. Now this, on the face of it, appears to be very close to Holism, but it has a clearly fictitious starting point, in which all outcomes are equally probable. The “saving grace” was that if you knew some of the variables you could infer the probabilities of the other contributions. Now, when you think about it, it doesn’t seem likely: starting from a wholly fictitious starting point, why should the inclusion of some reliable data move ALL of the probabilities in the right direction? Clearly such systems and their associated methods would have to be very close to Iterative Numerical Methods, and hence dependent on a convergent starting point for a useful outcome. And, as with such numerical methods, these too needed to be refined and improved until they began to become much more reliable than prior methods.
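The behaviour being claimed can be illustrated with a hand-rolled, three-variable sketch (the network and its probability tables are invented, and the brute-force enumeration below is only for clarity, not how any production system would compute): fixing one variable as evidence shifts the probability distributions of all the others, which is exactly the “infer the rest from what you know” feature described above.

from itertools import product

# A hand-rolled three-variable Bayesian Network with invented probability
# tables, evaluated by brute-force enumeration over every possible "world".
#
#        Cause
#        /    \
#   EffectA  EffectB

p_cause = {True: 0.2, False: 0.8}
p_a_given_cause = {True: 0.7, False: 0.1}   # P(EffectA=True | Cause)
p_b_given_cause = {True: 0.6, False: 0.2}   # P(EffectB=True | Cause)

def joint(c, a, b):
    """P(Cause=c, EffectA=a, EffectB=b) from the network's factorisation."""
    pa = p_a_given_cause[c] if a else 1 - p_a_given_cause[c]
    pb = p_b_given_cause[c] if b else 1 - p_b_given_cause[c]
    return p_cause[c] * pa * pb

def query(target, evidence):
    """P(target=True | evidence), by summing the joint over all worlds."""
    match = unmatch = 0.0
    for c, a, b in product([True, False], repeat=3):
        world = {"Cause": c, "EffectA": a, "EffectB": b}
        if any(world[k] != v for k, v in evidence.items()):
            continue
        if world[target]:
            match += joint(c, a, b)
        else:
            unmatch += joint(c, a, b)
    return match / (match + unmatch)

prior_b = query("EffectB", {})
cause_given_a = query("Cause", {"EffectA": True})
b_given_a = query("EffectB", {"EffectA": True})
print(f"P(EffectB) with no evidence = {prior_b:.3f}")         # 0.280
# Observing one variable alters the distributions of all the others:
print(f"P(Cause   | EffectA=True)   = {cause_given_a:.3f}")   # about 0.636
print(f"P(EffectB | EffectA=True)   = {b_given_a:.3f}")       # about 0.455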

Even so, it is clear that such methods are full of dangers. How do you know whether you are considering all the necessary factors? Gradually researchers began to produce models in certain areas which were much more reliable. The key was to build them so that new data could be regularly included, which modified the included probability distributions.

But, as it did not deal with answering the question, “Why?”, but only the question, “How?”, it was still dependent on the old methodology, even if it was overlaid with Bayesian add-ons.

Indeed, to facilitate such programs, new languages, specially designed to help construct such self-modifying models, began to be developed.
To give some idea of their powers AND limitations, it is worth listing the principles on which they were based (a minimal sketch of how they combine follows the list).

1. Equal likeliness of all contributing factors must be the starting point.
2. Algorithms must be very general.
3. New data must be straightforwardly included to update the probabilities.
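A minimal sketch of these three principles working together, with invented hypotheses and numbers (plain code, not any particular one of the new languages): every candidate cause starts equally likely, one very general update rule is applied throughout, and each new observation simply re-weights the probabilities.

# Invented for illustration: how often each candidate cause would produce
# the observed symptom.
hypotheses = {"cause_X": 0.8, "cause_Y": 0.5, "cause_Z": 0.1}

# Principle 1: equal likeliness of all contributing factors as the start.
beliefs = {h: 1.0 / len(hypotheses) for h in hypotheses}

def update(beliefs, symptom_seen):
    """Principle 2: one very general rule: weight by likelihood, renormalise."""
    unnormalised = {}
    for h, prior in beliefs.items():
        likelihood = hypotheses[h] if symptom_seen else 1 - hypotheses[h]
        unnormalised[h] = prior * likelihood
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

# Principle 3: new data is straightforwardly folded in as it arrives.
for observation in [True, True, False, True]:
    beliefs = update(beliefs, observation)

print({h: round(p, 3) for h, p in beliefs.items()})
# cause_X, whose predictions best match the data, ends up most probable,
# without the model ever saying anything about WHY it produces the symptom.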

Now, this approach is clearly the ONLY way that the usual pluralistic conceptions and analyses can be used in a holistic World. The basis is still Formal Logic, but real measured data can modify an initial model in which everything affects everything else, but as to how they do it, there are NO revelations. The ever-new data merely adjusts less and less arbitrary figures, and, by this alone, the model improves. The model learns nothing concrete about relations, but improves as a predictor, based on regularly updated data.

Nevertheless, there could be no guarantees. It is a pragmatic method of improvement and NOT a scientific one.

Also, experience has shown that the gathering of new data can be altogether too narrow, and the seriousness with which it is collected much too slight, for the methods always to be depended upon. Behind the robot diagnostic program, a very experienced “doctor” would certainly come in handy!
There is also the problem of “current ideas” guiding the actions of the data collectors, and hence “tending” to confirm those current ideas. You cannot discover a new cause if you are not measuring for it, can you? The method is NOT a genuinely holistic one!

And the most important omission has to be that Time and Trajectory are not part of the schemas. Miller’s famous Experiment was indeed holistic, and produced amino acids from a modelled holistic system, but it too lacked Time and Trajectory information. This author’s (Jim Schofield) redesign of Miller’s Experiment has the same core set-up as the original, but surrounded by a time-triggered set of diagnostic sub-experiments, regularly sampling what was present at crucial positions throughout the set-up and throughout the whole time that it was running. The results would then be laid out on a series of related timelines, showing WHAT was present and WHEN. The relationships over time and place would then be available, and sequences and even cycles of processes could be revealed and interpreted.
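Purely as an illustration of the kind of record such a redesign implies (the sampling points, intervals and species named below are invented, and are not taken from the published design on the SHAPE Journal Blog), the sketch simply tags each diagnostic sample with WHERE and WHEN, and lays the results out as related timelines:

from dataclasses import dataclass, field

@dataclass
class Sample:
    location: str                 # e.g. "flask", "condenser", "trap" (hypothetical)
    time_hours: float
    species_found: list[str] = field(default_factory=list)

# One timeline per sampling position in the apparatus (names are invented).
timelines: dict[str, list[Sample]] = {"flask": [], "condenser": [], "trap": []}

def record(location: str, time_hours: float, species_found: list[str]) -> None:
    """Append one diagnostic sub-experiment's result to that location's timeline."""
    timelines[location].append(Sample(location, time_hours, species_found))

# Invented entries, standing in for real analyses taken at regular intervals:
record("flask", 24.0, ["HCN", "formaldehyde"])
record("flask", 48.0, ["glycine"])
record("trap", 48.0, ["glycine", "alanine"])

# Laying the timelines side by side shows WHAT was present, WHERE and WHEN,
# so that sequences (and possibly cycles) of processes can be read off.
for location, samples in timelines.items():
    history = ", ".join(f"{s.time_hours}h: {s.species_found}" for s in samples)
    print(f"{location:9s} -> {history}")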

This latest version of AI, based on Neural Networks but involving Bayesian principles, is half-cock: though it will produce ever better simulation-type computer programs, it is still immovably grounded on pluralist principles, and so will be limited in its applications, and, most important of all, will REDUCE the amount of real analysis and explanation to the Lowest Common Denominator of “the computer says that…”