
18 May, 2017

Issue 49: The Tree Metaphor - Modelling Human Knowledge






This issue looks at various analogies for the evolution of human knowledge, and how they might reveal where we have gone wrong.

Can we establish a sound metaphor for how we usually build Human Knowledge - a Model or Pattern for how we do it now, and perhaps how we should do it in the future?

The purpose of such an idea is that it delivers an overt Model for how we have done it heretofore, which, at the same time, gives us a basic framework to enable us both to criticise and to improve upon it, independently of the content that we pack into it. Put another way, we are attempting to make clear the philosophical bases for this vital process, which are, usually, not only implicit and undeclared, but also rarely even questioned.

30 June, 2015

A New "Constructivist" Experiment

Long exposure of a double pendulum with an LED at the end of each arm

Here is a suggestion for a Couder-like constructivist experiment, which attempts to produce similarly stable effects as results. The key elements of such experiments have to be the tuneable interactions of mutually affecting oscillations, underpinned by a constantly applied vibration, which interacts significantly, but is also the crucial energy supply that keeps the resultant system going.

It is hoped that a final, overall rotation will again deliver the required coup de grâce – Quantization!

To deliver the appropriate conditions, let us choose a Compound Pendulum as our starting point, for even without any of the intended extra additions, these produce fascinating and complex behaviours. So, if it is successively modified and appropriately "tuned", it should take us through a range of stabilities, and even to a final extra example of quantization – entirely as a result of oscillations and rotations alone.




But the usual form of a compound pendulum will require several additions in order to tune it into the possible stable forms that we are seeking. The diagram below shows the necessary construction.





Work on this new form would commence with changes to the structure of the pendulum and to the overall vertical vibration, in an attempt to achieve a "Walker-like" stability. Once this has been achieved, we could add a horizontal rotation via the included motor, and see the effects achieved at different rates.

If the discoveries of the Couder "Walker" experiments are applicable generally in appropriate circumstances – when driven by an overall vibration that is both a key component and a continuous source of energy – we should expect to get both the establishment of a stable system, and, even more importantly, clearly quantized results from the various added horizontal rotations.
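For anyone tempted to explore the idea numerically before building anything, a minimal simulation sketch is given below. It models only the simplest relative of the proposed apparatus – a single pendulum whose pivot is vibrated vertically – and all parameter values, and the use of scipy, are illustrative assumptions rather than part of the proposal above.

```python
# Minimal numerical sketch (illustrative only, not the full compound pendulum
# with added motorised rotation proposed above): a single pendulum whose pivot
# is vibrated vertically. All parameter values are assumptions for exploration.
import numpy as np
from scipy.integrate import solve_ivp

g = 9.81            # gravity, m/s^2
L = 0.25            # pendulum arm length, m
damping = 0.1       # small velocity damping, 1/s
drive_amp = 0.01    # pivot vibration amplitude, m (assumed)
drive_freq = 250.0  # pivot vibration angular frequency, rad/s (assumed)

def rhs(t, y):
    """y = [theta, omega]; theta is measured from the downward vertical."""
    theta, omega = y
    # The vertical vibration of the pivot appears as a modulation of gravity.
    g_eff = g + drive_amp * drive_freq**2 * np.cos(drive_freq * t)
    return [omega, -(g_eff / L) * np.sin(theta) - damping * omega]

# Start near the inverted position, to see whether the applied vibration
# stabilises an otherwise unstable configuration (the "Walker-like" question).
sol = solve_ivp(rhs, (0.0, 10.0), [np.pi - 0.2, 0.0],
                t_eval=np.linspace(0.0, 10.0, 2000), rtol=1e-8)

print("final angle from downward vertical (rad):", float(sol.y[0, -1]))
```

Sweeping drive_amp and drive_freq in such a sketch would indicate which regions of parameter space might be worth exploring on the bench.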

Who fancies having a go at this task?

10 May, 2015

Why Do Models Work?

“The Cognitive Art of Feynman Diagrams” by
Edward Tufte - installation at the Fermilab art gallery

Why Analogistic Models Contain Significant Content

Without a doubt, the actual content of analogistic models has to be the key to their relevance in scientific explanation. For though they are clearly never fully accurate descriptions, and are certainly always less than totally accurate explanations of the phenomena they are used to tackle, they are also never totally arbitrary inventions: they must contain something objective about the modelled situation.

Let us attempt to reveal what their contained objectivity can be.

Now, though we can, and do, insist that they are analogues for what we are investigating, they are not, and could never be, 100% accurate – they do not contain a full set of one-to-one mappings. They are intrinsically similar situations, and they therefore reflect, quite naturally, the common sequences, and the kinds of entities and properties, found throughout Reality.

Perhaps surprisingly though, even artificial analogistic models can be profitably constructed to aid in the understanding of newly addressed phenomena, as long as the constituents involved are taken from concrete evidence of possible components occurring elsewhere in concrete Reality. The method, then, is to combine such initially unrelated elements into a model, expressly to deliver the actually observed properties of the thing that you are attempting to explain. Indeed, even more surprisingly, it is often these kinds of analogistic models that deliver the most profound insights, and they can also demolish false assumptions dramatically. I will mention such a model later in this paper.

So, let us start by looking at a series of valuable examples of various kinds of analogistic models. James Clerk Maxwell's famous model of the Ether (then presumed to fill all of the Empty Space in the Universe) was just such an informed and creative construct. He knew about many kinds of phenomena which he had to explain, and the usual simple (and magical) Ether was just too vague to explain anything adequately for him. So, knowing what he wanted to produce from his model, he brought together (without any evidence) the sorts of constituents that might, if appropriately organised, deliver what he knew was necessary. He adventurously combined "vortices" and "electrical particles" into an analogistic model, and from this he managed to derive his famous equations of electromagnetic radiation.

His model did not by any means reveal the actual form of the Ether, and his constructs didn't exist as such, but his model delivered a great deal more than any of its predecessors, and even more than he designed it to deliver. His resultant Equations were revolutionary. Now, before we explore why such "fictitious" models worked, let us look at some others. Einstein's Space-Time continuum was also an analogistic model. Once again, no one could prove such a thing actually existed, but it did deliver what Einstein knew were properties that needed explanation. His famous Theory of Relativity was based upon this model, and many things that came out of his constructs, in addition to what he consciously put into them, have since been confirmed in Reality.

Even Niels Bohr's original model of the structure of the atom – with a central, positively charged nucleus surrounded by orbiting electrons, in an entity which was mostly empty space – was taken from the planet-moon systems observed in our Solar System. It was not a true description of the atom, but yet another analogistic model.

Once again, it delivered far more than the models that it replaced, and that was again because it contained more real features within its conceived-of forms.

Even later, when confronted with a confusing maze of "fundamental particles", Richard Feynman devised his famous Feynman Diagrams – they were, of course, the most abstract of analogistic models, and delivered what no other models could, namely what was called Quantum Electrodynamics (QED) – the most accurate and useable way of dealing with this amazing Particle Zoo.

And there is, perhaps, the most audacious version of an analogistic model, produced by Yves Couder in his attempt to find a new way of revealing the secrets of the sub atomic world – by modelling it in the Macro World out of unbelievable components. He revolutionised experimental physics by devising and constructing a model entirely out of silicone oil and various vibrations, resonances and crucial recursions. He managed to create his famous "Walkers" entirely from the above – each a kind of self-maintaining entity with properties closely comparable to those within the atom.

Finally, the author of this paper, confronted by the anomalies of the famed Double Slit Experiments, decided to devise a Paving of Empty Space composed of undetectable particles – in fact, mutually orbiting pairs, each consisting of one electron and one positron, which, because of their opposite matter types and electrostatic charges, become undetectable in this joint form. Yet this paving actually fully explained the anomalies of the Double Slit Experiments without any recourse to the flights of fancy delivered by the Copenhagen Interpretation of Quantum Theory, when that is used as the sole dependable source for dealing with all sub atomic phenomena.

All the anomalies fell away! Nothing of idealist philosophy was needed to make sense of what occurred, the new materialistic, analogistic model of Empty Space did it without difficulty. [It was both as analogistic, and as artificial, as Maxwell’s model of the very same thing].

Needless to say, a barrage of criticism followed, from both the mechanical materialists of the old school and the idealists of the new school, with, as their banker, the fact that no such Paving had been physically detected! But, of course, that isn't the point, is it? What is important has to be whether this analogistic model explained a great deal more than anything else could. Now, how can we explain these relative successes, clearly based upon non-existing constructs?

Their value is that they are determined by the features in Reality to be explained – and, initially at least, this can only be achieved by organising what is generally known into a purposely constructed model, built from real features taken from elsewhere into an amalgam which delivers what is required. Such a model will never be the Absolute Truth, but it can be both intelligently and intelligibly constructed to contain more Objective Content – elements, parts or aspects of the Truth – than what it replaces. And in doing so, it makes the actual real phenomenon more understandable; and, by crucially revealing things that were absent previously, it also makes further developments more likely, if only by establishing a whole new kind of model, which gives us a great deal more to consider, with some aspects real, and others intelligent placeholders for what has yet to be revealed. But, why should these analogies even be available? Why should such similar (though possibly also profoundly different) resonances occur in such different contexts? The answers must be contained in whatever it is that is similar in all phenomena, and hence possible everywhere in one way or another.

We really have to address the question, "What is common throughout all possible phenomena that always guarantees that such analogies will certainly exist?" It must be that they are all – every single one of them – always produced as the result of many different, simultaneous factors, which will always come together into overall situations of Stability (if only temporary). These emerge from the possible results of such complexities, when the factors present are NOT separable, eternal laws, but, on the contrary, mutually interacting and modifying arrangements, which finally settle into a self-maintaining overall stability.

Clearly, features will become present which are a result of this higher level of stability, and hence of how such mutually modifying factors arrive at such a state. Such properties will be true, at least at the abstract level, of all such systems. Indeed, when you think about it, it is likely that all phenomena are like this! The search for fundamental particles and their basic, eternal laws is therefore a pluralist myth. No matter which "law" you choose to address, it is certain to be the stability reached by multiple factors at an even lower level! The aims of pluralist Sub Atomic Physics are impossible to achieve with the assumptions and principles that underlie the entire area of study.

The Principle of Reductionism is clearly based entirely upon Plurality, and hence assumes that Analysis will always be possible, all the way down to its targeted Fundamental Particles. These suggested analogistic commonalities seem to indicate that very different relations could be expected to arrive at such stabilities in very similar ways. Such things as spins and orbits are likely to occur at all levels, as are vibrations, resonances and recursions. It is very likely that this is what we are settling upon with our analogistic models: not ultimately Absolute Truths, but commonly occurring natural resonances, which we term Common Objective Contents.

This article has been taken from the new special issue of the SHAPE Journal, Analogistic Models III.


New Special Issue: Analogistic Models III



The last instalment of our Analogistic Models series of issues.
 

04 April, 2015

New Special Issue: Analogistic Models II



Clearly, the establishment of a comprehensive basis for a whole new standpoint and methodology in Science was not, and could not be, achieved in the few papers of Analogistic Models I. Indeed, such a demanding and consistent basis will take a great deal of effort, and a considerable amount of time.

However, certain breakthroughs have already been achieved by a number of researchers, some of whom did not fully realise the true import of their contributions. And, indeed, the supertanker that is today's consensus of Pluralistic Science will still take an enormous effort to re-direct into an entirely different, Holistic direction, especially as the much-admired gains of the isolation, simplification and idealisation of the Pluralist approach will be sorely missed in this new and much more difficult realm, in which "Everything affects everything else!"

Some measure of the difficulties involved has been demonstrated by the problems encountered by two pioneers of this approach, namely Charles Darwin and Stanley Miller. In Darwin's case, the evident strong opposition to what he was doing caused him to continue his studies and delay publication of his Origin of Species for decades. Meanwhile, Miller's brilliant experiment, revealing the natural creation of amino acids in his constructed emulation of the processes taking place in the primitive atmosphere and seas of the early Earth, had to be abandoned, as no viable Holistic methodology was available to take things further.

To finally address Reality, in its true complexity, recursivity and evolution, involves a substantial step into much more difficult territory, and, crucially, a return of the currently universally dominant quantitative relations to their correct and subordinate position in Theory, and the re-instatement of Explanatory Models (based upon analogy) as the primary theoretical achievements of Science.

So clearly, the task cannot possibly involve a quick fix. Indeed, based on the discoveries of the philosopher Friedrich Hegel, the development of theory is NOT an amassing of many eternal Natural Laws, but the continuing development of a whole, infinite series of models, validated by their increased Objective Content. This second issue in the series on Analogistic Models attempts to clarify this objective.

25 March, 2015

Thinking Randomly


The Inevitable Dead End of Pluralist Science

I am about to tackle a whole special issue of New Scientist (3012) on Random Chance (contributed by several different science writers). Now, I have, for some time, rejected purely formal investigations into physical phenomena, as being solely about Abstract Form, and, therefore, concentrating only upon description, rather than the actual physical causes of the natural happenings. Indeed, I have demonstrated that the seeming coherence of formal equations is more about the consistency in the parallel World of Pure Form alone (which I term Ideality) rather than being about the true nature of, and causes within, Reality itself.

Clearly, there are, indeed, some real situations in which physical reasons for a random-like outcome are also valid. But both kinds came to be seen as the same – when they are most certainly NOT! Indeed, the two get swapped around completely: in the Copenhagen Interpretation of Quantum Theory, randomness is promoted from being a mere consequence of multiple opposing causes to itself becoming the cause!

Thus, as this may be a massive undertaking, I feel it necessary to prepare for all the usual tricks. And, as I have learned from similar situations in the past, I prepare by consciously laying out my current stance, so that I will be adequately equipped to register precisely where the two approaches begin to diverge significantly. In this preparation, we must, of course, start where the two approaches actually coincide – where physical reasons result in a situation best seen as completely random – and then, thereafter, trace out the diverging paths (and consequent assumptions) from that mutually agreed situation.

We must also, of course, make clear the differing assumptions and principles of the two approaches. They are categorised by the very different principles of Holism and Plurality, which turn out to be the grounds for the two alternative approaches.

Holism sees Reality as composed of multiple, mutually-affecting factors, while Plurality sees such factors as separate and eternal Laws.

The distinctions turn out to cause the two alternatives to lead off in entirely different directions, because there is no doubt that such complex situations have contributions from all involved factors. All phenomena, in one way or another, are results of many different factors acting simultaneously.

The crux of the problem is whether that joint causality is merely a SUM of independent, unchanging factors, OR the result of all the acting factors being changed by the presence of all the others, delivering a result which then cannot be explained in the same way at all.

For, the pluralist approach, where phenomena are mere complications of unchanging natural laws, assumes that explanations can be arrived at by extracting each of those involved, one-at-a-time, until all the laws involved have been identified.

From the Holist stance, however, that procedure is seen merely as a pragmatic method of deriving simplified and idealised "laws" via experiment. Holism, in contrast, would have, somehow, to see how each of the various factors involved is modified by its context – that is, by all the other factors – and the same will be true for each and every one of the rest.

Now, the pluralist view does approximate reality very well in carefully isolated, filtered and controlled situations, so, needless-to-say, that approach soon came to dominate our way of doing things. But it could never cope with certain results, the most significant being those occurring in qualitative developments, where wholly new features emerge, and most dramatically in what are termed Emergences (or in another parlance – Revolutions).

Plurality became the norm, for it could be made both predictable and very productive, as long as it was applied in stable situations – whether in naturally stable situations, if available, or in purpose-built, man-made stabilities.

Now, this coloured the whole level of discoveries arrived at with such a stance and method, and also unavoidably caused multiple areas of study to be separated out from one another, simply because causal connections between them were unavailable.




Reality-as-is was literally constantly being sub-divided into many specialisms, or even disciplines, which could not be made to cohere in any simple approach. And, as the required “missing-links” just proliferated, the gulfs between these separate, created areas of study grew ever larger – even though the belief was that, in time, these inexplicable gaps would finally be filled with explanations of the same type as within the known and explained areas, but not, as yet, discovered.

It amounted to cases of investigators “painting themselves into corners”, by following their primary principle – Plurality.

Now, such a trajectory was historically unavoidable, because the gains of such pluralistic science, within their defined and maintained contexts, were so successful and so readily employable.

Naturally and pragmatically, Mankind moved as swiftly as possible, wherever it could, and the flowering of this effective, pluralistic stance was, of course, Technology. But such approaches could never address the really big questions. And, of course, both understanding and development were replaced by increasing complexity and specialisation.

So, quite clearly, such a mechanistic conception could never cope with the emergence of the entirely new – emergences which were always described as particularly involved complexities, yet never explained as such.

Now, to return to the supposed "common ground": this was interpreted as a complex, multi-factor situation, in which opposing contributions tended to "cancel out", and the result was a situation best described as one in which random chance predominated. And, of course, such could indeed be true in certain situations – for example, in an enclosed volume of mixed gases, you can derive both temperature and pressure by assuming a situation of random movements and collisions.
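As a reminder of that standard result (elementary kinetic theory, stated here as an illustration rather than taken from the original post), for N molecules of mass m moving randomly in a volume V:

\[
P V = \tfrac{1}{3} N m \langle v^{2} \rangle ,
\qquad
\tfrac{1}{2} m \langle v^{2} \rangle = \tfrac{3}{2} k_{B} T
\quad\Longrightarrow\quad
P V = N k_{B} T ,
\]

so both pressure and temperature follow directly from the assumed random movements and collisions.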

But such a result requires a continuing and stable overall situation. If the gases were reacting with one another, and other substances were also present and reacting too, then that stability might not persist for very long.

As always, such an assumption was invariably a simplification and an idealisation of the given situation.

Now, such assumptions were frequently adhered to, as stabilities are the rule in most circumstances, so such ideas can be used with a measure of confidence. BUT, whenever any development began to occur, such a stance could never deliver any reasons why things behaved as they did.

In Sub Atomic Physics, for example, as investigations penetrated ever deeper into Reality, problems began to occur with increasing frequency. The discovery of the Quantum became the final straw! 



For, in that area, phenomena occurred which blew apart prior fundamental assumptions about the nature of Reality at that level: for example, what had always been considered to be fundamental particles could seem to be acting like extended Waves. Yet, with the smallest of interventions, these would immediately revert to acting as particles again. These phenomena were deemed to be beyond explanation, and what was used instead of the usual explanations was the mathematics of random chance – not merely about a given position, but, significantly, about the probabilities of a particle being found in each of a comprehensive set of positions (presumably relating to a wave) when it reverted to particle form again.

The formalisms of waves were used in this very different and illegitimate context. At the same time, physical explanations were dumped completely. The only reliable means of predicting things was the probability mathematics originally developed for Randomness.

Now, the question arises - what do the Copenhagenists think they are doing with this new position?

First, they have NO explanations as to why it happens the way that it does.

Second, they deal only in Forms – equations and relations! Do they consider these to be causes?

Well, maybe you have guessed their position. They have no adequate answers, and hide behind the known physically-existing random situations (as described earlier), and say that their equations have exactly the same validity as those. Not so! For, this researcher has managed to explain all the anomalies of the Double Slit Experiments merely by adding a universal substrate into the situation. You don’t (as the Copenhagenists insist) need their idealist “interpretation” at all.

But, nevertheless, a great deal more still remains to be done. If the real problem is the incorrect Principle of Plurality, and the only possible alternative is the Principle of Holism, then we are still a very long way from a comprehensive explanation of natural phenomena from this stance.

The whole edifice of current Science (and not just the current problems in Sub Atomic Physics) is undoubtedly based upon flawed assumptions - not only in how things are explained, but also in Method. It is exactly how we do all our experiments!

To change track will be an enormous undertaking, and a look at the current state of "a Holist Science" reveals that almost nothing has yet been achieved. Honourable exceptions, such as the work of Darwin, Wallace and Miller, are simply not enough to fully re-establish Science upon a coherent, consistent and comprehensive basis, for that will be an enormous task. But there is a way!

First, we carry on with the current pluralist methods, BUT no longer take the pluralist principle as correctly describing the nature of Reality. It becomes a pragmatic method only. We can use it both to reveal the contents of a situation and to organise productive use, BUT we must abandon the myth of eternal Natural Laws revealed by those methods. We must accept that what we extract in pluralist experiments are NOT "Laws", but approximations confined to the particular imposed conditions in which they were extracted.

Second, we must begin to devise and implement a new paradigm for scientific experiments. Stanley Miller’s famous experiments into the early stages in the Origin of Life, and Yves Couder’s “Walkers” experiments, clearly show two valid routes.

But, most important of all, scientists must become competent Philosophers!

Pragmatism and positivist philosophical stances are just too damaging. A clear, consistent and developed Materialism must be returned to, but it can no longer be the old Mechanical version.

We must learn the lessons of Hegel, and actively seek the Dichotomous Pairs that indicate conclusively that we have gone astray, and thereafter, via his dialectical investigations into premises, recast our mistaken assumptions and principles to transcend our own limitations, at each and every guaranteed impasse.

The main problem in Holist Science is the fact of the many mutually-affecting factors in literally all situations in Reality-as-is. Plurality validated the farming of experimental Domains to simplify such situations, and in doing so cut down the affecting contributions either to a single one, or to one that was evidently dominant. So, with repeated experiments upon different target relations, such investigations delivered NOT Reality-as-is, but a simplified and idealised set-up: a recognition of the factors involved, but with each one, in its particular Domain, presented in a version that could be replicated to allow effective use of the idealised form extracted. But the whole series of different, tailor-made Domains, each with one of the factors as its target, also delivered a list of the factors involved.

Pragmatically, each could be used effectively to some required end; but the list, though each entry was idealised, would, as a full set, allow a theorist to attempt an explanation of what was seen in the unfettered and natural Reality-as-is. Though achieved using pluralist methods, and hence with each factor distorted, a Holist theoretician could use that list in a very different way.

This combination of BOTH views had, after all, been the usual way of doing things in the past, before Wave/Particle Duality and the Copenhagen stance brought that to an end.

Yet, of course, the new stance was not the same as before. For now, plurality was just a pragmatic trick, and the real Science was in Holist explanations.

The pluralist mistake of assuming eternal Natural Laws, which produced all phenomena merely by different summations, was no longer allowed. And the belief in Natural Laws was replaced by an acceptance that all extracted laws would not only be approximations for limited circumstances, but would also, inevitably, at some point produce contradictory concepts; and that only by unearthing the premises that had produced these, and changing them for something better, would the resulting impasse be transcended, and further theoretical development made possible.

The Holist scientist did not deal in supposedly eternal Natural Laws, but in factors that were indeed affected by their contexts. So, this new breed made the investigation of how factors were changed by their contexts their central task, and was always prepared to directly address major impasses when they arose.
Now, there are different ways of doing this sort of Science. In this theorist's re-design of Miller's Experiment, he realised that the provision of inactive pathways within the apparatus would facilitate certain interactions and their sequencing. So, a series of versions of the experiment was devised with such appropriate inactive channels, which were also equipped with non-intrusive sensors, sampling on a time-based schedule. Then, on a particular run with a given internal channelling, data collected from the various phases could be analysed afterwards, to attempt to discover the contexts prevailing at different times.
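As a rough illustration of that time-based sampling scheme, here is a minimal sketch. The channel names, the read_sensor stub and the chosen intervals are hypothetical, not taken from any actual apparatus.

```python
# Sketch of time-based sampling from non-intrusive sensors on several inactive
# channels, for later offline analysis. Everything here (channel names, the
# read_sensor placeholder, the chosen intervals) is a hypothetical illustration.
import csv
import time

CHANNELS = ["channel_A", "channel_B", "channel_C"]  # hypothetical inactive pathways
SAMPLE_INTERVAL_S = 60.0                            # assumed sampling period
RUN_DURATION_S = 6 * 60 * 60                        # assumed length of one run

def read_sensor(channel: str) -> float:
    """Placeholder for a real data-acquisition call on the given channel."""
    return 0.0  # replace with an actual sensor reading

def log_run(path: str) -> None:
    """Sample every channel on a fixed schedule and store the results as CSV."""
    start = time.monotonic()
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["elapsed_s"] + CHANNELS)
        while (elapsed := time.monotonic() - start) < RUN_DURATION_S:
            writer.writerow([round(elapsed, 1)] + [read_sensor(c) for c in CHANNELS])
            f.flush()  # keep the record safe if a long run is interrupted
            time.sleep(SAMPLE_INTERVAL_S)

if __name__ == "__main__":
    log_run("miller_run_01.csv")
```

Each run, with its own internal channelling, would then produce a time-stamped record that could be analysed afterwards for the contexts prevailing at different phases, as described above.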


Something similar is currently done by weather forecasters in their simulation models, linked to large numbers of sensors at weather stations. But the crucial difference in the new Miller's Experiment lay in the Control that was in the hands of the experimenters. That was never available to the weather forecaster, who had to take what he was given; the new holistic experiments allowed something greatly superior to the Thresholds of those simulations, where one "law" is sidelined to bring in another. The weather simulations with thresholds were a crude invention, attempting to deal with a real Holist situation pluralistically.

In the described Miller-type experiments, the investigation would be constantly re-designed, both in the inactive structures provided and in the necessary kinds of sensors, to maximise the information and allow progress to be made. Clearly, such control is never available to weather forecasters.

11 July, 2014

New Special Issue: Analogistic Models I




An Important Breakthrough in Theoretical Science?


Those who have attempted to follow (with understandable puzzlement) the extended search for a new standpoint and method for Science based upon Holism, rather than Plurality, may be pleased (or merely relieved) to read this new collection of papers on Analogistic Modelling.

Though such an alternative has been partially grasped for some time now, it was Margaret Morrison’s article in Physics World on “Fictional Models” that focussed the effort to formulate this absolutely essential change in Science, concerned with Modelling and Truth.

It wasn't that Morrison "saw the light"; rather, she delivered her own variations upon the same universally accepted premises, and this made it absolutely clear that the usual fragments of criticism were simply not up to the now urgent task. This theorist had to "pull up his socks" or "bite the bullet", or whatever is the apt description for a root-and-branch critique coupled with a thoroughly thought-through alternative.

It would clearly be a major undertaking, but the successes of the past decade or so are now surely sufficient to begin the construction of new premises and assumptions, to replace those that have both taken us this far and have now, finally, led us damagingly astray.

After a series of regular publications over the last five years and a whole spectrum of contributions by others, the long (seemingly interminable) gestation period had to be brought to the conclusion of an actual birth!

The collection is simply called Analogistic Models, and will initially be published as a series of three Special Issues of the SHAPE Journal on the Internet.

The contents will be:-

Analogistic Models I
Introduction
Idealism or Materialism?
How Do We Find Truth?
A Model of Empty Space
The Electromagnetic Effects of the Neutritron

Analogistic Models II
Introduction
The Bases for Plurality & Holism
Mutually Orbiting Particles & the Methodology of Holistic Science
A Critique of Margaret Morrison’s “Fictional Models”

Analogistic Models III
The Crucial Crossroads
Models and Truth
Why Analogistic Models contain Significant Content!
Hierarchical Levels of Stability and their Inevitable Dissolutions


Now, these contributions are current research, so they both enlarge and deepen day by day, and hence they are by no means final and definitive descriptions.

More is most certainly in the offing!

Read the Issue.