26 May, 2015

Capitalism’s Major Flaw


Profit!

The article in New Scientist (3022) entitled “Capitalism’s Hidden Web of Power” questioned the current analysis of the 2008 slump. Yet the slump itself was more or less taken for granted, and the only important questions to be addressed were deemed to be about how to police irresponsible companies, so that such things could be “nipped in the bud”, and retrieved before too much damage was done. The idea was that, with sufficient information, an impending crisis could be avoided. The recession would not be so deep, and the recovery would be much swifter!

These correcting analysts insisted that the criteria for assessing risk were inadequate. Various institutions and involved researchers now vied to add the component that had obviously been excluded so far, which they termed “Complexity”. [Not the word’s many other meanings: what was meant was merely how complicated many companies had become.] But, of course, they were wrong too!

Complexity, as it was now being revealed, is there on purpose - to HIDE things!

But, it isn’t the methods used in organisation that are the real reason for this problem, but exactly what those methods are always trying to hide!

The reason for these desperate swoops is the nature of Capitalism itself! And, of course, this has to be hidden at all costs. Tell me, are these regulators going to rearrange things so that the real causes are plainly evident? Of course they aren’t.

No one really looks closely at how Capitalism works. It is treated as a natural given, and all efforts are concentrated upon re-organising the deckchairs on the sinking Titanic!




So, why do slumps occur? For occur they certainly do! How can a system sustain itself in the face of such unavoidable, chronic crises?

If Capitalism, as is claimed, delivers, of itself alone, “a better life for all”, why is it so frail, and so regularly (and catastrophically) compromised? Can you really blame it all on Complexity? Of course not!

It is much more likely that it is inherently and fatally flawed, and simply cannot avoid these major calamities. Is that not inevitable, given how Capitalism actually works? While the so-called experts are studying Complexity, no one is studying modern Capitalism as an economic system. And, no one is addressing its regular crises and inevitable, ultimate collapse.

What is the guiding Principle of Capitalism? It is the acquiring of PROFIT! And, what precisely is that? Is it like wages for work done? NO, it certainly isn’t! It involves having wealth, and investing some of it to get a nice regular addition to it! It is “Much getting More” without really doing anything for it.

“But”, I hear the cry, “we are risking our wealth!”

No, they aren’t! The Stock Exchange guarantees that. Only the uninformed small investors will be wiped out! The big boys never get ruined. They even make money out of slumps. NOTE: In 2008 a famous British Capitalist was seen in Iceland, as its economy was collapsing, buying up whatever he could get for a song! How well do you think he did out of his hurried trip?

PROFIT is an added overhead above all real costs and payments, to pay both owners and investors an unearned bonus. And, indeed, some of those playing the Stock Exchange don’t even care what their investments finance. For as soon as a profit is to be made they SELL!


  

Now, you may well ask, “How on earth does it ever work (between the slumps of course)?”

It is because there is always the promise of unearned profit literally forever.

That keeps moneyed people investing, and others setting up companies. But, the values generated by such activities are never real values. So-called confidence inflates expectations, and hence Market Values, always above Reality. Yes, always!

Indeed, within Capitalism, inflation is not only inevitable, it is actually essential, for it helps the company owners in two different ways. First, it decreases the current value of the wages they pay their workers, and, second, it also decreases the real value of any capital loans that they must pay back. [Just imagine what a mess they would be in when Deflation is in charge – the value of borrowed loans increases, and the value of their workers’ wages increases too.]
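
To make the arithmetic of that two-way benefit concrete, here is a minimal sketch (in Python, with purely illustrative figures of my own choosing – no real data) of how a fixed nominal wage and a fixed nominal debt behave in real terms under steady inflation and steady deflation:

```python
def real_value(nominal, annual_rate, years):
    """Deflate a fixed nominal sum by a constant annual price-change rate."""
    return nominal / ((1 + annual_rate) ** years)

wage, debt = 20_000, 1_000_000  # illustrative nominal figures only

for rate, label in ((0.03, "inflation"), (-0.03, "deflation")):
    print(f"After 10 years of 3% {label}:")
    print(f"  real value of the fixed wage: {real_value(wage, rate, 10):,.0f}")
    print(f"  real value of the fixed debt: {real_value(debt, rate, 10):,.0f}")
```

Under inflation both the wage bill and the debt shrink in real terms – to the owner’s benefit; under deflation both grow, which is exactly the mess described above.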

Capitalism is a system for owners, and is built upon Credit in its every corner.

And, of course, the mismatch between Reality and inflated values occasionally gets revealed, and the whole edifice begins to crumble.

And who then is made to pay?

Who is still paying for 2008 today?

17 May, 2015

Vortices





Diagrams taken from a forthcoming issue - a new theory of the atom.


Part I of The Atom and the Substrate is now available.




10 May, 2015

Why Do Models Work?

“The Cognitive Art of Feynman Diagrams” by
Edward Tufte - installation at the Fermilab art gallery

Why Analogistic Models Contain Significant Content

Without a doubt the actual content of analogistic models has to be the key to their relevance in scientific explanation. For though they are clearly never fully accurate descriptions, and are certainly always less than totally accurate explanations of the phenomena they are used to tackle, they are also never totally arbitrary inventions: they must contain something objective about the modelled situation.

Let us attempt to reveal what their contained objectivity can be.

Now, though we can, and do, insist that they are analogues for what we are investigating, they are not, and could never be, 100% accurate – containing a full set of one-to-one mappings. They are, rather, intrinsically similar situations, and they therefore quite naturally reflect the common sequences, and the kinds of entities and properties, found throughout Reality.

Perhaps surprisingly though, even artificial analogistic models can also be profitably constructed to aid in the understanding of newly addressed phenomena, as long as the constituents involved are taken from concrete evidence of possible components occurring elsewhere in concrete Reality. The method then is to combine such initially unrelated elements into a model, expressly to deliver the actually noticed properties of the thing that you are attempting to explain. Indeed, even more surprisingly, it is often these kinds of analogistic models that deliver the most profound insights, and they can also demolish false assumptions dramatically. I will mention just such a model later in this paper.

So, let us start by looking at a series of valuable examples of various kinds of analogistic models. James Clerk Maxwell’s famous model of the Ether (which was then presumed to fill all of the Empty Space in the Universe) was just such an informed and creative construct. He knew about many kinds of phenomena which he had to explain, and the usual simple (and magical) Ether was just too vague to explain anything adequately for him. So, knowing what he wanted to produce from his model, he brought together (without any evidence) the sorts of constituents that might, if appropriately organised, deliver what he knew was necessary. He adventurously combined “vortices” and “electrical particles” into an analogistic model, and from this he managed to deliver his famous equations of electromagnetic radiation.

His model did not by any means reveal the actual form of the Ether, and his constructs didn’t exist as such, but his model delivered a great deal more than any of its predecessors, and even more than he designed it to deliver. His resultant Equations were revolutionary. Now, before we explore why such “fictitious” models worked, let us look at some others. Einstein’s Space-Time continuum was also an analogistic model. Once again, no one could prove such a thing actually existed, but it did deliver what Einstein knew were properties that needed explanation. His famous Theory of Relativity was based upon this model, and many things which came out of his constructs, in addition to what he consciously put into them, have since been confirmed in Reality.

Even Niels Bohr’s original model of the structure of the atom – with a central positively charged nucleus, surrounded by orbiting electrons, in an entity which was mostly empty space – was taken from the planet-moon systems observed in our Solar System. It was not a true description of the atom, but yet another analogistic model.

Once again, it delivered far more than the models that it replaced, and that was again because it contained more real features within its conceived-of forms.

Even later, when confronted with a confusing maze of “fundamental particles”, Richard Feynman devised his famous Feynman Diagrams – they were, of course, the most abstract of analogistic models, and delivered what no other models could, namely what was called Quantum Electrodynamics (QED) – the most accurate and useable way of dealing with this amazing Particle Zoo.

And there is, perhaps, the most audacious version of an analogistic model, produced by Yves Couder in his attempt to find a new way of revealing the secrets of the sub atomic world, by modelling it in the Macro World out of unbelievable components. He revolutionised experimental physics by devising and constructing a model entirely out of silicone oil and various vibrations, resonances and crucial recursions. From these alone he managed to create his famous “Walkers” – each a kind of self-maintaining entity with properties closely comparable to those within the atom.

Finally, the author of this paper, confronted by the anomalies of the famed Double Slit Experiments, decided to devise an undetectable Paving of Empty Space composed of undetectable particles – in fact mutually orbiting pairs, each consisting of one electron and one positron, which, because of their opposite matter types and electrostatic charges, became undetectable in this joint form. Yet this paving actually fully explained the anomalies of the Double Slit Experiments without any recourse to the flights of fancy delivered by the Copenhagen Interpretation of Quantum Theory, when that is used as the sole dependable source for dealing with all sub atomic phenomena.

All the anomalies fell away! Nothing of idealist philosophy was needed to make sense of what occurred, the new materialistic, analogistic model of Empty Space did it without difficulty. [It was both as analogistic, and as artificial, as Maxwell’s model of the very same thing].

Needless to say, a barrage of criticism followed, whether from the mechanical materialists of the old school or from the idealists of the new school, with, as their banker, the fact that no such Paving had been physically detected! But, of course, that isn’t the point, is it? What is important has to be whether this analogistic model explained a great deal more than anything else could. Now, how can we explain these relative successes, clearly based upon non-existing constructs?

Their value is that they are determined by the features in Reality to be explained – and, initially at least, this can only be achieved by organising what is generally known into a purposely constructed model, using real features from elsewhere, into an amalgam which delivers what is required. Such a model will never be the Absolute Truth, but it can be both intelligently and intelligibly constructed to contain more Objective Content – elements, parts or aspects of the Truth – than what it replaces. And in doing so, it makes the actual real phenomenon more understandable; and, by crucially revealing things that were absent previously, it makes further developments more likely, if only by establishing a whole new kind of model, which gives us a great deal more to consider – some aspects real, others intelligent placeholders for what has yet to be revealed. But, why should these analogies even be available? Why should such similar (though possibly also profoundly different) resonances occur in such different contexts? The answers must be contained in whatever it is that is similar in all phenomena, and hence possible everywhere in one way or another.

We really have to address the question, “What is common throughout all possible phenomena that always guarantees that such analogies will certainly exist?” It must be that they are all – every single one of them – produced as the result of many different, simultaneous factors, which will always come together into overall situations of Stability (if only temporary). For the factors present are NOT separable, eternal laws, but, on the contrary, mutually interacting and modifying arrangements, which will finally settle into a self-maintaining overall stability.
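
As a deliberately crude sketch of that idea (a toy construction of my own, in Python – not a claim about any real physical system), consider two mutually modifying factors, each of whose next value depends on the other, which nonetheless settle into a joint, self-maintaining state:

```python
def settle(x=1.0, y=1.0, steps=200):
    """Iterate two mutually modifying factors until they reach
    a self-maintaining joint state (a fixed point of the update)."""
    for _ in range(steps):
        # each factor's next value depends on BOTH current values
        x, y = 0.5 * x + 0.2 * y + 1.0, 0.3 * x + 0.4 * y + 2.0
    return x, y

x, y = settle()
print(f"stable joint state: x = {x:.4f}, y = {y:.4f}")  # ~ (4.1667, 5.4167)
```

Neither factor obeys a separate “law” of its own: the stability is a property of the pair taken together – which is precisely the point being made here.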

Clearly, features will become present which are a result of this higher level of stability, and hence of how such mutually modifying factors arrive at such a state. Such properties will be true, at least at the abstract level, of all such systems. Indeed, when you think about it, it is likely that all phenomena are such! The search for fundamental particles and their basic eternal laws is therefore a pluralist myth. No matter which “law” you choose to address, it is certain to be the stability reached by multiple factors at an even lower level! The aims of pluralist Sub Atomic Physics are impossible to achieve with the assumptions and principles that underlie the entire area of study.

The Principle of Reductionism is clearly based entirely upon Plurality, and hence assumes that Analysis will always be possible, all the way down to its targeted Fundamental Particles. These suggested analogistic commonalities seem to indicate that very different relations could be expected to arrive at such stabilities in very similar ways. Such things as spins and orbits are likely to occur at all levels, as are things like vibrations, resonances and recursions. It is very likely that this is what we are settling upon with our analogistic models: not ultimately Absolute Truths, but commonly occurring natural resonances, which we term Common Objective Contents.

This article has been taken from the new special issue of the Shape journal Analogistic Models III


New Special Issue: Analogistic Models III



The last instalment of our Analogistic Models series of issues.
 

06 May, 2015

Singularities Suck!



What is a Singularity?

Let us take Zeno’s Achilles and the Tortoise Paradox to investigate. Achilles and the Tortoise are to have a race. Achilles gives the very slow Tortoise a head start, and lets it move out in front, while Achilles waits, confident of his vastly superior speed. Finally, he sets off after the Tortoise. But, by the time he reaches the place where the Tortoise used to be, some time will have elapsed, so the Tortoise will no longer be there: it will have moved on. Achilles again chases the Tortoise, but when he reaches the place where the Tortoise used to be, some time will have elapsed, and the Tortoise will have moved on. Repeating this line of reasoning, it is clear that it is an infinite repeating cycle. Using this algorithm Achilles will traverse an infinite number of iterations without ever reaching the Tortoise!
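
To see exactly where the infinity lives (a standard piece of arithmetic, with symbols of my own choosing): if Achilles runs at speed $v_A$ and the Tortoise at $v_T < v_A$, with an initial head start $d_0$, then each cycle multiplies the remaining gap by $r = v_T / v_A < 1$:

$$d_n = d_0\, r^n, \qquad \sum_{n=0}^{\infty} d_n = \frac{d_0}{1 - r} < \infty.$$

So the algorithm generates infinitely many iterations, yet they cover only a finite distance: the never-ending process lies in the method of description, not in the race itself.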

Now, the reasoning is flawless, but it isn’t real! It really does produce an infinite, never-ending process.

This is a Singularity!

It is what we call an “ill-formed algorithm”. And such algorithms are very frequently used, because they do, for a time, take us ever closer to our sought-for solution. But, they never end, and we have to include a get-out clause to terminate the infinite process.

An example in our race algorithm above would be something like – “when the gap between the runners drops below one inch (say), terminate the process immediately”. The algorithm is one example of how we simplify and idealise situations in order to solve on-going problems. It has long been our primary method, and still is to this day.
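
Here is a minimal sketch of that very algorithm (in Python, with speeds, head start and tolerance chosen purely for illustration), with just such a get-out clause bolted on:

```python
def chase(achilles_speed=10.0, tortoise_speed=1.0, head_start=100.0,
          tolerance=1.0 / 12):  # get-out clause: one inch, in feet
    """Zeno's ill-formed algorithm: each step, Achilles runs to where
    the Tortoise WAS; without the tolerance test it would never end."""
    achilles, tortoise = 0.0, head_start
    steps = 0
    while tortoise - achilles >= tolerance:  # the get-out clause
        gap = tortoise - achilles
        achilles += gap                                    # run to the Tortoise's old spot
        tortoise += gap * tortoise_speed / achilles_speed  # it has moved on
        steps += 1
    return steps, tortoise - achilles

steps, gap = chase()
print(f"Terminated after {steps} steps, with a residual gap of {gap:.4f}")
```

Delete the tolerance test and the loop cycles forever, exactly as the paradox describes; keep it, and you stop with a result as accurate as you care to demand – which is how Simulation actually proceeds.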

For, such forms can press you ever closer to the solution you seek. By terminating at some finite stage in the cycling process, you can be left with a relatively accurate set of results. Indeed, there are many real world situations which have to be addressed in such ways, as there are no others. For example, all weather forecasting programs are entirely of this type. The whole method we call Simulation is similarly full of these kinds of methods and terminators.

But, there is even more to it than even that. Some extracted relations, usually encapsulated into equations termed Natural Laws (because they look eternal), also fail in the same way when extrapolated beyond their required conditions of applicability. Indeed, the whole discipline of Mathematics is full of Sinks and Explosions, where they were given the name – Singularities! Get-out-clauses abound!

Now, it should be crystal clear that to transfer such Singularities into reality is obviously a major error.

They do not exist in Reality.

Even the most famous Singularity of all – the Black Hole – must never be taken in the way it seems, for it too arises from the extrapolation of a relation beyond its range of applicability. In all such cases, the old relation reveals its inadequacies; it is no longer even approximately true: it is now just WRONG!

If by any chance you have abandoned physical explanations, and put your entire trust in Formal Equations, you are subsequently in very serious trouble. And, that is the Crisis in Modern Physics: for that is precisely what they have done!

So, how on Earth do they cope? With NO explanatory science to use in such situations they are forced to do TWO things. First, they change the way they use their Formalisms to deal in Probabilities. And, second, they resort to unsubstantiated Speculation to fill the (vast) gaps.

And, the dead ends proliferate all the time. Innumerable significant qualitative changes cannot be handled by their extrapolated formulae, and add to the number of unsolved (and by their methods unsolvable) problems of crucial importance, which are given over to unfounded speculations. Whether it is the Origin of the Universe or any wholly new development in reality. Whether it is the Origin of Life or the End of the Universe. None can be tackled because of this catastrophic retreat!

Don’t you believe me?

What about String Theory, Quantum Loop Gravity, Multiple Universes, Branes and the rest? You may well ask, “Why are they locked into this evidently wrong standpoint and methodology? Can’t they just change over to a sounder basis?"

Believe it or not, the answer is “NO”.

Yet, effective alternatives have been around for 2,500 years (and I am not talking of Ancient Greece). The main alternative to the usual Principle of Plurality (the cornerstone of western thought) was delivered then by The Buddha, who based himself upon the alternative Principle of Holism. Yet, in spite of others following that line, including the great German philosopher Friedrich Hegel, everyone else chased the clearly pragmatically effective Principle of Plurality. Over hundreds of years, innumerable disciplines have been constructed and added to via such means, and literally NONE of them will ever consider abandoning it.

There are those who have embarked upon this better way, but the establishment of Holist Science is still in its infancy.

Meanwhile the fictitious Singularity rules O.K.

01 May, 2015

The "Purest" Drivel!


You Cannot Make a Silk Purse out of a Pig’s Ear!

The article entitled “Quantum Purity” in New Scientist (3016) delivers no revolution!

Indeed, it is a perfect example of “re-arranging the deck-chairs on the sinking Titanic”. It offers NO profound additions to a majorly flawed consensus Theory about the quantum, and certainly presents no kind of alternative.

For the very real problems inherent in modern Sub Atomic Physics are not even recognised as existing, never mind being given any solutions. The “New, Pure Theory” is not even scientific: it explains absolutely nothing, and, at best, points to incomplete treatment of particular key areas, without delivering any solutions.

It does recognise that something is crucially mishandled by the usual “scientific” paradigms, but then proceeds to wrap them up in the same sort of idealist presentations as before! There may be something useful in recognising their two categories, but standing where they do, these theorists are clearly incapable of seeing what is actually causing these two views, and can only offer “insufficient information” as their diagnosis of the problem.

The question is, in a World pre-ordered and limited by a particular mistaken stance, can they ever break out?

The answer, as this article clearly demonstrates, has to be “NO!”

And, to clearly describe that stance we have to go back long before Copenhagen and even Relativity redirected Physics, and be clear on exactly what was, and still is, being studied in Science, and why that perspective leads them astray.

For, it definitely isn’t Reality-as-is that is addressed - that would be much too messy and unintelligible.

What Mankind found that they could do, however, was carefully choose an area to study, then isolate it from everything else, and begin to selectively modify it – removing confusing factors, and controlling others, until, in an extremely well-farmed situation, some quantitative relation would be clearly displayed, and could, therefore, be extracted.
This is what scientists strive to achieve, and generally succeed in doing.

But, it most certainly isn’t unfettered Reality, but, on the contrary, it is always an extremely well-farmed and different situation, with ITS consequently evident relation.

Now, let me be clear: all of this is eminently reasonable, especially if your determining objective is to USE that relation in some purposive way!

BUT, it is what is, thereafter, assumed about Reality that is the real and misleading problem.
Is the obtained relation an eternal Truth about Reality, always and everywhere? Is it a Natural Law?

The answer has to be, “NO!”

It is a “law” of that precise and purposely limited context, from which it was extracted. And, the unopposable proof of this is revealed when you actually USE that “law”. For, you have to re-construct the exact same Domain that was created to deliver it. Otherwise, it just doesn’t work!

All of Technology proves this in every one of its applications. And, to get those vital conditions correct is what Engineers do: it is their valuable skill, especially when it comes to commercial production. So clearly, even the “technological” route is an entirely reasonable thing to do.

But, that is NOT the sole aim of the scientist.

Indeed, scientists have to conceive of how to discover it, then carry the experiment out, and finally EXPLAIN why it is so. But, our usual methodological tail wags the understanding dog! And, it leads us to assume that the “extracted law” is actually present all over Reality, totally unchanged from the state in which we extracted it. Everywhere in unfettered Reality this law is considered to be the same: it is an eternal Natural Law. No, it isn’t!

And, as we extract further such “laws” (each with its own appropriately farmed Domain), we insist that it is precisely these “laws” that, together in Reality-as-is, make all the phenomena that you see happen.

And, there is more: to establish this assumption, there is a Principle assumed to be universally true – that of Plurality – which sees such “laws” as the real motive forces, and holds that they, each and every one, are intrinsically separate from one another. These “laws” don’t change, and any situation in Reality is then a mere SUM of such “laws” in various proportions: they remain exactly as we unearthed them in our experiments.

This is so fundamental that the whole discipline has to be necessarily re-labelled as Pluralist Science. And, that isn’t Science!

Now, that would be bad enough, but it isn’t all that is wrong.

For, the whole set of methods employed in Pluralist Science, have as their main objective, the extracting of these relations, via quantitative measurements, and, with these in their hands, they then arrive at an important process – they both simplify and idealise the data set, by fitting it to a pure, formal, mathematical relation. All deviations from this processed and tailored version are dismissed as irrelevant noise, due to non-essential factors.

So, to the error of Plurality, we also have to add that of Transforming Abstraction, which makes the data set into something else.

And, indeed, the formalised “law” actually becomes the “cause”(?)

Now this is clearly fundamental!

Though scientists historically always felt obligated to accompany their pluralist quantitative forms with Physical Explanations, these became more difficult to construct out of the increasingly opaque equations that were dredged from Mathematics to match JUST the extracted data sets, so the requirement for explanation was increasingly dropped.

The previously dualist philosophical stance – idealist equations tempered by materialist explanations – thus slipped into a wholly idealist one, where it was the Equations that were the real determinators.

Now, it is this wholesale retreat that underpins the post-Copenhagen offerings, and without the necessary explanations, these were more and more evidently dead ends in the real understanding of Reality.

Indeed, even the new discoveries of these researchers, which contain real Objective Content, were emasculated by their subsequent theoretical treatments. They were on the threshold of realising the existence of holistic, mutually-affecting situations, which couldn’t be analysed in the old pluralist ways, but once more they resorted to their usual solution – to treat them statistically, without finding out exactly why they occurred. Indeed, Randomness has become the last refuge of Pluralist Science.


Postscript:

Now, clearly, this criticism needs further definition and amplification, and that has been undertaken in a whole series of papers and even whole Special Issues of the SHAPE Journal, which are already published on the Internet, or are already written and scheduled to be added to these in the near future. But here, we can briefly preface that work by answering the question:-

Where is the Door to Reality?

So having debunked the consensus in present day Sub Atomic Physics, as it has developed since the adoption of The Copenhagen Interpretation of Quantum Theory at Solvay in 1927, it is clearly necessary to indicate a superior alternative.

And, that is already underway and is beginning to deliver profound results.

For, in addition to Plurality and Formalism, the two evident impositions foisted upon Reality-as-is, there is also another purely physical limitation that prohibits an alternative being applied.

It is the assumption of Space being considered to be essentially Totally Empty – and this is not only out there in the Cosmos, but also everywhere else - down to the micro level, even existing between the supposed bottommost particles of Matter. But, that isn’t true either!

Indeed, this theorist (Jim Schofield) decided to tackle the famed Double Slit Experiments that are widely regarded as the most important proofs of the legitimacy of the Copenhagen stance. And, his first assumption was that this “supposed Empty Space” is actually filled with a Universal Substrate.

Of course, it had to be undetectable by the usual means, so he also defined a substrate of particles, which were composed of already known and stable sub particles that could deliver this substrate, as well as things like the propagation of Electromagnetic radiation. With his defined substrate, he was able to explain ALL the anomalies of the Double Slit Experiments, without any recourse to Copenhagen at all!

The next major error, once the Ether could not be detected, had been to abandon the concept of a substrate altogether. Now, if this achievement with the Double Slit was all there was, there would be an argument to dismiss it, as it just doesn’t explain everything. But, an Indian scientist, Mohan Tambe, has put forward a different particle to make up a substrate, which seemingly tackles Electrical and Magnetic Fields effectively, and the American scientist, Glenn Borchardt, has described yet another, which explains Gravity. The project of conceiving of so-called Empty Space as actually containing a variety of undetectable particles, which between them (or even in combinations) can deliver what Copenhagen certainly cannot, is therefore already underway!

Physics and Philosophy


The closing of Middlesex University Philosophy Department


Are both disciplines really about the Nature of Reality?

What do Philosophy Departments in our Universities do?

I am a physicist, and in my experience, Physics Departments (worth their salt, anyway) do Physics! They do experiments.

So, what happens in Philosophy? Is it like the Art Department in my own (undergraduate) University, which didn’t actually do any Art, but just talked about it? Indeed, I, as a physicist, ended up as the secretary of the Art Society in the University for two years.

Do the members of staff in Philosophy actually practice Philosophy? I ask this question because as a physicist today I simply cannot avoid Philosophy – and, I don’t mean the Philosophy of Science, for such restrictions merely end up being about the History of Mankind in Science, and not about the Understanding of Reality. Surely, Science and Philosophy should be very close bedfellows and work together on this same crucial task?

But, I have also observed that very few of the “doing” physicists have any real idea of their own philosophic stance and underlying assumptions. Yet, after generations of scientists being confirmed materialists (as a matter of course), they, as a body, in the latter part of the 19th century, found that the ground beneath their feet was beginning to shake, and deliver ever more damaging tremors, which threatened the very fabric of their once rock-steady stance. The discovery of the Quantum opened up the biggest ever can of worms. Indeed, great fissures opened up, and something had to be done!

The result, finally consummated in 1927 at the Solvay Conference, was a total abandonment of their prior Mechanical Materialism, but what replaced it was even worse.

In Sub Atomic Physics a totally idealist standpoint, with “materialist” experiments, was adopted. Instead of the search for physical causes, the whole approach was re-orientated to merely seek Form in experimental results, as sufficient in itself. Causes were either unknown or unknowable, but here in our hands were the useable results. We would not only abandon the next question, namely “Why?”, but we would treat our found patterns as themselves being the causes of phenomena. The equations produced were conceived of as the Natural Laws of Reality.

The inference was clear: the found equations were the driving imperatives of Nature – they actually caused observed phenomena.

Let us reiterate that stance!

The purely formal relations (abstracted from Reality most certainly, but thereafter simplified and idealised from purposely farmed experimental situations) were turned into the sole drivers of Reality.

Science had become a branch of Mathematics (and could be researched mainly on a blackboard with chalk). It had been changed into an idealist discipline.


Of course, a fig leaf of “explanation” was vigorously defended, but it certainly was not an attempt at real explanation in terms of physical substances and their properties. It had been removed into the Parallel World of Pure Form alone, which I call Ideality, and as such was doomed.

Of course, there had been a great deal wrong with the prior scientific standpoint. It had for centuries been a dualist compromise between an idealist belief in form and a materialist search for causal explanations, which somehow held together as workable. The equations were so useful, you see: they had to be an objective in any research!

And, along with this illegitimate compromise, the materialist stance was indeed entirely mechanist: it did not address reality's development at all.

The odd genius, such as Darwin, though transcending that stance, did not, and at that stage could not, change the basic standpoint one iota. The basic sciences were about eternal Natural Laws, as encapsulated in formal equations.

You could not THEN challenge that belief!

The vast majority of physicists really did believe that everything in the Universe could be directly explained in terms of a straight-through series of causes, right back to fundamental particles, and also that these were accurately described by the formal equations – the Natural Eternal Laws. You can see the amalgam. It wasn’t easy to demolish by individual gains in isolated areas. It was the generally agreed ground!

Thus, though many linking gaps were evident, they would ultimately be closed, resulting in everything being derived ultimately from the Primary Science that was Physics. And, increasingly within that subject the nitty gritty would then have to be the Study of Fundamental Particles.

Of course, such a stance did not represent the real situation. Indeed, it was miles away from a comprehensive position, but it had been productive at all levels for centuries, and would not be renounced with the occurrence of as yet unexplained anomalies.

Until undeniably demolished, most scientists would stick to Reductionism – the concept that eternal Natural Laws at the bottommost layer of Reality generated absolutely everything else above it – all the way to Life, Consciousness and Society. And in such a context, the various different subjects – be they Chemistry, Biology or even Psychology – were then only forced subdivisions of a single structure, as would be proved when the missing links were found one by one.

But, of course, that assumption is incorrect!

The divisions into different sciences are not down to mere human categorisation. They are, in fact, reflective of wholly different Levels of Reality, which, when they first emerged, resulted in new worlds with their own relationships, that did not exist prior to that Event.

NOTE: The inability of scientists to discover the Origin of Life proves this conclusively: it isn’t a mere complication of existing laws from a lower Level, but the product of a seismic Revolution.

And, the reason that one can be so adamant about this is that the prevailing stance ignores Development almost entirely, and simply believes it is merely complication.

Indeed, this criticism is proved by the predominance of the Second Law of Thermodynamics in the views of present day Physicists. For such a Meta-law is ONLY about dissolution, and says absolutely nothing about what created the evident structures in an indisputable prior Development. Puny efforts have been made to ally the Second Law with Randomness and Chance, in which that crucial progress in the history of Reality is put down to the roles of these random contributions, but the evident ineffectiveness of such efforts proves that they are mere constructs, and reflect nothing of the real Development that has actually occurred!

The problem is, and always was, the shortness of the life spans of individual human beings. No individual could actually observe big developments in reality, and indeed, for millennia, Humanity has considered reality as a fixed thing – an achievement and then a maintenance of a natural stability in all things.

Now, stability, as commonly observed, appeared to be the normal state of Reality, and all change, if there was any at all, was tiny and incremental – occasionally passing some formal quantitative threshold, and thereafter slipping things over into a more conducive and maintainable alternative stability. It was an understandable mistake, but incorrect due to lack of evidence.

The first cracks appeared, centuries ago, in the studies of geologists, who revealed a changing world clearly recorded in the rocks beneath our feet.



But, this development was incredibly long-winded – even thousands of years were insufficient to reveal changes, and significant transformative changes were much rarer – usually only apparent over millions of years. And, as the science developed, what it revealed not only included vast changes in the material forms of continents and even the oceans, but also an indisputable evolution of Life itself, AND even the actual time of Life’s Origin on Earth.

Yet this subject, Geology, was regarded as merely a “secondary discipline”: its conclusions were majorly discounted by the fact that Geology was a limited discipline, where investigative experiments were impossible. So, clearly, the testable and superior discipline of Physics, which could explain things from bottom to top, was still unchallenged as the source of everything there is.

And sadly, even Geology was mute when it came to the clearly evident step-changes in the record of the rocks.

For such a record, very slowly built up over tens and even hundreds of millions of years, could only clearly deliver established stable levels gradually laid down over colossal periods of time, and the crucial dramatic changes would be located in time, but totally absent as an investigatable record. The essential interludes of major and often qualitative changes were simply unrecorded: all that could be clearly seen would be an impenetrable step-change.

These crucial interludes were invariably of relatively short duration. Indeed, the first indications that such transforming interludes actually occurred were found in the recorded histories and the archaeological revelations of earlier Human Societies. For there, without any doubt, such transforming interludes definitely occurred.

Even in the found remains of earlier human beings, there was indisputable evidence of what came to be described as the Neolithic Revolution. And, this occurred at a time when the only tools available were knapped fragments of flint, plus a few shaped softer materials such as wood and horn.

Yet in this remarkable interlude, Mankind invented animal husbandry, farming, pottery, weaving and many other new techniques. It was no slow, incremental set of changes, but just such a relatively short Revolution.

Also, from the opposite end of Mankind’s studies, in Social History, came the concepts of Stability and Revolution – an oscillation between long periods of slow, incremental quantitative changes, and short interludes of major, qualitatively transforming changes.

Could this be the natural pattern of all developments at all levels?

Slowly, evidence began to accumulate that this was indeed the case... But, did it change the stance of our physicists? No! They admitted that switches occurred in the development of Reality, but insisted that all of these would ultimately be totally explicable in the usual mechanical materialist ways. No change was necessary in their philosophic stance.

Clearly, following this blinkering, a crisis was unavoidably looming, and it would be a biggy! Indeed, there was a Revolution in the offing. Then finally, that time arrived – the avalanche of contradictions multiplied all the time, and a transcending of their causes (long overdue) became imperative. But these head-down scientists were extremely unlikely to allow any generalist, head-up conceptions (by uninformed dreamers) to deflect them from their “holy” path.

The intervention just had to come from without!

It had happened in 19th century politics with the intervention of Karl Marx (a philosopher) into areas such as Social History and Economics, but the necessary intervention in Science was never carried through. Marx was initially only an academic philosopher – a follower of the idealist giant Hegel – yet his intervention transformed politics.


The question therefore posed at the outset of this paper can now be restated:- “Do academic philosophers DO philosophy?”

What is their position on the Copenhagen Interpretation of Quantum Theory, and the consequent switch among scientists to Reality itself, being determined by Formal Relations, as embodied in Equations?

Something MUST be going on, which simply must be in the remit of practising philosophers. For in my developmental studies, contributions by the philosopher Salmon enabled me to see the role of the Principle of Plurality in science. Yet no moves are being made by today’s philosophers to address this flawed principle, as Hegel would most certainly have done 200 years ago!

The question has to be, “Why?”

Now, if, as seems to be the case, what is studied (and explained?) is merely the History of Philosophy, cannot the trajectory towards Truth be discerned in that, as it was in History by the philosopher Marx?