17 December, 2016

Christmas Bumper Edition

Merry Christmas readers!

Here are TWO new issues of the journal for December.

This issue is a discussion of the philosophical principles behind a new kind of 'aether theory' - one that is staunchly materialist, but that transcends the pitfalls of classical physics.

Here is a new collection of papers examining some fundamental problems with the Big Bang theory, and the flawed philosophy of physicists that led to it.

15 December, 2016

Trump's real plans

Richard Wolff's analysis of Trump's decisions as president-elect

amongst other things!

11 December, 2016

Thoughts on Substrate Magnetons & The Necessary Revolution in Physics III

Magnetism art by Ling Meng

To really address the interaction of phenomena of any kind, assuming a Universal Substrate of the kind suggested by this theorist, we will first have to grasp the internal relationships between the different particles of that Substrate, as they have so far been devised.

For, though they are all dual, mutually-orbiting particles with only Lepton components, they come in two very different sizes.

The 'Neutritron', as we call it, has been found in accelerators and named Positronium.

The Neutritrons

The initially devised Neutritron is considered to be composed of a mutually-orbiting pair - consisting of one electron and one positron. So, this is a very tiny particle, even as a mutually-orbiting pair. For, compared even with the smallest atom - Hydrogen, it is extremely tiny - each of its components being only one two-thousandth the size of the proton nucleus of that atom, and a very much smaller proportion of the size of the Hydrogen atom as a whole.
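Taking "size" here as rest mass, the comparison above can be checked against standard particle-data values. A minimal sketch (masses in MeV/c², taken from the familiar PDG figures) shows that the "one two-thousandth" quoted corresponds to the well-known electron-to-proton mass ratio of roughly 1/1836:

```python
# Rest masses in MeV/c^2 (standard PDG values)
M_ELECTRON = 0.511
M_PROTON = 938.272

# The electron (and positron) is roughly 1/1836 of the proton's mass -
# of the order of the "one two-thousandth" quoted in the text.
ratio = M_PROTON / M_ELECTRON
print(f"proton/electron mass ratio ~ {ratio:.0f}")
```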

It is so small that it will, without any difficulty, completely-occupy the "insides" of both that and indeed all other atoms - there will be piles of room. So, there will be a set-of-different-relations between the components of the Atom and these included Neutritron units of the Substrate.

The MOST Substrate-affecting part of such an atom will undoubtedly be the orbiting electron. Indeed, any established Paving of the neutritrons will certainly be, at least temporarily, dissociated by the oft-repeated passage of this particle around its orbit.

And, once so dissociated, collections of these now freely-moving substrate-units could also be turned into associated vortices. And these would be of a special type, as the repeated orbiting would allow energy transfers BOTH WAYS - that is TO the vortices from the orbiting electron, and also FROM the vortices back to the orbiting electron.

Indeed, elsewhere, this has suggested an alternative Theory of Quantized Orbits (within the Atom).

The Magnetons

Now the two magneton units of the Substrate, though also mutually-orbiting, Majorana-type pairs of Leptons, are both different in size and in properties.

The larger sub-unit in each of the two magnetons involved - either the Tau or the anti Tau - is TWICE as big as the Proton, while the smaller units - the Muon and anti Muon - though smaller than the Taus, are still much bigger than the electron and the positron.
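Again taking "size" as rest mass, these claims can be checked against standard particle-data values. A minimal sketch (MeV/c² figures assumed from PDG tables): the Tau is in fact about 1.9 times the proton's mass, close to the "TWICE" stated above, while the Muon is roughly 200 times the electron's mass:

```python
# Rest masses in MeV/c^2 (standard PDG values)
M_ELECTRON = 0.511
M_MUON = 105.658
M_TAU = 1776.86
M_PROTON = 938.272

# The Tau is close to twice the proton's mass (~1.9x), and the Muon,
# while far lighter than the Tau, is still ~200x the electron's mass.
print(f"tau/proton    ~ {M_TAU / M_PROTON:.2f}")
print(f"muon/electron ~ {M_MUON / M_ELECTRON:.0f}")
```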

These much-larger joint-particles, because of their differently-sized components, also have magnetic moments. So, both these features limit the presence of these substrate particles within Atoms. They are simply too big, and too individually-active (both in their movements and their magnetic effects) to exist, in a stable manner, within atoms. But, as described in their definitions, they can cease their normally-mobile existence, if attracted by Charged Particles, or by magnets, when they form either static Fields or linked "Lines-of-Force".

Now clearly, whatever the undisturbed nature of all these Substrate Particles, they will be significantly affected by, say, the energetic passage through them of a Charged Particle. [Indeed, the above reference to the effect on neutritrons of the orbiting electron within an atom already gives an idea of how they will be affected in a quiescent part of the Substrate.]

Certainly, the always-weak interconnections of neutritrons in a Paving will be dissociated, and the forming into vortices will then be possible. Yet once the disturbance has passed, there will be a slow reformation back into their prior stable Paving-arrangement.

Indeed, in spite of these local-and-temporary effects, the Paving-pattern will still dominate across vast areas of what used to be called Empty Space, while, elsewhere, a patchwork of paving-areas with temporarily dissociated areas, involving both Streams and Vortices, will also occur.

NOTE: There will always be relatively straight propagations via these neutritron units, as continuously-connected regions of Paving, which will always be available too.

The magnetons, with their normally random movements, and no available stable structure, will merely be given extra Kinetic Energy by the disturbing passage of a traversing particle, but they will be significantly-structured by gathering around a charged particle - to deliver "its Field" - or reformed into linked "Lines-of-Force" strings, by the presence of a strong magnetic effect.

Ling Meng

Let us briefly recapitulate the performance of this complex Substrate. When undisturbed it is undetectable!

All its units, either singly, in the case of neutritrons, or over whole populations, in the case of randomly-moving magnetons, are neutral in every way, and hence are undetectable by the usual means.

But, they can both affect things, and be affected-by things, that can interact with their various features.

For example, the Propagation of Electromagnetic Energy in Quanta is possible by the bucket-brigade, temporary promotion of the internal orbits, in sequence, across adjacent units. While, the presence of a charged particle will allow magnetons to gather around them, in concentric, aligned shells, to produce a field, and a magnetic presence will line up magnetons into Lines-of-Force continuous with the physical cause of the magnetic effect.

In special circumstances, such as within atoms, neutritron Paving can be dissociated, allowing the units to move freely, and to be turned into separate vortices by orbiting electrons, and the repeated interactions thereafter will allow only quantized orbits to be possible.

05 December, 2016

Thoughts on Substrate Magnetons & The Necessary Revolution in Physics II

Magnetic Fields by Berenice Abbott

Now, perhaps, a start can be made in explaining the experiment achieved at Princeton - supposedly involving the emergence of Majorana Particles, via this theorist's (Jim Schofield) definition of a Universal Substrate, and the role of some of its Units in delivering both Electrical fields around charged particles, and Magnetic fields, with "Lines-of-Force" - associated with the presence of permanent magnets.

As it happened, the Neutritron - the first-devised Universal Substrate particle, was originally conceived-of to explain the anomalies of the Double Slit Experiments, as well as many other phenomena - such as Pair Productions and Pair Annihilations, and, of course, Electromagnetic Energy Propagation via quanta: but they, I'm afraid, could not explain Fields at all.

Each Neutritron consists of a mutually-orbiting pair of one ordinary-matter electron, and one antimatter positron - in other words a kind of Majorana particle.

So, the extra particles that needed to be added to the Universal Substrate, to possibly deliver Fields, were also devised as Majorana particles, but this time of differently sized components to endow them with magnetic moments.

Once again, the components used were Leptons: in this form, either a mutually-orbiting pair of one Tau and one anti Muon, or, alternatively, of an anti Tau with a Muon.

These two joint particles were effectively mirror images of one another, and their magnetic moments cancelled each other out via their existence as a randomly-moving population of equal numbers of each kind.

So, these two were normally also undetectable, but only as long as they maintained their random, and hence cancelling, movements.

But, in addition to this commonest case, there were other situations, in which that existence would be radically transformed.

The presence of an alien Charged Particle, within such a Substrate, caused these magnetons (as they came to be called) to gather around the charged particle in magnetic-moment-aligned, concentric shells - thus producing an associated Field.

While, when something like a permanent magnet was present, the magnetons formed aligned chains - continuing the pole given by aligned atoms within the magnet, as lines-of-force outside of it, and carrying on around to return to the magnet's opposite pole.

What had been eminently neutral, overall, as a population of randomly-moving-entities, became Fields when statically-aligned via their magnetic moments to other affecting entities.

Now, what this theorist intends to attempt is to address all these new researches in an "Explanatory Way" - that is, using the properties of what is involved to give causal reasons for why things happen in the way that they do.

Such an approach is very different from the usually-applied approach, for three different reasons.

First, it will not assume the Principle of Plurality, which, though it admits of a multiplicity of affecting factors, in any natural investigated situation, always only targets a single factor at a time, and achieves this by greatly tailoring the investigated context, to make the targeted factor dominant, and then assuming that what is then extractable by this method is exactly the same as that factor when occurring naturally, without any of that revealing tailoring. That is, most certainly, not true: for the extracted factor is always definitely changed by its context. It will be different in different contexts.

Second, it will not be assumed that the natural context can be recovered by merely adding together all the separately-extracted factors, each obtained by applying a different farming for each and every one.

Third, the purely quantitative relations achieved by the above means will NOT be considered as Causal Laws, but only as mere descriptions of what occurs in the farmed, arranged-for context. Causes are qualitative, and require the revelation of all the mutually-affecting properties involved.

Solvay Conference 1927

Now, up until the victory of the Copenhagen Interpretation of Quantum Theory in 1927, physicists always followed the quantitative formulation of relations between the variables involved, with an attempt to explain WHY things behaved as they did, and did so in terms of the various substances involved and their properties.

The initially acquired formal description was always completed by the explanation of the causes involved. Indeed, this final stage was regarded as the most important one, for it alone could be extrapolated as a contributed cause from there to new areas involving common substances and conditions.

Mere quantitative forms are indeed Universal - cropping up all over the place - but they DO NOT imply identical causes!

"Scientific Understanding" resides only in this last and crucial stage in investigating Reality.

It is there, and there alone, that the Objective Content - that revealable part or aspect of the Truth actually resides.

The Majorana fermion at end of superconducting wire

An Explanation?

Now, perhaps we have enough to attempt to explain the discoveries at Princeton - concerning the emergent Majorana particles.

Clearly, with the presence of a usually undetectable, yet ever-present Universal Substrate - composed entirely of different Majorana particles, which can behave very differently depending upon circumstances - the odd arrangements set up for the Princeton investigation must be guaranteed to deliver remarkable outcomes.

For, what are regularly proving to be "outside-the-box" situations abound here, namely:-

1. Extremely Low Temperature Physics
2. Superconductivity
3. Extremely meagre traces of substances

And all these occurring in an ever-present, multi-particle and multi-mode Substrate, that is totally undetectable by the usual means. Clearly, in such circumstances, unusual situations will not only arise, but produce individual and detectable phenomena, normally swamped (and hence missed) in more abundant conditions!

The problem will boil down to just how the Substrate Units will interact with these unusual conditions, AND affect those conditions.

After all, this concerns a single strand of Iron atoms (with its magnetic potentials), closely involved both with the complex Substrate and with the unusual superconductivity of the Lead.

It seems to me that the two kinds of Magneton particles (both of which are Majoranas), are certain to react to the Iron atom strand, by aligning their magnetic moments with those of the Iron atoms - and particularly at the ends of the strand!

For that is what is assumed to happen with Lines-of-Force around permanent magnets.

Clearly, in this set-up, the presence of the superconducting Lead upon which this occurs, must play a role.

28 November, 2016

Thoughts on Substrate Magnetons & The Necessary Revolution in Physics I

Ferrofluid on Magnet

The original particles devised (by this theorist) to "effectively begin" to deliver a totally undetectable Universal Substrate - the so-called neutritrons - though successful in several very important ways, did not deliver either Electrical or Magnetic Fields in response to the presence there of charged particles or permanent magnets.

The nature of the neutritrons - as mutually-orbiting-pairs, each one consisting of both a negatively-charged electron, and a positively-charged positron, seemed to be sound, for they would, indeed, be undetectable, while delivering several necessary properties required by that Substrate.

See The Atom and the Substrate for more on this 'aether' theory

So, three things were clear:

1: New additional particles would be needed, with a similar mutually-orbiting form - one that would enable a similar cancelling of opposing properties, and

2: that these would have to consist of two mirror-image pairs (in equal numbers) and moving with a constant "random motion", so that, overall, they would appear undetectable too, and

3: the two new joint-particles would, separately, also have to be capable of delivering electromagnetic effects if aligned around either charged particles, or involved in delivering magnetic "lines-of-force".

Now, these are, clearly, contradictory features, and initially seemed impossible to deliver simultaneously. Until, that is, the mutually-orbiting pairs were considered to be of differently-sized sub-particles, with exactly opposite properties across the two proposed new Substrate units.

The neutritron had been devised via two diametrically-opposite sub-particles of the same size: so the task seemed to be to find other Leptons that would fit the bill. Other, already-known joint-entities, such as a "mock atom" consisting of a Proton and a Muon, indicated what these necessary extra particles might well be.

By using oppositely-charged pairs consisting of Muons, Taus and anti-Muons and anti-Taus, the required joint-particles seemed possible.

And, whilever they were in constant random movement, involving equal numbers of each kind, they would be undetectable - overall!

But, their unavoidable magnetic moments (due to the differing sizes of the sub particles involved) would create the necessary electromagnetic effects if they were both immobilised-and-aligned around charged particles, or linked in static aligned chains, by their magnetic moments, in magnetic situations.

Now, all this has been published elsewhere, but such a Universal Substrate - available absolutely everywhere, presents all kinds of possibilities in special situations.

The so-far-assumed situations do seem to cover crucial phenomena in the most commonly occurring situations. But anomalies have been mounting in unusual areas of study - such as very low temperature environments and thin-film situations.

Indeed, the 2016 Nobel Prize in Physics was awarded for work successfully carried out in precisely these areas, by a trio of British-born scientists (commencing 40 years ago). Also, a group at Princeton in the USA has produced "examples" of the fabled Majorana particles, where a single "string" of iron atoms is closely associated with a lead substrate in a superconducting state.

Such evidence cries out for an explanation involving an undetectable Universal Substrate - composed, as this theorist's version does, of "Majorana-type" joint-particles, involving both matter and antimatter sub-units.

Physicists at Princeton

But, attempts to address these objectives have been hopelessly stymied by the "formulae-first" approach being employed, along with the Copenhagen standpoint. Crucial explanatory aspects are simply never dealt with.

Frankly - "the tail always wags the dog", so attempts to pull together some sort of explanation are made impossible, for what is needed to attempt to do that, is simply never addressed!

Now, this theorist was, in the past, confronted by the very same problems in addressing the anomalies in the famed Double Slit Experiments, until, that is, he included a Universal undetectable Substrate, that played an enabling role. And, when all the posed-questions of such a Substrate were fully addressed, all of those anomalies were solved, without any recourse whatsoever to the premises of the Copenhagen approach.

A great deal of experimental work was available for the many different versions of the Double Slit phenomena; but here, in these investigations, that isn't the case.

The language used in all the accounts I have read is ambiguous - the Princeton example is characteristic. They seem to be seeking Majorana particles at all costs (and for purely pragmatic reasons - associated with Quantum Computers). Accounts switch from talking about electrons of the iron atoms - to locating them in the underlying superconductor.

You can see the problem!

If, as I assume, a Universal Substrate (composed of Majorana particles) is literally everywhere, yet undetectable:-

1: In the surrounding Space

2: In the Superconductor

3: Inside the Iron atoms

Then, as with the changing conditions that have become apparent, in Fields and elsewhere, in such a Substrate, it seems evident that in the very special conditions of the Princeton set-up, all sorts of unique circumstances will be inevitable.

And, this isn't a small matter.

The Crisis in Physics, which precipitated the Formalist Retreat that culminated in the Copenhagen Stance, had been building up for centuries, and meant that physicists were forced to switch their stance, constantly, to enable the addressing of different objectives.

The one finally assumed to be over-riding was purely formal representation, along with a long-established, deep-seated and effective Pragmatism - concerning both Prediction and Production as the primary purposes in studying Reality, with its Explanation as, at first, secondary, and, finally, an inconvenient luxury that could be totally dispensed with!

The inevitable anomalies and impasses that naturally arose from the involved contradictory premises were avoided by dealing only with observed patterns, and by depending upon the "assumed Consistency" of Mathematics, in dealing with such things, as the real underlying determinants of all phenomena.

It was, of course, not only mistaken, but also an entirely idealist stance.

Now, this whole philosophical problem has been addressed elsewhere, and is available for those who wish a fuller investigation, but clearly it cannot be fully re-addressed here. Suffice it to say that just such a comprehensive investigation has been carried out over many years, and has also recently focussed its attention upon a revolutionary, new approach in the Sciences, and particularly in Physics, with a strictly Materialist - Holist stance, which can finally address the idealist-pragmatist cul de sac that is Copenhagen, and allow a major advance in Explanations of Reality. But, when the attention of literally all physicists is solely focussed upon Forms and Formulae, and their pragmatic use, the important questions are never addressed.

Any break-through will have to be of a similar impact to Darwin's Origin of Species in Biology, but, this time, necessarily involving a major philosophical revolution, and will not be given the credence it requires until it totally buries Copenhagen, once and for all.

22 November, 2016

The Head-Up, Non-Specialist, Theoretical approach?

The Necessary Role of Philosophy in Science

On reading a collection of short articles under the general title of The Unknown Genome in New Scientist (2765), I realised that I had perhaps hit pay-dirt with regard to my own ideas on the possible policing of genetic materials within all organisms.

I found, in this series, substantial evidential support for some of the hypotheses I have been formulating about this area, particularly in the maintenance and policing of the genetic material (though, of course, my considerations were, perhaps surprisingly, almost wholly philosophical).

NOTE: A fuller discussion of the content of the above mentioned series in New Scientist will be addressed in a separate paper on completion of this one.

I am no professional biologist, and I must depend wholly on those who are, for the content that I must attempt to make sense of. So, I hadn’t arrived at my suggestions via personally-newly discovered concrete evidence, but, on the contrary, solely in response to my dissatisfaction with the usual consensus explanations of Mutation and Species Change, and involving possible alternative accounts of my own.

The usual explanations were much too hit and miss, and as is usually the case, often latched onto the ubiquitous use of Randomness to “explain” everything!

No, I was convinced that Life would immediately and vigorously react to mutation damage to its absolutely vital genetic materials, and hence, apart from Natural Selection of the adult phenotype, other processes would occur within the genotype to remove, alleviate, or “wrap-up” and store any genetic damage, which, by some set of criteria was labelled as wholly deleterious.

I must admit that while I am not a biologist, I have always been very interested in all aspects of the subject, and have followed developments closely. I have never been taught the subject at any level, and my experimental experience was, and still is, precisely NIL.

But I am by no means un-informed. I have been reading extensively on this subject for over 40 years, not to mention on many other very wide-ranging areas, and was indeed extensively educated as a scientist and mathematician, prior to changing my specialisms several times, and even achieving a professorial appointment in one of these latter areas.

So, what is it that I must have been doing, to now attempt to integrate such state-of-the-art discoveries into my own propositions?

The resonance between what these real biologists were finding and my suggestions as to what I considered were the necessary processes has been surprisingly close. Yet the usual assumption is that such a person as myself can make no worthwhile contribution at all on such a specialist subject.

But such a reaction is not always justified.

And this same situation has occurred several times for me, in widely different disciplines. It has even occurred within my legitimate disciplines of Physics and Mathematics, because I was “making judgements well outside my specialist areas”. More expected, similar responses have been coming my way in many other areas, from Painting, Sculpture and even Dance at one extreme, to Geology, Politics and Pedagogy at another. Yet, though such condemnations would usually be correct, they will not always be so. It will depend on how such a wide range of subjects is considered by the outside interloper.

It will most certainly depend on his ground! In other words it will be basically determined by how that person deals with knowledge and understanding from disparate areas: it will depend on his worldview.

Specialisation does indeed allow a remarkable focus to be achieved and discoveries to be made. It is, of course, essential for each and every serious area of study. But it is also invariably what I term a Head-Down approach. It limits the considerations of the expert to his/her own narrow area. And it must be contrasted with a Head-Up approach, which builds its worldview out of the widest possible Knowledge and Understanding.

It should really be the approach of the philosopher, but even there it is rarely the normal mode.

All problems, no matter what the specialism, will not be solved by concentrating only within that specialism. Indeed, along with the accumulated wisdom of that specialism, such a limitation will also justify and firmly embed in addition its current assumptions and errors. Many practitioners will never see the wood for the trees. And a generalist approach can, and sometimes does, reveal things invisible to the Head-Down expert.

A real philosopher MUST be multi-discipline, if he/she is to benefit from human gains across the board in understanding the World. All understanding is, of course, social, but it is also multi-discipline. Even the greatest specialist experts show almost unbelievable errors in their generalist thinking.

It is almost universally true that all specialists make rubbish philosophers. And they also cannot switch disciplines and produce as good work thereafter as they did in their own prior area.

To give an exemplar of this which may establish my own approach, I will relate the experience of my major diversion into Dance!

I have become the leading author of Multimedia Resources for the Teaching of Dance (along with an excellent Dance specialist colleague). And this situation was established some 21 years ago, and has remained the case ever since. In addition I also designed a teaching aid for Dance Teachers employing Rudolf Laban’s ideas in their area, and related to his famous Labanotation – the world-wide employed method for recording Dance. 

In that very different world, I became an expert in Computer Systems and Programming, not only producing that high point of systems design - a machine-independent compiler - but finally achieving a post as Director of Information Technology in one of the colleges of the University of London. I had also received zero instruction in computing.

But, I was always a Head-Up philosopher, and every discipline was relevant to that! Recently I have been making significant contributions in the Theory of Emergences, as applied to the Origin of Life on Earth and to its subsequent Evolution.

Have I any right to tackle such problems? Many would tender an emphatic, “No!”, but they would also be mistaken.

In the last five years I have again changed course and spent all my time writing about Philosophy and now run an online Journal (SHAPE) concerned with Philosophy and it is full of new and legitimate ideas.

Now, at this point, the reader may well be yawning at “my efforts to show how clever I am”, but they would be mistaken if they are. I am, and purport to be, no genius.

I got a lower second in Physics from Leeds University, and was throughout my education damned with the faint praise of “promising”. No, my descriptions of what I do are not to establish any sort of superiority, but, on the contrary, to reveal an approach that enables me to address such a very wide area of disciplines and to do something worthwhile in every one. It is because I am, and always have been, multi-discipline in my interests. NO! “Interests” is much too weak a word. I should have said “concerns”.

And though much of conventional education is to tell us HOW things happen, I always wanted to know WHY they happened the way that they did. 

AND I demanded (of myself) a philosophy that could face all ways, and cope with all expectations. After all, what is the use of a philosophy that is strictly limited to a specific discipline – as, for example, the current consensus in Sub-Atomic Physics – the Copenhagen Interpretation of Quantum Theory and its various developments.

Not only should my philosophy be entirely general, but it should never be only an academic subject. 

As Robert Pirsig tried to insist in Zen and the Art of Motorcycle Maintenance, Philosophy is for every day! It should illuminate your life and purposes, and it should never be second hand!

Though you may, and indeed frequently have to, take on what you learn from others, you must never be converted “hook, line and sinker”, for, if you are, you will not fully understand most of what you “take on” as your position.

If, on the other hand, what you learn from others is integrate-able with what you already understand, it will be very different. It may be difficult to achieve such integration, but without it, you really have nothing new to take on.

What you have learnt cannot actually mean anything!

Of course, if your “core” is constant and immutable, you will also be in deep trouble! Integration is a two-way process, and its successful achievement changes the receiving “core”, and frequently the new material too. Indeed, the greatest understanding comes only from transcending what appears to be unbridgeable contradictions between what you already understand and what you are trying to integrate.

The reason for such an impasse is always that your usually-inviolate assumptions are incorrect, and the only way to traverse the seeming full-stop is by a radical change in those assumptions.

I have recently been fascinated by Evolution and have even written a paper entitled Truly Natural Selection, which generalised Darwin and Wallace’s Natural Selection to apply also to non-living, purely-chemical processes by means of a very different selection process.

My Theory of Emergences tackles the trajectory of Emergence Events such as the Origin of Life on Earth, NOT particularly, of course, but generally as a revolutionary interlude in the on-going Evolution of Reality. It concerns itself with the form or shape of the episode – its turnovers and consequent phases until there is some sort of resolution in the establishment of a wholly New Level.

What has therefore emerged is extremely surprising!

The assumption that minor constructive processes prior to such an Event actually precipitate a whole new Level determined by the direction of those prior processes is shown to be totally INCORRECT! The nature of the new Level does NOT emanate from changes in the prior Level at all. The only thing that within-Level processes can produce is a catastrophic collapse of the prior system, and that is very different indeed from the usual assumption. 

Indeed, it is ONLY the Second Law of Thermodynamics type sub-processes that bring about the demise of every Stable Level.

The first phase in an Emergence is the opposite of anything emerging: it is composed of a catastrophic collapse of the prior stability, which then seems to be hurling headlong down to total chaos.

But it doesn’t!

The dismantling of the stability-ensuring processes within the previous Level, plus its history and still-remaining productive-process content, allows new things to occur which, within the stable Level, were prohibited; and a brief period of remarkably diverse and numerous processes leads, by a selective process, to the creation of entirely-new proto-systems. Nevertheless, these quickly generate their own Second Law curtailing, and this leads into a period of oscillations between creation and dissolution, which gradually ascends to a point where a completely new and self-maintaining Level is established and PERSISTS!

It is no empty myth, when legend talks of the Phoenix arising from the flames of destruction.

That is the ONLY way that the wholly NEW can ever emerge!

The real myth is that which asserts that innovation can be achieved by small quantitative and incremental steps - it cannot.

Now, interestingly, my work on Emergences naturally progressed to seeing what their role must be in the actual Evolution of Life, and many questions immediately arose about our universally agreed assumptions of how new species emerged, and also how matter ascended from inert particles to produce Life, Humankind and indeed Consciousness. The incrementalist myth would just NOT suffice, for such a remarkable trajectory, and all our basic assumptions had to be thoroughly investigated.

My work in this area (remember I am NOT a biologist) has recently been confirmed by a whole series of unconnected discoveries by real experimental research biologists in the various academic Journals and Magazines.

The point of this paper is NOT self-congratulation, but instead to try to reveal why an ordinary man from a poor working-class background (my grandmother could neither read nor write) could be in a position to make such significant contributions.

It HAS to be important, and though it may dismay the elitists and the privileged, it should encourage all who really want to understand the World, rather than merely join-the-club, accept the consensus, and live comfortably.

But, the barriers to doing it are indeed considerable, I must admit! Such researches are MORE than a full-time job, and you have to earn your living.

I chose, and luckily it was the correct choice, to be a teacher, and have taught at every level of Education from lower schools to Universities. But to get anywhere I had to move fairly frequently. I had eight posts culminating in my Directorship at Goldsmiths’ College, and always tackling new things. 

At Goldsmiths’ I devised and commissioned the first Campus-wide Fibre-Optic Network in any of the Colleges of London University, while in Glasgow I had to turn myself into a systems expert to set up an appropriate teaching-orientated computer network and system for an educational institution at the highest level, and also to become an expert in Computers-in-Control to help many researchers with their chosen questions.

The thing is to tackle what needs doing, and learn as you go. Nevertheless, you do not have to have a goal from the start: it expands with each new job and the challenges it delivers. I seemed to arrive at a professorial-level final post by a totally unplanned route (though I often spent very long periods in a given post, because the job demanded it).

But, what does happen is that as your achievements are your own, and never facilitated by contacts and influence, you gain in both reputation and confidence.

From an initially shy working class boy from West Gorton, Manchester, I am now a confident philosopher! How about YOU?

14 November, 2016

The Imperatives & Trajectory of Writing

A Muse by Scientist and Philosopher 

Jim Schofield

Having been a full-time writer, initially, of directed academic papers, and, thereafter, individual essays, for a developing period of almost nine years now, I am, in retrospect, interested in the unplanned trajectory that I have been directed upon by that experience.

Initially, my topics were extremely varied, not only coming from my later career in Further and Higher Education - as a Lecturer and Researcher in Computing, but also from many, much earlier phases, when I was involved in teaching Physics, Mathematics, Biology, Music and even Revolutionary Politics.

Though, I occasionally also wrote upon Art, it was as a practicing Sculptor that I put in the hours there, and though I did write extensively for a time, on Music, it was as a rather incompetent performer, and perhaps a somewhat better analyst, that I spent my time in that area.

Writing was an intellectual activity, and, initially, it was limited by my own inadequate knowledge at the time. Though I always found that I had something to say upon the latest News, and upon articles in Scientific Magazines, the results, initially, were invariably just critical one-offs.

But, being aware of those inadequacies, I resolved to attempt to overcome them, and read a great deal to that end. And, crucially, as a life-long teacher, what gains I made for myself, I wrote up as if I was teaching a class - so my style was never very literary, with none of the usual abundance of quotes, references and examples of relevant experience.

I saw writing as teaching, and delivered accordingly, as if I was there in front of a class (though I had to imagine for myself any puzzled expressions and probable consequent questions).

I invariably got an inordinate number of criticisms from "professionals", who felt it necessary to dismiss my "style", punctuation and incorrect language, as betraying clear unprofessional inadequacies. But, as a highly successful teacher for over 40 years, I felt that I knew better.

I was never attempting to earn a place in Academia, but merely to teach what I had learned.

Now, as a qualified computer expert, I had managed to land my perfect job - helping Higher Education researchers (across the whole range of disciplines), by both devising and delivering computer programs to help them with their work.

I wrote tailor-made and usually completely original software aids for research in disciplines as widely different as Engineering, Taxonomy, Control of complex testing and analysis machines, Nursing Care Plans, Mathematical Chaos, and I finally won a British Interactive Video Award, for The Dance Disc - a multimedia aid for the teaching of Dance Performance (all of these were, of course, only achieved along with top experts in the discipline-field involved).

The language I used, when talking with my co-workers, was exactly how I wrote, and, it always seemed to work very well.

What I am keen to communicate here, is how my writing changed over the years.

From the outset of the current, writing-only phase, I worked 7 days-a-week, 12 months-a-year for a minimum of 4 hours a day (and often a lot more), and quickly reached the level of output of a "Paper-a-Day". I began to fill 80-page display books with printed versions of my work, which rapidly grew to over 150 volumes, at which point, I switched to much more capacious A4-size polythene boxes.

A current estimate of my writing is around 6 million words, and after only a couple of years into this phase, I (with the help of my son, Michael Schofield) had set up three dedicated websites, where my work was published.

The most important site was SHAPE Journal, which, by October 2016, had published 91 Issues, each containing around 6-10 original papers. It didn't take long for us to, in addition, publish what we called Specials, which were originally conceived as SHAPE Issues dedicated to a single theme.

Now, from the very beginning of this endeavour, I had been losing my sight: so changes to my writing facilities were regularly necessary.

Initially, I wrote on paper with a pen, but soon had to switch to a more readable fibre-tip marker, and enlarge my manuscripts. The second stage was always to type from the MSS into my computer, but, then, the text on screen became too small to see, so a bigger screen became a regular update. And latterly, I couldn't even read my own manuscripts, so I switched to direct-typing-in, using a truly mammoth screen.

Another development in method was also derived from teaching, for I frequently changed course within a lesson, as a response to evident problems and questions - and, following an unresolved question, I even made sure I had cracked it by the next lesson.

So, when writing, I had to be my own sternest critic, particularly during a reading of what I had just produced: necessary additions were then carried out, and inserted within the prior text. Many times, it was incomplete premises, assumptions or prior ideas that were mistaken, or even missing, so resolving these produced Prefaces and Introductions, and topics often stretched into series of related papers.

Perhaps the most important development was the Necessary-Interruption Technique: whenever I realised the need for a new area of work, I immediately suspended the current writing, and diverted to researching the as-yet-unresolved question that had to be settled before I could complete the prior paper. But, most directly-available information was rarely an Explanation, so I regularly had to sit down and think it through for myself.

Brief notes helped guide my later writing, but clearly the Thinking Sessions were becoming more and more vital. The gathering of mere Knowledge was clearly insufficient!

In the present World, Knowledge has, indeed, become the main objective, but that is surely NOT the main purpose of Education: that is, now, and always has been, the Understanding of phenomena.

In addition to "How?", we must also know "Why?"

Instead of the mere dissemination of prior Knowledge, the emphasis changed markedly to explaining why things behaved as they did, and my writing became Original Theoretical Research.

As a qualified physicist, I tackled the infamous Double Slit Experiments, in Sub Atomic Physics, and managed to arrive at a comprehensive Explanatory Theory, at variance with the now consensus Copenhagen Interpretation.

In research into the work of the philosopher GWF Hegel, and his famous student Karl Marx, I finally arrived at an original Theory of Emergences.

And, elsewhere, I also managed enhancements to Darwin's Theory of Natural Selection, as well as significant improvements in Stanley Miller's famous Primeval Atmosphere Emulation Experiment, which had naturally generated the absolutely vital amino acids in less than a week.

Having had a very wide experience over a long career, I was able to not only write, but also make original contributions across a range of disciplines, and despite my increasing blindness (I have advanced Macular Degeneration), I have accelerated my rate of production considerably.

In the coming Summer (2017) we will celebrate with the 100th Issue of the SHAPE Journal online, which will be a Special - composed entirely of the Illustrations, Montages, Diagrams, Graphic Art and even YouTube videos - all selected or created by my son and colleague, Michael Schofield, who is currently studying for his Ph.D. in Photography at Leeds University, England.

It will be quite an issue!

12 November, 2016

New Special Issue: Computerised Solutions

Computerised Solutions,
The Nature of Mathematics
and The Necessary Revolution in Philosophy

The Myth of the Intelligent Computer:

With so many media fairytales about so-called "Intelligent Computers", projected with confidence by seemingly all pundits into all our futures, we must, from a position both well-informed and sound, trounce such hopeful, or even fearful, myths completely.

The statement, "The computer says...", is, of course, total nonsense, as all computer programs are written by people, AND, crucially, are limited to means considerably more restricted than the methods that can be carried out in the best of Human Thinking.

Indeed, they are mostly iterative techniques for getting closer and closer to a sought, quantitative solution. Their value is that they can carry out such processes at colossal speeds, delivering useable results very quickly indeed.
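Such an iterative technique can be sketched in a few lines. As an illustration only (no specific algorithm is named above), here is the classic Newton-Raphson iteration for a square root: each pass mechanically refines the guess towards the sought quantitative answer, at speed, but with no understanding whatsoever of what a square root is.

```python
def newton_sqrt(target, tolerance=1e-12):
    """Approximate the square root of a positive number by
    Newton-Raphson iteration.

    Each pass refines the guess; the loop merely repeats a fixed
    rule at speed. It converges on a number, but it does not
    'understand' anything about square roots.
    """
    guess = target if target > 1 else 1.0
    while abs(guess * guess - target) > tolerance:
        # Average the guess with target/guess - the Newton step.
        guess = 0.5 * (guess + target / guess)
    return guess

print(newton_sqrt(2.0))  # prints a close approximation of the square root of 2
```

The loop embodies exactly the point made above: colossal repetition of a human-devised rule, not thought.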

But computers cannot think...

11 November, 2016

The People Are Many And Their Hands Are All Empty

Oh, what’ll you do now, my blue-eyed son?

Oh, what’ll you do now, my darling young one?

I’m a-goin’ back out ’fore the rain starts a-fallin’

I’ll walk to the depths of the deepest black forest

Where the people are many and their hands are all empty

Where the pellets of poison are flooding their waters

Where the home in the valley meets the damp dirty prison

Where the executioner’s face is always well hidden

Where hunger is ugly, where souls are forgotten

Where black is the color, where none is the number

And I’ll tell it and think it and speak it and breathe it

And reflect it from the mountain so all souls can see it

Then I’ll stand on the ocean until I start sinkin’

But I’ll know my song well before I start singin’

And it’s a hard, it’s a hard, it’s a hard, it’s a hard

It’s a hard rain’s a-gonna fall

Bob Dylan

Richard Wolff quotes this great song in his latest update on Trump and Capitalism... Worth a good listen this one.

10 November, 2016


Protesting Trump's election as US President



For, now you must know

These things NEVER reflect the needs and wants of working people. Even if Clinton had got in it would be only marginally better for most Americans. The sad oscillation between the mega-rich Republicans and mega-rich Democrats has to be finally terminated - forever.

Bernie Sanders was only a pale hint of the necessary 

Shape of Things to Come

The next step gets steeper every single day, and we must climb it soon before it precipitates the inevitable global war....

...as happened last time

Richard Wolff on Trump's election:

09 November, 2016

Ideal and Real Worlds?

Mathematical Landscapes
by Zarko D. Mijajlovich

The crucial significance of both Simplification and Idealisation, in Mankind's attempts to first describe, and then understand, Reality, must be recognised: they have enabled progress in this demanding endeavour, while, at the same time, guaranteeing both mistakes and even impasses - some of which seem totally impossible to transcend, and have remained so for extremely long periods in Man's relatively short history. Indeed, it is this contradictory status, which these "gains" most certainly possess, that makes their relationships to what actually pertains in Reality-as-is so difficult to grasp.

Mankind's frequent solution to such conundrums has always been its remarkable flexibility, which allowed the pragmatic stance embodied in, "If it works, it is right!", to actually step-around such impasses, merely on the basis of experience - without necessarily understanding why, and in what circumstances, a particular assumption works. Such "successful-steps" appear everywhere in the panoply of Man's ideas of his World, and deliver, therefore, many such universal "sticking-plaster-solutions" throughout that constructed-view of Reality.

In contrast, with a purely holist view of Reality, such means may seem wholly wrong, but that clearly isn't the case. Such eclectic methods can, and indeed do (to an extent) reflect Reality, especially in certain situations.

The seeming contradiction between "Everything affects everything else" and fixed, Idealised Forms and Relationships does NOT make them a mutually-exclusive and irreconcilable pair of stances.

Indeed, because various persisting natural situations, occurring at particular times - which we call Stable Situations (or natural Stabilities) - simplify matters, these assumptions can indeed approximate to what pertains there.

Such a Stability is, of course, due to an arrived-at "balance" of contradictory factors, which can be, for an extended period, self-maintaining, and which, consequently and quite-naturally, simplifies the situation in an overall way, delivering an extractable combined relation.

We don't know why this occurs, but we clearly see it, and can extract and use it, for as long as the Stability persists!

Now, situations can naturally occur, which are close to being stable, and these allow glimpses of such simple relationships, which observers can latch on to as the key-producing-parts of such complex situations, and, if they are extractable, they can become the assumed-to-be producing "idealised components" of Reality. So, such Idealisations can indeed reflect such situations, and can be successfully used to predict what will happen under any particular, non-dissociating changes, within a Stability.

But, even so, the question, "Why?" is never addressed.

So, disentangling such naturally-complex, yet reasonably-stable situations is often impossible. There can be no doubt that multiple, individual factors are involved, and also that, in specially arranged-for circumstances, a particular Single Factor can actually dominate, so that its individual contribution, in those circumstances, can be extracted. But, it will not be the same as it would be in other, non-dominant situations.

That assumption - that it will always be the same, is the flaw in Idealisation!

The extracted Form of the individual contributing factor, taken from the dominant situation, is merely the Idealised Version of that factor, and the assumption that it is always exactly like that in all situations, that it is eternal, is quite definitely incorrect!

The assumption (that it is fixed) depends upon the universally-adopted Principle of Plurality, which underpins the whole of the usual methodology of Science. So clearly, such a mistake is exceedingly important.

The much more truthful, but currently "technologically-unusable" stance, is that delivered by the totally-opposite Principle of Holism!

Andy Goldsworthy

For multiple simultaneous factors, acting together are neither eternal, nor do they merely add-together, in varying amounts, to produce all possible situations. Indeed, every single one of them is different in different situations: for they most certainly affect one another!
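The contrast between the two principles can be stated concretely. As an illustrative sketch only (the two "laws" f1 and f2 below are invented stand-ins, not drawn from any real physics), a pluralist model treats the overall outcome as a mere weighted sum of fixed, independent laws, whereas factors that modify one another produce an outcome no choice of fixed weights can recover:

```python
# Illustrative only: f1 and f2 are made-up stand-ins for idealised 'Natural Laws'.
def f1(x):
    return 2.0 * x       # first idealised 'law'

def f2(x):
    return x ** 2        # second idealised 'law'

def pluralist(x, w1=1.0, w2=1.0):
    """Plurality: the outcome is just a weighted sum of fixed laws."""
    return w1 * f1(x) + w2 * f2(x)

def interacting(x):
    """The holist case: one factor modifies the other, so no fixed
    weights w1, w2 can reproduce this outcome for all x."""
    return f1(x) * (1.0 + 0.1 * f2(x))

print(pluralist(3.0))    # 2*3 + 3**2 = 15.0
print(interacting(3.0))  # 6 * (1 + 0.1*9) ≈ 11.4 - not any weighted sum of f1 and f2
```

The pluralist sum stays decomposable by construction; the interacting version, however simple, already cannot be rebuilt from its "laws" taken separately.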

They can, however, be organised to approach Plurality, for a given individual, targeted factor, by the careful-farming, and then sustained-maintenance of the exact same conditions, under which that particular simplified and idealised factor was made overt and then extracted.

Now, perhaps surprisingly, these transforming pluralist assumptions do not prohibit effective Use! As long as the appropriate, farmed conditions are provided for a given idealised factor, that will deliver what its extracted Form predicts.

But, to get anywhere near what the original, unfettered, many-factor, complex situation actually produced (though really, even then, only something similar) would always need a new application for each-and-every extracted factor, each in its own farmed environment, and carried through as a complete sequence over time!

[See all production processes in Industry for proof of this!]

Anhydrous ammonia plant, ca. 1954

But, there was, still, a major fly-in-the-ointment: the pragmatic assumptions which did deliver-usefully in production, were always significantly-damaging in attempts to actually explain phenomena. For, any individual equations produced by those pluralistic methods could not be brought together to explain the original unfettered situation.

Initially the Explanation was always attempted holistically, in terms of substances-and-their-properties, but though successful, these never gelled with the extracted equations.

So, the two approaches gradually changed in their roles. While the equations were considered reflections of underlying eternal Natural Laws, the holistic explanations became something of an apologetic-accompanying-narrative: a tale to tell to the uninitiated, who couldn't possibly appreciate the beauty and power of the abstracted eternal Natural Laws.

The only solution to this contradictory situation was to stress the Principle of Plurality, and insist that the unfettered phenomenon was merely an addition of the full set of eternal Natural Laws, in varying quantitative proportions.

Theory in these circumstances had been abandoned for mere Productive Reliability. Pragmatism had re-established its old dominance, and if anyone asked for an explanation, they were now just given the equation.

Gradually, participants began to consider that "Theory" was merely the skilful manipulation of just such equations, to fit all possible circumstances. And, of course, that wasn't ever correct, or explanatory! It was a frig: any consistency evident was that of Idealised Mathematical Form, and NOT of physically-existing Reality.

Real Physical Theory was rapidly being abandoned, and the result is the current Crisis in Physics, which has existed ever since the decision at the Solvay Conference in 1927, when Bohr and Heisenberg defeated Einstein and Schrödinger with their Copenhagen Interpretation of Quantum Theory.

Mathematics is never the Essence of Reality, but only the study of idealised Forms, which are always just inviolable patterns or formal relations. To make them primary, as the "actual Drivers of Reality", instead of simplified and idealised forms derived from a carefully-tailored Reality, is clearly Idealism - a far cry from the avowed Materialist basis of Science!

Math Fantasy

The whole strategy based upon Plurality was a means of approximating to Reality via both farmed-situations and the simplification and idealisation of carefully-isolated, individual relations. Pragmatically, such means could be effectively-used to achieve intended outcomes, but could never deliver actual explanations!

When it came to understanding what was really going on, and "Why?", it was gravely flawed and limped across multiple impasses via old fashioned "suck-it-and-see" Pragmatism!

We have to be crystal-clear on all this!

Mathematics (like Formal Logic) is a valid system of study for idealised Forms.

But, it is not the underlying driving basis of concrete Reality!

It was, and still is, a brilliant man-made simplification of aspects of Reality, which, when used in appropriately simplified and maintained Domains, actually bends a situation to something approaching that idealised form. It is, however, never appropriate in unfettered Reality, or in any attempt to understand and explain phenomena as they actually occur, naturally, in Reality-as-is!

Fundamentally, the pluralist approach and methodology, separates individual factors, artificially, from their natural joint occurrences, to use each one separately and sequentially, in tailored situations, to enable reliable predictions.