18 October, 2015

Man & Reality II



Applied Mathematics - The Toolbox!

Though it is rarely evident in the teaching of the subject, there are very different roles for maths in the modern world. Perhaps the first historically, and the most prosaic, is its use in production – in manufacture of all types. When relationships were detected in nature, the requirement was to find and fit a mathematical form to the revealed relation, so that quantitative questions could be asked and answered easily. Such “fitting” did not require any theory to be elaborated. No philosophy was involved. A mathematical artisan could rummage around in his toolbox of forms and find a rough fit, then use a few modifications and adjustments to effect a pretty useful final result. The maths would then be indispensable in the effective use of the revealed relation in diverse ways. Over what amounts to millennia, mankind developed a wide range of techniques which facilitated such undertakings, using every conceivable mathematical invention to purely practical ends.

This cycle of discovery, fitting of maths forms and USE has developed into a clearly delineated area, which keeps clear of theory (except as a source of yet more tools) and engages in practical tasks.

We call it Technology, or even Engineering, and its “fitting” activities are often very pragmatic, and at variance with the concerns of pure scientists, who demand answers to the question “Why?” The pragmatists tackling concrete, real-world problems are much more interested in the question “How?”

And the incessant clamour for maths to facilitate their labours has led to a rich set of techniques which could only rarely be said to help in understanding. These techniques are basically superlative “fitting” methods. A few examples will give the clearest idea of what they are like. The most famous is the method of “Equating Coefficients” in generalised polynomial equations. Such generalised polynomials can have no theoretical basis, but can be put forward as the first pragmatic step in covering a well-researched relationship (liberally supplied with data) in Nature.

So general, in fact, is this form that every single term is given an unknown constant – not much good so far! But with sufficient sets of related data from the real world, these can be substituted into the polynomial for a number of different cases, and the result is a coherent set of simultaneous equations in the unknown “constants”. There are then algebraic methods (and, later on, determinants) that enable these equations to be solved for the exact values of the unknowns. And when these are substituted back into the general polynomial, we end up with a mathematical formula that fits the facts.
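
To make the procedure concrete, here is a minimal sketch (in Python, purely my choice for illustration; the data pairs are invented, not real measurements). A general quadratic is assumed, three observed (x, y) pairs are substituted in, and the resulting simultaneous equations are solved for the unknown constants:

```python
import numpy as np

# Invented (x, y) pairs standing in for sets of related real-world data.
xs = np.array([1.0, 2.0, 3.0])
ys = np.array([6.0, 17.0, 34.0])

# General quadratic: y = a*x^2 + b*x + c.
# Each data pair, substituted in, gives one linear equation in (a, b, c).
A = np.column_stack([xs**2, xs, np.ones_like(xs)])

# Solve the simultaneous equations for the unknown "constants".
a, b, c = np.linalg.solve(A, ys)
print(f"fitted formula: y = {a:.3g}*x**2 + {b:.3g}*x + {c:.3g}")
```

The three invented pairs happen to satisfy y = 3x² + 2x + 1, so the solver recovers exactly those constants – a perfectly useable formula, arrived at without a word of explanation.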

Notice the total absence of explanation in these processes. They establish a solid cycle between experimental data and mathematical expressions – a cycle that can, and does, produce powerful, useable formulae.


Another similar process is the so-called “Fourier Analysis”, where almost any time-based repeated pattern in nature can be “fitted up” by the addition of multiple “sine waves”, suitably weighted. The method does work, but it would be incorrect to say that it throws any real light at all on the actual causality of the situation being modelled – quite the reverse. If anything, such a method hides the causality. It is interesting to see that a modern example of such an approach is actually used to produce a so-called “theory”. This is the renowned String Theory, which turns out to be of exactly the same ilk. There, oscillations of strings (?) are added together to produce Everything (?) in the Universe. And, if we are trotting out famous examples, we must not omit the enduring Ptolemaic Theory of the heavens, which matched the recorded data with the ever more complex addition of epicycles to model the movements of planets, sun and moon as observed.
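
A hedged sketch of that “fitting up” (again in Python; the square wave below is merely a stand-in for some repeated pattern in nature, and the weights are the textbook Fourier series, assumed here for illustration):

```python
import numpy as np

# A square wave standing in for some time-based repeated pattern.
t = np.linspace(0.0, 2.0 * np.pi, 1000)
square = np.sign(np.sin(t))

# Fit it up with weighted sine waves: the classic series
# (4/pi) * sin((2k+1)t) / (2k+1), summed over odd harmonics.
approx = np.zeros_like(t)
for k in range(50):
    n = 2 * k + 1
    approx += (4.0 / np.pi) * np.sin(n * t) / n

# The sum tracks the pattern closely (Gibbs wiggles at the jumps
# aside), yet the sine waves say nothing about what generated it.
print("rms error:", np.sqrt(np.mean((square - approx) ** 2)))
```

The fit succeeds; the causality stays hidden, exactly as argued above.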

These are a few examples of the power (and weaknesses) of “fitting”. Mankind could not refine the Ptolemaic Theory into the Copernican System, could it? For over a thousand years the former had held sway, AND was a barrier to a better theory. A revolution in thinking (and, I believe, in society) was necessary before this edifice was pulled down and something nearer the truth erected.

Perhaps I should include one final example. I am sure that I have made the point I wish to make, but I feel that this last inclusion is nonetheless unavoidable. It involves that icon of technology – the computer. Many calculations and manipulations in mathematics proved to be long-winded and tedious, and it soon became clear that such tasks would perhaps best be carried out by some mechanistic aid – such as computers. These tireless mechanisms, given an effective algorithm (a computer program, or set of instructions), could trawl through the data until an acceptably accurate result was achieved. The very inclusion of the computer, though, caused an interesting regression in techniques. Over the centuries, many almost mindless, iterative techniques had been developed for finding the quantitative information required without understanding the causal features involved. These had not been attractive to humans, not only because of the mind-numbing boredom of repeated application, but also because they added nothing to our understanding. Computers, as you may guess, changed all that. Pragmatists wanting numbers to a certain accuracy were quite happy to consign the job to a computer program, which could churn away at lightning speed, and produce exactly what was required. The era of “the computer says” was born.
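
As a hedged illustration of one such mindless technique, here is the ancient bisection method (the example equation is my own choice). No insight is needed beyond the fact that the function changes sign across the bracket, yet the computer will happily grind out a root to any requested accuracy:

```python
def bisect(f, lo, hi, tol=1e-10):
    """Mindless iteration: repeatedly halve the bracket [lo, hi]
    until it is narrower than tol. No understanding required --
    only that f changes sign somewhere inside the bracket."""
    assert f(lo) * f(hi) < 0, "root must be bracketed"
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

# "The computer says": a root of x^3 - 2x - 5 = 0, to ten decimal places.
root = bisect(lambda x: x**3 - 2 * x - 5, 2.0, 3.0)
print(root)  # approximately 2.0945514815
```

The answer arrives to the requested accuracy, and not one line of the loop explains why the root lies where it does.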



Computers paper over the cracks

Computers had another significant effect on the modelling of reality. The inevitable breakdown of individual formulae at domain boundaries was obviously a major problem in constructing effective computer-based models, and restricted such models to very limited contexts. But there was a way round this difficulty! Computer scientists had been including tests in programs since the beginning, re-routing the path to different sets of instructions. But normal procedural languages involved detailed programming of all the tests and switches, and because instructions were only obeyed sequentially, there were often delays until the requisite tests had been made. The solution was a new breed of computer languages called Object-Oriented Programming Systems (OOPS!). These languages were effectively “interrupt driven”. They could be given rules of general significance, kept separately from the sequences of instructions. These rules encapsulated the precise conditions under which one domain had become defunct and another had to be set up, with its own, different, instruction sequences, and they were handled so as to be ever-available. This meant that the language implemented a runtime in which the “house-keeping” roles CAME FIRST. That is, the rules were tested at every single time-slot cycle. A positive result would mean that the current sequence of instructions was interrupted and the switch in mode effected.
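
A minimal sketch of the scheme just described (in Python for illustration; every name here – the rules list, the handlers, the heating example – is invented, and stands for no particular language’s actual mechanism):

```python
# Instruction sequence for the "liquid" domain: keep heating.
def heat(state):
    state["temp"] += 7.0

# A different instruction sequence for the "boiling" domain:
# temperature simply held at the boil.
def plateau(state):
    pass

handlers = {"liquid": heat, "boiling": plateau}

# Each rule pairs a threshold condition with the domain it switches to.
rules = [(lambda s: s["temp"] >= 100.0, "boiling")]

state, domain = {"temp": 85.0}, "liquid"
for cycle in range(5):
    # House-keeping CAME FIRST: every rule is tested at every cycle...
    for condition, new_domain in rules:
        if condition(state) and domain != new_domain:
            domain = new_domain   # ...and a hit interrupts the current
            break                 # instruction sequence, switching mode.
    handlers[domain](state)
    print(cycle, domain, state["temp"])
```

The switch fires on a bare threshold; nothing in the rule itself carries the dynamics of the transition.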

These features effectively papered over the cracks between different domains. As soon as the conditions for a change were encountered, the switch was implemented. No understanding of why the switch was necessary was involved. Some threshold, or set of thresholds, was designated as sufficient to implement the change. Significantly, the transition seemed “seamless” and “natural”. How lovely!

The dynamic content that always accompanies such changes was, of course, totally absent from these transitions. It was thresholds – Switch! I feel impelled at this point to bring in my evergreen anecdote about reaction fronts in liquids.

From time immemorial, budding scientists had been told to “stir well” and wait for equilibrium conditions before any meaningful data could be taken from an experiment. Breaking this rule led to all sorts of inexplicable data, and no conclusions could be drawn. In the 1980s I was lucky enough to work with some researchers who consciously disobeyed this rule. They wanted to study the reaction fronts formed when two different liquids reacted chemically. They never stirred! They almost forgot to breathe, as the slightest disturbance would ruin their experiments. They chose a reversible reaction that could quite easily be caused to oscillate to and fro between the products at each end, and they carefully chose one whose products were of significantly different colours. The test tubes unfolded beautiful, striped structures as the oscillation proceeded, and the reaction fronts were clearly shown to be TOROIDAL SCROLLS. So much for stirring and equilibrium then!



Innumerable further examples could be put forward here, but I am sure that the point has been established. But, “Is that all there is?”, as they say. No, it isn’t! The methods described above take mathematics that was developed disinterestedly by pure mathematicians, and turn it to purely pragmatic ends. Indeed, this approach has been consolidated into what may be called a philosophy: the philosophy of Pragmatism.

This is the second in a new series exploring philosophy and mathematics: Man & Reality. Part III will be published here next Monday. Part I can be found here.
