
11 September, 2017

Face Recognition Surveillance




On whom, by whom and why?


A recent TV programme in the UK considered, at length, the opinions of a police commander and various others engaged in "Counter Terrorism" (covering both the past problems with the IRA and the current ones with Islamist jihadists), with regard to new facial recognition software that could be linked to surveillance cameras at key places.

So, before I consider the arguments for and against the current proposals, I feel that I must refer back to the case made in support of the already-existing surveillance cameras, which, when first proposed, were argued to be vital in combating crime.

And now this country has the greatest concentration of such cameras in the world, so clearly that argument won the day.





So, I have to ask, "Did they really make the difference in the fight against crime?"

For the answer is surely "No!", and one of the reasons has to be the colossal overhead of gathering, collating and generally studying massive quantities of such footage, time after time, over significantly extended periods.

They certainly didn't have the manpower for that, and would surely require substantial resources and people for any new initiative, for no matter how "automated" such systems may now have become, it will still take intelligent and trained people to make them really work.

And, of course, the vast reduction in police numbers under the Tories means that they couldn't do it. So, they will certainly now argue for an increase in police and intelligence numbers, but make damn sure that the increase is mostly in the latter, and not in the former, category!

The software that would have to be involved (and I am a software developer, so I have a good idea of what would be needed) would have to be trained, by being supplied in advance with prior images of a suspect individual, to parameterise their facial features in a wholly unique way.




A single shot from a particular angle just wouldn't do!

The images would have to be gathered in many different circumstances, from many angles of shot, and under many lighting conditions. But, with sufficient exposure of this sort, a reliable means of identifying an individual could be put into a database, and thereafter made available for subsequent identification by similar means.
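
To make that concrete, here is a minimal sketch of the enrolment-and-matching idea. It assumes the open-source face_recognition library purely for illustration; the programme named no particular software, and all names and thresholds below are my own illustrative choices, not a description of any actual system.

    # Hedged sketch only: enrol a suspect from many shots, then test a new frame.
    # Assumes the open-source face_recognition library; nothing here describes
    # any real surveillance deployment.
    import numpy as np
    import face_recognition

    def enrol(image_paths):
        """Average the face encodings from many shots (different angles,
        different lighting) into one parameterisation of the individual."""
        encodings = []
        for path in image_paths:
            image = face_recognition.load_image_file(path)
            found = face_recognition.face_encodings(image)
            if found:                       # skip shots where no face was detected
                encodings.append(found[0])
        return np.mean(encodings, axis=0) if encodings else None

    def seen_in(frame_path, known_encoding, tolerance=0.6):
        """True if any face in a new camera frame lies within the distance
        tolerance of the enrolled encoding (0.6 is the library's usual default)."""
        frame = face_recognition.load_image_file(frame_path)
        candidates = face_recognition.face_encodings(frame)
        return any(np.linalg.norm(c - known_encoding) <= tolerance
                   for c in candidates)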

Now, the above points were NOT emphasised in the content of the programme, but they are important, because the questions arise, "How is such a definitive set of shots to be compiled?", and, "Who decides who should be targeted?". The arguments aired in the programme were based upon a list of 23,000 known Islamist terrorism "sympathisers", and the fact that monitoring a single individual for just a day or two could involve 40 different officers, if carried out by current man-to-man surveillance methods.

Clearly, those charged with "keeping an eye" upon possible suspects were greatly in favour of a distributed system of surveillance cameras with access to a comprehensive database compiled by such a software system, along with an Interrogator System for matching just-seen faces against that record. But if what was seen was only a momentary glimpse, at an inconvenient angle, it would be unlikely to be sufficient; so the obvious requirements would include camera positions optimised for the most useful, easily-analysable views, and a following system of other cameras to give a chance of confirming the supposed recognition, no matter how inadequate the new images were.

Now, IF all this is to be gathered automatically, without decision-making operators, then recognition alone wouldn't be enough. Each recognition would also have to be timed, specifically situated and linked to other recognitions at various places and times, and checked for similar movements by other contacts on the database, with newly-occurring contacts even having to be assessed as possible additions to the list.
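
As a rough, hypothetical sketch of what "timed, situated and linked" might mean in practice: recognitions grouped into per-person movement trails, with a crude check for two people moving together. All the structures and names here are invented for illustration.

    # Hypothetical sketch: linking timed, situated recognitions into trails.
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class Recognition:
        person_id: str      # identity matched against the watch-list database
        camera_id: str      # which camera produced the match
        timestamp: float    # seconds since some chosen epoch

    def movement_trails(recognitions):
        """Group recognitions by person and sort each group by time,
        giving a per-person trail of (timestamp, camera) sightings."""
        trails = defaultdict(list)
        for r in recognitions:
            trails[r.person_id].append((r.timestamp, r.camera_id))
        for trail in trails.values():
            trail.sort()
        return dict(trails)

    def seen_together(trails, a, b, window=300.0):
        """Crude test for 'similar movements': were a and b recorded at the
        same camera within `window` seconds of one another?"""
        b_sightings = trails.get(b, [])
        return any(cam_a == cam_b and abs(t_a - t_b) <= window
                   for t_a, cam_a in trails.get(a, [])
                   for t_b, cam_b in b_sightings)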

We are talking about a significant surveillance system, which would unavoidably capture many images of the general public too. Clearly, such a system could, and indeed most definitely would, be misused!

What would stop it being used against political opponents of the Government, for example? 





Notice how meetings by Jeremy Corbyn with Sinn Fein politicians many decades ago were used to say he supported terrorism!

This could, very easily, be the first step towards a real surveillance state!

And, with the increasing crisis of the Current Capitalist Economic System, it would undoubtedly be used against all agitators for the end of that system! They would be labelled as terrorists, and both monitored and hassled in all possible ways to disrupt their agitations...

12 November, 2016

New Special Issue: Computerised Solutions







Computerised Solutions,
The Nature of Mathematics
and The Necessary Revolution in Philosophy



The Myth of the Intelligent Computer:

With so many media fairytales about so-called “Intelligent Computers”, projected with confidence, by seemingly all pundits, into all our futures, we must, from a well-informed and sound position, trounce such hopeful, or even fearful, myths completely.

The statement, “The computer says...”, is, of course, total nonsense, as all computer programs are written by people, AND, crucially, are limited to methods considerably more restricted than those available to the best of Human Thinking.

Indeed, they are mostly iterative techniques for getting closer and closer to a sought, quantitative solution. Their value is that they can carry out such processes at colossal speeds, delivering usable results very quickly indeed.
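
A minimal example of the kind of iterative technique meant here (my own illustration, not anything from the Special Issue): Newton's method, which simply repeats a refinement step until the answer is close enough.

    # Newton's method for a square root: a typical iterative numerical routine
    # that closes in on a quantitative answer step by step.
    def newton_sqrt(x, tolerance=1e-12):
        """Approximate the square root of a non-negative x by successive refinement."""
        guess = x if x > 1 else 1.0             # any positive starting guess will do
        while abs(guess * guess - x) > tolerance:
            guess = 0.5 * (guess + x / guess)   # refine the estimate
        return guess

    print(newton_sqrt(2.0))   # approximately 1.41421356...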

But computers cannot think...




02 April, 2011

Understanding Intelligence?

If I was going there, I wouldn't have started from here!

Photograph by Mick Schofield

The expression "You can't see the Wood for the Trees!" is ever resonant in the ways that we usually consider the World. I never realised it before, but it relates to our profound belief (our assumption) that Plurality is the way of the World; that the essence of all phenomena is contained within their "constituent Parts", and the converse of this - that properties of the Whole can be totally reproduced by means of the mere provision and juxtaposition of all these Parts.

Indeed, the major criticism of Plurality is that it exactly equates the Parts revealed, isolated and extracted by the artificial erection of Domains with their "brother" relations as they exist in the coherent, real-World Whole.

But, of course, that is NOT the case! It is merely a "useful" simplification used by scientists.
No matter how much we learn about the specimen forms of trees, grown in splendid and perfectly arranged isolation, such knowledge can never reveal, from that alone, the full qualities of the Wood or Forest.

Yet, this assumption is ubiquitous (hence the saying above to counter it), and once you realise it, clear cases of it appear absolutely everywhere, and then stick out like sore thumbs, where previously they were "invisible". In a recent New Scientist (2784) there is an article entitled The 12 Pillars of Wisdom which is introduced in the very first sentence with:
 
    "Can we ever understand intelligence? Only by building it up from its component parts"

The point is proven, is it not?
Now I could belabour the point throughout the whole length of that contribution, but I won't. The key point necessary has been made! Clearly the writer believes that, by bringing together the many aspects of "intelligence" extracted by various pluralist means, he can deliver the nature of intelligence. But that is impossible. Many new things may be there, and the article will be worth reading for those things alone, but they will not, and indeed cannot, deliver the secret of intelligence!

That would certainly involve a very different approach, grounded soundly upon some understanding of the episodes of revolutionary qualitative change known as Emergences. For we can only grasp such things when we see how all such changes emerge, NOT as the consequence of the mere juxtaposition and summation of small incremental changes, but as the reality-changing result of dramatic revolution.





05 March, 2011

I, Algorithm


(or Artificial Intelligence with Probabilities)

This article in New Scientist (2797) by Anil Ananthaswamy describes how the old (and now dead) Artificial Intelligence based on Formal Logic and Neural Networks has been re-vamped by the inclusion of Noise and Probabilities. It is, I’m afraid, not a new and great step forward, but an old “solution” to the unanswerable problem, “How do you improve upon a purely formal and pluralistic, and hence totally unchanging, artificial system, which is intended to deliver some sort of machine-based intelligence?”

So, instead of strict determinism only, you merely need to add a bit of random chance, and then deal in the probabilities of various alternative outcomes.

To put this new system into the language of the participants, these new systems of Artificial Intelligence “add uncertainty to Formal Logic – in order to reason in a noisy and chaotic World”. It is a proposed “new” application of the same standpoint as was used in the Copenhagen Interpretation of Quantum Theory almost a century before. But the real world is NOT basically deterministic PLUS “noise”! It is holistic! And to attempt to analyse it pluralistically is doomed to failure. So the trick, as usual, is to continue with that old methodology, but to heighten the “flavour” with the added “spice” of Random Noise and the coherence of Statistical Methods – using averages and probabilities on top of a still wholly deterministic basis.
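
To make the standpoint being criticised concrete, here is a small illustrative sketch, entirely my own and not from the article: a purely deterministic rule, with random noise added on top, from which probabilities of the alternative outcomes are then tallied.

    # Illustrative only: a deterministic rule, plus added random 'noise',
    # with probabilities of the alternative outcomes estimated by repetition.
    import random

    def deterministic_rule(signal):
        """A purely formal rule: raise the 'alarm' if the signal crosses a threshold."""
        return "alarm" if signal > 1.0 else "no alarm"

    def outcome_probabilities(true_signal, noise_sd=0.5, trials=100_000):
        """Add Gaussian noise to the same underlying signal many times over,
        and report how often each outcome occurs."""
        counts = {"alarm": 0, "no alarm": 0}
        for _ in range(trials):
            observed = true_signal + random.gauss(0.0, noise_sd)
            counts[deterministic_rule(observed)] += 1
        return {outcome: n / trials for outcome, n in counts.items()}

    print(outcome_probabilities(0.9))   # roughly {'alarm': 0.42, 'no alarm': 0.58}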

Now, to echo the revolution that occurred in Sub-Atomic Physics may appear to be an important development, and may, in the same way, allow better predictions in this sphere as it did in Physics, but in BOTH areas it certainly does NOT deliver the Truth! In this particular instance it seems to apply very well in the area of infectious diseases, but we have to be clear about why it works there, and also, most importantly, why it isn’t the general solution that it is claimed to be.

It works when many factors are acting simultaneously, and with roughly equal weights. In such circumstances many alternative diagnoses are available, and hence various distinct results are possible. The important question is, “What is the correct diagnosis in a given particular case?”

Now Neural Networks had delivered a system that could be modified to more closely match the real weightings of various alternative situations, but they were crude to say the least: absolutely NO indication of why and how these changes were effected was revealed. It was merely data without a cause!

Now this new version of AI returns to such ideas, but adds the 1764 ideas of Bayes, embodied in the Theorem which carries his name, which can be stated as:

if we know the conditional probability of an effect, given a cause, together with the overall probabilities of the cause and of the effect,
then
we can obtain the reverse conditional probability of the cause, given the effect: P(Cause | Effect) = P(Effect | Cause) × P(Cause) / P(Effect) - [Bayes' Theorem]

And this provided, for the first time, a basis for reasoning about Causes and Effects, not only in the usual direction but backwards too (that is, diagnostically).

The constructed systems were so-called Bayesian Networks, where the variables were initially purely random, that is of equal weight, BUT thereafter dependent on every other involved variable. Tweak the value of one and you alter the probability distribution of all of the others. Now, this, on the face of it, appears to be very close to Holism, but has a clearly fictitious starting point, where all are equally probable. The “saving grace” was then that, if you knew some of the variables, you could infer the probabilities of the other contributions. Now, when you think about it, it doesn’t seem likely. Starting from a wholly fictitious position, why should the inclusion of some reliable data move ALL of the probabilities in the right direction? Clearly such systems and associated methods would have to be very close to Iterative Numerical Methods, and hence dependent on a convergent starting point for a useful outcome. And, as with such numerical methods, these too needed to be refined and improved until they began to become much more reliable than prior methods.
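
A toy numerical illustration of that claim, with invented numbers of my own rather than anything from the article: in a minimal network where two symptoms share a common cause, observing one symptom shifts the probability of the cause, and with it the probability of the other symptom.

    # Toy Bayesian-network illustration with invented numbers: flu is the common
    # cause of fever and cough, so evidence about fever shifts beliefs about cough.
    P_FLU = 0.05
    P_FEVER_GIVEN_FLU = {True: 0.90, False: 0.10}   # P(fever | flu), P(fever | no flu)
    P_COUGH_GIVEN_FLU = {True: 0.80, False: 0.20}   # P(cough | flu), P(cough | no flu)

    def flu_given_fever():
        """Bayes' Theorem used in the diagnostic (backwards) direction."""
        joint_flu    = P_FLU * P_FEVER_GIVEN_FLU[True]
        joint_no_flu = (1 - P_FLU) * P_FEVER_GIVEN_FLU[False]
        return joint_flu / (joint_flu + joint_no_flu)

    def prob_cough(p_flu):
        """Marginal probability of cough for a given degree of belief in flu."""
        return p_flu * P_COUGH_GIVEN_FLU[True] + (1 - p_flu) * P_COUGH_GIVEN_FLU[False]

    print(prob_cough(P_FLU))              # prior belief: P(cough) = 0.23
    print(prob_cough(flu_given_fever()))  # after seeing fever: about 0.39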

Even so, it is clear that such methods are full of dangers. How do you know whether you are considering all the necessary factors? Gradually researchers began to produce models in certain areas which were much more reliable. The key was to build them so that new data could be regularly included, which modified the included probability distributions.

But, as it did not deal with answering the question, “Why?”, but only the question, “How?”, it was still dependent on the old methodology, even if it was overlaid with Bayesian add-ons.

Indeed, to facilitate such programs, new languages began to be developed, specially designed to help construct such self-modifying models.

To give some idea of their powers AND limitations, it is worth listing the principles on which they were based.

1. Equal likeliness of all contributing factors must be the starting point.
2. Algorithms must be very general.
3. New data must be straightforwardly included to update the probabilities.

Now, this is clearly the ONLY way that the usual pluralistic conceptions and analyses can be used in a holistic World. The basis is still Formal Logic, but real, measured data can modify an initial model in which everything affects everything else; as to how they affect one another, though, there are NO revelations. The ever-new data merely adjusts less and less arbitrary figures, and, by this alone, the model improves. The model learns nothing concrete about relations, but improves as a predictor, based on regularly updated data.
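
A minimal sketch of that kind of improvement-by-updating (illustrative only, using the third principle listed above): start from the fiction that every outcome is equally likely, then let each new observation mechanically adjust the figures.

    # Illustrative only: probabilities start out equal (the fictitious prior)
    # and are adjusted, purely mechanically, as each new observation arrives.
    def probabilities(counts, outcomes):
        """Current estimates with an add-one (uniform) prior over the outcomes."""
        total = sum(counts.get(o, 0) for o in outcomes) + len(outcomes)
        return {o: (counts.get(o, 0) + 1) / total for o in outcomes}

    outcomes = ["A", "B", "C"]
    counts = {}
    print(probabilities(counts, outcomes))   # all equal: one third each

    for observation in ["A", "A", "B", "A"]:           # new data arrives...
        counts[observation] = counts.get(observation, 0) + 1

    print(probabilities(counts, outcomes))   # A now ~0.57, B ~0.29, C ~0.14
    # The figures improve as predictors, but nothing is learned about WHY
    # 'A' keeps occurring - exactly the limitation described above.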

Nevertheless, there could be no guarantees. It is a pragmatic method of improvement and NOT a scientific one.

Also, experience has shown that the gathering of new data can be altogether too narrow, and the seriousness with which it is collected much too slight, for the methods always to be depended upon. Behind the robot diagnostic program, a very experienced “doctor” would certainly come in handy!

There is also the problem of “current ideas” guiding the actions of the data collectors, and hence “tending” to confirm those current ideas. You cannot discover a new cause if you are not measuring for it, can you? The method is NOT a genuinely holistic one!

And the most important omission has to be that Time and Trajectory are not part of the schemas. Miller’s famous Experiment was indeed holistic, and produced amino acids from a modelled holistic system, but it too lacked Time and Trajectory information. This author’s (Jim Schofield’s) redesign of Miller’s Experiment has the same core set-up as the original, but surrounded by a time-triggered set of diagnostic sub-experiments, regularly sampling what was present at crucial positions throughout the set-up and throughout the whole time that it was running. The results would then have to be laid out on a series of related timelines, showing WHAT was present and WHEN. The relationships over time and place would then be available, and sequences, and even cycles, of processes could be revealed and interpreted.
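
As a purely hypothetical sketch of how such time-and-place results might be laid out (the entries below are invented for illustration and are not results from any experiment), one timeline per sampling position is enough to ask where and when something first appears.

    # Hypothetical data layout only: one timeline per sampling position,
    # each entry recording when a sample was taken and what was detected.
    from collections import defaultdict

    timelines = defaultdict(list)   # position -> list of (time_in_hours, {substances})

    def record_sample(position, time_in_hours, substances):
        """Append one diagnostic sample to that position's timeline."""
        timelines[position].append((time_in_hours, set(substances)))

    def first_appearance(substance):
        """Earliest (time, position) at which a substance shows up anywhere."""
        hits = [(t, pos) for pos, entries in timelines.items()
                for t, found in entries if substance in found]
        return min(hits) if hits else None

    # Invented entries, purely to show the layout:
    record_sample("spark_chamber", 2.0, {"hydrogen cyanide"})
    record_sample("condensate_trap", 6.0, {"glycine"})
    record_sample("condensate_trap", 12.0, {"glycine", "alanine"})

    print(first_appearance("glycine"))   # (6.0, 'condensate_trap')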

The half-cock nature of this latest version of the model, based on Neural Networks but involving Bayesian principles, means that, though it will produce ever better simulation-type computer programs, it is still immovably grounded on pluralist principles, and so will be limited in its applications, and, most important of all, will REDUCE the amount of real analysis and explanation to the Lowest Common Denominator of “the computer says that...”