Comments on the Lighthill Report and the Sutherland Reply
2 Comment by Professor H. C. Longuet-Higgins, FRS.
Theoretical Psychology Unit, University of Edinburgh.
To my mind Sir James Lighthill's most valuable contribution to the current debate on artificial intelligence has been to raise searching questions about the proper justification of the subject. We should, he suggests, ask about any piece of work whether its primary objectives are technological or scientific. If technological, such as the automatic exploration of the planets or the mechanical translation of Chinese into English, are such aims realistic in relation to our present knowledge and justifiable in economic terms? If scientific, then what science or sciences are likely to be enriched?
The need to ask such questions becomes only too apparent when one studies certain recent pronouncements on the subject (or subjects?) of artificial intelligence/machine intelligence. In the Computing Science Review recently published by the SRC the aims of machine intelligence are seen as bluntly technological, though the pious hope of achieving them by the formulation of general principles makes a hasty genuflection to scientific respectability. There is no further mention of any such principles either in the main body of the review or in the appended report of the Long Range Research Panel; all we find is an unlikely assortment of subjects grouped together under the heading "machine intelligence" for no better reason than that "none of [them] seem to the Panel to demand the study of human thought or perception".
The subjects in question are: computational logic, real scene analysis, picture processing, the use of the robot as an analytical tool, and the acquisition of organised information in computers and interpretation of descriptive material. It is highly dubious whether either computational logic or real scene analysis is likely to get anywhere without due attention to our processes of thought and perception; but in any case, if such a negative criterion is adopted for what counts as machine intelligence, it is difficult to see why that subject should exclude analytical geometry or analytical chemistry, which have at least as good a claim to be regarded as analytical tools.
The poverty of such arguments for regarding machine intelligence as a priority area in computing science must have become plain to Sir James as soon as he undertook his penetrating survey of the field. Insofar as machine intelligence projects are basically technological, should they not be judged by the same criteria as one would apply to any piece of development work in advanced automation? Of any such project one should ask: first, what exactly is it intended to achieve, secondly, what material resources would it demand, and thirdly, what are its chances of success? Lighthill's shrewd and comprehensive critique of the technological achievements and ambitions of artificial intelligence needs no recapitulation, nor does his scepticism about the defensibility of robotics as a technological enterprise. It is only when one looks at the scientific case for artificial intelligence studies that differences of opinion seem to arise.
Sir James places in his category C all the artificial intelligence work which he regards as scientifically promising, and refers to this category as "Computer based studies of the central nervous system". In so doing he aligns himself with those of us who hold that the main justification for artificial intelligence is the light it can throw upon human intellectual activity. But his chosen heading, and some of his later remarks, indicate that he attaches more significance to work on the hardware of the brain than to work on its software.
This is the only point on which I want to take issue with him. He is, of course, perfectly right in saying that anyone who is developing network models of the brain had better work within the constraints imposed by our knowledge of its anatomy and physiology; it would be foolish for an engineer to speculate about the circuitry of a computer when he could perfectly well open it up and look inside.
But the hardware of computers is very far from being the only matter relevant to their functioning. In order to understand how a computing system works one must enquire into the logic of the system software and the semantics of the programming languages in which the system can be addressed. The corresponding questions about human beings are those asked by the science of psychology - though admittedly, psychological theories seldom attain a degree of sophistication worthy of their subject matter. An outstanding exception to this stricture is the science of linguistics, and perhaps it is no coincidence that the most impressive achievement of artificial intelligence to date is a working model of the comprehension of natural language.
I would go further and hazard the prediction that for some time to come the most valuable work in artificial intelligence will be that which attempts to express, in the form of computer programs, abstract theories of our various cognitive faculties, rather than mathematical models of the brain itself - this in spite of some excellent recent work on the possible role of the neocortex as a classifying device. This view is based not only on the obvious vitality of current artificial intelligence work on language and vision, but also on an evident dissatisfaction among psychologists with the naive stimulus-response theory of behaviour as it has been applied to human beings.
It is now plain that a central problem in cognitive psychology is to understand how our knowledge is represented and deployed, and the computer program is the only medium which at present offers us the possibility of formulating adequately sophisticated theories of cognition. The elimination of inadequate theories is no longer the main problem; the defects of a programmed theory become immediately apparent as soon as it is run on a computer.
In short whatever the technological prospects of artificial intelligence, its principal scientific value, in my view, is that it sets new standards of precision and detail in the formulation of models of cognitive processes, these models being open to direct and immediate test.
The question "What science or sciences are likely to be enriched by artificial intelligence studies?" can now receive a provisional answer, namely "All those sciences which are directly relevant to human thought and perception". These cognitive sciences may be roughly grouped under four main headings:
- Mathematical - including formal logic, the theory of programs and programming languages, the mathematical theory of classification and of complex data structures.
- Linguistic - including semantics, syntax, phonology and phonetics.
- Psychological - including the psychology of vision, hearing and touch, and
- Physiological - including sensory physiology and the detailed study of the various organs of the brain.
Perhaps "cognitive science", in the singular, would be preferable to the plural form, in view of the ultimate impossibility of viewing any of these subjects in isolation. Indeed artificial intelligence studies are beginning to offer interesting suggestions as to how our various modes of experience might be logically related.
Finally, perhaps one should say a word about the main point of disagreement between Lighthill and Sutherland. Professor Sutherland's redefinition - and reinstatement - of Lighthill's category B as "basic artificial intelligence" has my sympathy, because although I hold no particular brief for bridging activities as such, I do think that there is a place in artificial intelligence for studies which address the general problems that have been found to recur in many different areas of cognitive science. The mathematician's ability to discover a theorem, the formulation of a strategy in master chess, the interpretation of a visual field as a landscape with three cows and a cottage, the feat of hearing what someone says at a cocktail party and the triumph of reading one's aunt's handwriting all seem to involve the same general skill, namely the ability to integrate in a flash a wide range of knowledge and experience. Perhaps Advanced Automation will indeed go its own sweet way, regardless of Cognitive Science; but if it does so, I fear that the resulting spin-off is more than likely to inflict multiple injuries on human society.