Part III: Comments on the Lighthill Report and the Sutherland Reply
1 Comment by Dr. R. M. Needham
Computer Laboratory, University of Cambridge.
Like all classifications, Lighthill's division of AI into three main parts is contentious in detail, as doubtless was Caesar's similar dissection of Gaul. It would not be useful to discuss whether particular individual activities are best placed in A, B or C - at any rate if one accepts, as I do, the spirit of the classification. Since I basically agree with Lighthill's conclusions there is perhaps less to say than in Sutherland's commentary.
The aim of the category A work is technological. Any method which achieves the desired result will do, provided it is not too expensive. This is by no means to say that it need not be founded on detailed knowledge of the subject matter, nor that it should eschew devices such as the use of heuristic methods which are perhaps associated with AI rather than with automation. On the contrary, the use of reasonably reliable heuristics is well suited to the "no holds barred" approach. Heuristics, in general, are devices to avoid excessive searching by acting on guesses as to where to look - guesses which are not provably correct but which usually lead to something sensible. For example: "To find a letter from the SRC in the Departmental office, look in the file marked SRC", or "To proceed from Cambridge to Edinburgh, go first to London". Neither of these is always reliable, but both are based on a sound knowledge of the relevant facts.
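The first of the two maxims above can be sketched as a simple fallback pattern: try the guessed location first, and only search everything if the guess fails. (The function and the toy filing cabinet below are hypothetical illustrations, not anything from the report.)

```python
def find_letter(topic, office_files):
    """Heuristic lookup: guess the likely file first, fall back to searching.

    `office_files` maps file labels to lists of letters - a hypothetical
    stand-in for the Departmental office's filing cabinet.
    """
    # Heuristic guess: a letter from the SRC is usually in the file marked SRC.
    for letter in office_files.get(topic, []):
        if topic in letter:
            return letter
    # The guess is not provably correct, so fall back to exhaustive search.
    for label, letters in office_files.items():
        for letter in letters:
            if topic in letter:
                return letter
    return None

files = {
    "SRC": ["SRC grant renewal"],
    "Misc": ["SRC travel claim", "minutes of meeting"],
}
print(find_letter("SRC", files))  # found in the guessed file, no full search
```

When the guess holds, most of the search is avoided; when it fails (a misfiled letter), correctness is preserved at the cost of the exhaustive scan - which is exactly the trade Needham describes.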
It could perhaps be said that in any knowledge-based system some of the access rules will be formalisable and embodied in regular programs, and some will not and will thus have to be treated as heuristics. At this point, a question arises which Lighthill does not treat in any depth. Are there any general principles - that is, principles which apply to numerous applications - which guide or might guide the application of heuristics? Workers whom Lighthill might describe (? stigmatise) as being in category B say that such principles are, inter alia, what they are looking for. Some of the justification for this kind of work would seem stronger to those who believe that such general principles are there to be found. Lighthill suspects that they are not, so it should be pointed out that the Hart-Nilsson-Raphael theorem is one such principle that has been found. I do not personally think there is much to dig for here, but one should not deny that there is anything at all.
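The Hart-Nilsson-Raphael result concerns the A* search algorithm: if the heuristic estimate of remaining cost never overestimates the true cost (is "admissible"), then A* is guaranteed to find a cheapest path. A minimal sketch, with a made-up toy graph:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search, ordering expansion by f(n) = g(n) + h(n).

    The Hart-Nilsson-Raphael theorem: if h never overestimates the true
    remaining cost, the first time the goal is expanded its recorded cost
    g is the cost of a cheapest path.
    """
    frontier = [(h[start], 0, start)]   # (f, g, node)
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue  # stale entry superseded by a cheaper route
        for neighbour, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = g2
                heapq.heappush(frontier, (g2 + h[neighbour], g2, neighbour))
    return None

# Toy graph; h is an admissible (never overestimating) distance estimate.
graph = {
    "start": [("a", 1), ("b", 4)],
    "a": [("goal", 5)],
    "b": [("goal", 1)],
    "goal": [],
}
h = {"start": 4, "a": 5, "b": 1, "goal": 0}
print(a_star(graph, h, "start", "goal"))  # 5, via start -> b -> goal
```

The point of the theorem, in Needham's terms, is that it is a *general* principle: it licenses the use of a whole class of heuristics across any application whose search problem fits this shape.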
Lighthill's category C is quite outside my own technical knowledge. It is self-evident that enquiries, computer-based or otherwise, into how people work constitute an important field of scientific endeavour. The present question is about the intelligent behaviour of people, or the way people function when behaving intelligently, and whether any work can be important to this which does not explicitly concern itself with its subject matter. Which brings us to category B.
Category B work is viewed unenthusiastically by Lighthill, and defended with vigour by others. One line of defence is to call attention to developments in programming technology which it has stimulated, and to other insights to which it has led. In any venture into the history of ideas one is on dangerous ground, but in considering this kind of argument the risk has to be taken. I do not believe that the case can be made by considering programming technology. Structured programming has no dependence on AI, and the handling of complex low-level operations in terms of smaller numbers of higher-level notions has been taken to its highest development by people whose view of AI is no more favourable than Lighthill's. Backtracking is a programming technique of much antiquity. The embodiment of knowledge in procedures is a year or two younger than the act of programming; its description for the plain man is that, when looking something up in a table, you sometimes find the address of a program to compute the value you want rather than directly being given the value itself. List-processing is a technique for burying store-management problems, excellent for rich people with complicated programs to write.
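The "plain man" description of knowledge embodied in procedures can be sketched directly: a table whose entries are sometimes plain values and sometimes procedures to be run. (The names and entries below are illustrative inventions.)

```python
# A table whose entries are either values or procedures: looking up a key
# may hand back a program to compute the value rather than the value itself.
table = {
    "pi": 3.14159,                            # a plain value
    "greeting": lambda: "hello, " + "world",  # a procedure to compute one
}

def look_up(table, key):
    entry = table[key]
    # If the entry is callable, it is a stored procedure: run it to get
    # the value; otherwise the entry is the value directly.
    return entry() if callable(entry) else entry

print(look_up(table, "pi"))        # 3.14159
print(look_up(table, "greeting"))  # hello, world
```

The caller cannot tell which kind of entry it hit - which is precisely why the device is useful, and also why, as Needham notes, it is essentially as old as programming itself.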
It is beyond contention that AI research has led to a great deal of excellent work in packaging these techniques attractively and embodying them in programming languages (some of which, for example, LISP, are of much interest as languages); it is a standard progression for frequently used facilities to start as library routines and end as language features.
However, the ideas did not all originate in AI, any more than did the content (though perhaps not the phraseology) of the maxim "heterarchy not hierarchy". The general inapplicability of strict hierarchical models, despite their seductive clarity, has not merely been known but explicitly recognised as an important point by many people for a long time. To be explicit about a few aspects in my own experience: library automation - middle 1950's; taxonomy - early 1960's; computer filing systems - middle 1960's. To the small extent that so vague and general a maxim can be said to be a discovery, it is one to which AI has contributed little.
Professor Sutherland remarks, in the course of a defence of category B work along these lines, that one recent insight derived from basic research on AI is that "in interpreting the meaning of any complex input, it is impossible to use a rigid step-by-step procedure". Leaving aside the perhaps captious comment that this means that computers cannot do the job at all, it is emphatically not a recent insight that you cannot finish with the syntax before starting the semantics. The present writer first encountered it, in a computational linguistic context, sixteen years ago, when it was not new.
This comment on a point to do with language processing leads naturally to others. Lighthill cites the understanding of natural text as one of the prime examples of the combinatorial explosion, and so it is. He, and also Sutherland, do in my opinion underestimate the contributions which have come from activities which are not (or were not) called AI. Most people writing on such subjects tend to dismiss Machine Translation not only as a technological failure, which it was, but as an intellectually totally negligible activity, which it was not. The emphasis on exact algorithms rather than vague descriptions, the above-mentioned importance of mixing syntax with semantics, and the use of heuristic devices to shorten searches in great collections of semantic data were all studied and recognised as important. Linguistics proper, without technological objectives, has made vast progress in schematisation, exactness of description, and theoretical understanding. General interest in linguistics, and in particular in rule-based (algorithmic) approaches to it, has contributed much to the intellectual climate in which AI work is done, though not always in a directly recognised manner. It is a great mistake to extract those products of other enquiries which have been found helpful by AI workers and suppose that they are AI inventions.
In sum, I do not believe that one can justify category B work by its side effects. What of its main thrust? The question whether there is any middle ground between studying intelligent behaviour as an attribute of people or animals, on the one hand, and making machines do complicated and useful things which used to need people, on the other, can easily lead to a sterile philosophical debate - unless we say yes, on the grounds that we can seek to make machines do complicated and useless things.
Artificial Intelligence is a rather pernicious label to attach to a very mixed bunch of activities, and one could argue that the sooner we forget it the better. It would be disastrous to conclude that AI was a Bad Thing and should not be supported, and it would be disastrous to conclude that it was a Good Thing and should have privileged access to the money tap. The former would tend to penalise well-based efforts to make computers do complicated things which had not been programmed before, and the latter would be a great waste of resources. AI does not refer to anything definite enough to have a coherent policy about in this way.
A final caution: like the majority of contributors to this paper symposium on AI, I am not an expert in any of the activities which come under its rather ill-defined umbrella. Amongst the many features, good and bad, which AI shares with Machine Translation is the fact that non-practitioners have strong views about it.