Computers: A Survey

J Howlett

Orbit Aug/Oct 1963

One powerful computer, Orion, has just been installed in the Rutherford Laboratory; next year the Atlas Laboratory will take delivery of one of the most powerful computing systems in the world; C.E.R.N. is putting a great deal of thought and effort into deciding on its next computer, which it recognises will be a major piece of equipment in the Laboratory. Computers have become standard and essential equipment in scientific laboratories and are penetrating into many departments of life. To many people, including myself, they represent one of the greatest achievements of the age; this is perhaps only another way of saying that mathematics is one of the greatest achievements of the human mind, for the computer is a device for putting mathematics, in its widest sense, to work on a grand scale. However, the technical feat is remarkable. Speaking in a very general way we can say that the first machines, produced in the very early 1950's, increased one's power to do arithmetic by a factor of about 1,000 over calculation by hand with a desk machine; the fastest of those now being built, like the Atlas, give another factor of 1,000, so we have increased our powers in this field by something like a million-fold in about 10 years. In these two articles I want to indicate what this means, how it has been achieved, what it implies for the use of these machines, and what is likely to happen in the next few years. To put the subject in its correct financial perspective, let me say here that the National Institute's Orion, which ranks as a medium-sized machine, has cost £350,000; the Atlas will cost £2,800,000.

We tend to think of computers as elaborate calculating machines, implying an exclusive concern with arithmetic. This is much too narrow a view. They have been called electronic brains, but this smacks too much of sensationalism; the name information processing machine which is being used a good deal nowadays has a more respectable sound, and gives a truer picture of what these things do. They operate with information, which can be numbers, symbols or statements; numbers are used in arithmetical calculations, symbols in non-numerical processes such as algebra or symbolic logic, statements can include definitions, relations, rules for carrying out some process. The set of statements which tells how to perform a process is called the programme; the numbers or symbols needed to start this off are the data.

It is convenient to think of the machine as a system in four parts: input mechanism, store, processor and output:

[Diagram: INPUT, STORE, PROCESSOR, OUTPUT]

Information for input is punched in code as patterns of holes on cards or paper tape and read by a photo-electric or electro-mechanical device which forms the input mechanism; the output mechanism can be a printer or a card or tape punch or some form of visual display. The input-output devices are certainly important and have themselves been greatly developed from the earlier forms, but the real power of the machine lies in the store and the processor. The store, as the name implies, holds the information put into the machine and the numbers and other information generated by the process it is following; the processor takes from the store one by one the instructions which make up the programme, and obeys each in turn. The power of the machine is determined by the size of the store, the speed with which information can be taken from it and recorded in it, the speed with which the processor can act and the size and richness of the repertory of operations which it can perform.
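A modern reader might sketch this fetch-and-obey cycle in a few lines of Python; the three-instruction repertoire below is invented purely for illustration and corresponds to no real machine's order code:

```python
# A toy illustration of the four-part model: input data fills the store,
# the processor fetches and obeys instructions one by one, and results
# are sent to the output mechanism.

def run(programme, data):
    """Obey a list of (operation, operand) instructions held in the store."""
    store = {"acc": 0, "data": list(data), "output": []}
    for op, operand in programme:        # fetch each instruction in turn
        if op == "load":                 # take the next number from the input data
            store["acc"] = store["data"].pop(0)
        elif op == "add":                # add a constant to the accumulator
            store["acc"] += operand
        elif op == "print":              # send the accumulator to output
            store["output"].append(store["acc"])
    return store["output"]

assert run([("load", None), ("add", 5), ("print", None)], [37]) == [42]
```

The essential point the sketch captures is that the programme itself sits in the store alongside the data, and the processor simply works through it.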

The fast stores of modern machines are made up of magnetic cores, tiny rings of ceramic material (ferrite) which can be magnetised in one of two directions and switched from one state to the other by a pulse sent through a wire threading the ring. This allows a core to be used to store what is called a binary digit, that is, a number which is either 0 or 1; it is a fundamental law that any piece of information can be represented by a set of binary digits and therefore an assembly of magnetic cores can be used to store information of any kind. In a computer the cores are arranged in groups, each called a word and representing a unit of information in the machine; the Atlas and Orion word is 48 binary digits and represents a number, an instruction or a set of 8 characters any of which can be letters, numbers or other symbols. Stores have got bigger and faster as time has gone by; in the earlier machines, in fact before the invention of the core store, a thousand words was considered large, and the access time was of the order of a thousandth of a second (a millisecond); the Rutherford Laboratory Orion has a core store of 16,384 (= 2¹⁴) words, and can be consulted every 12 millionths of a second (12 microseconds). Our Atlas will have 49,152 (= 3 × 2¹⁴) words with a corresponding time of 2 microseconds, and IBM have built several Stretch machines with stores of twice the size and about the same speed. A particularly important feature of these core stores is that they allow what is called random access, that is any word in the store can be found within the stated access time, so all numbers are equally available. But they are expensive, £10 per word being quite a usual price, and have to be backed up by cheaper and slower forms.
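How a 48-binary-digit word carries 8 characters of 6 bits each can be shown directly; the 6-bit character code used in this sketch is made up for the example, and is not the code of any real machine:

```python
# Packing 8 characters into one 48-bit word, 6 bits per character.
# The code table here is invented for illustration.
CODE = {c: i for i, c in enumerate(" ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")}
INVERSE = {i: c for c, i in CODE.items()}

def pack(text):
    """Pack exactly 8 characters into a single 48-bit integer."""
    assert len(text) == 8
    word = 0
    for ch in text:
        word = (word << 6) | CODE[ch]    # shift in 6 bits per character
    return word

def unpack(word):
    """Recover the 8 characters from a 48-bit word."""
    chars = []
    for _ in range(8):
        chars.append(INVERSE[word & 0o77])   # low 6 bits
        word >>= 6
    return "".join(reversed(chars))

w = pack("ATLAS 63")
assert w < 2**48                  # fits in one 48-bit word
assert unpack(w) == "ATLAS 63"
```

The same 48 bits could equally be read as a number or an instruction; only the programme decides how a word is to be interpreted.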
Magnetic tape is now used very widely, a single deck storing from 1,000,000 to 3,000,000 words, groups of which can be transferred to or from the fast store at a rate of 10,000 to 20,000 words per second; a large installation can have quite a number of tape decks; Atlas, for example, will have 18. A more recent development is the magnetic disc file, more like a juke box. These are available with capacities of 1,000,000 to 10,000,000 words and transfer rates of 50,000 to 100,000 words per second. The cost of storage on tape or disc can be as low as a few pence per word.

The circuits of the processor carry out the arithmetical and logical operations which make up the programme, and these too have got faster. Simple operations like adding or subtracting two numbers, which took 1 to 2 milliseconds in 1955, can now be done in about the same number of microseconds; multiplication on Atlas takes 4½ microseconds. This increase is due primarily to the production of fast components, mostly diodes, transistors and cores, but also to improvements in the logical design of the circuits. An example is the design of fast carry circuits; when two multi-digit numbers are added or subtracted most of the time is taken up by the need to look after the carry from one digit position to the next; much ingenuity has gone into speeding this.
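The carry problem can be made concrete by adding two binary numbers bit by bit, as a simple "ripple" adder does, and watching how far a carry can propagate; fast carry circuits exist precisely to cut this chain short. A minimal sketch:

```python
def ripple_add(a, b, width=48):
    """Add two numbers bit by bit, returning the sum and the longest
    run of consecutive positions through which a carry propagated."""
    carry, result, chain, longest = 0, 0, 0, 0
    for i in range(width):
        x, y = (a >> i) & 1, (b >> i) & 1
        s = x ^ y ^ carry                      # sum bit at position i
        carry = (x & y) | (carry & (x ^ y))    # carry into position i+1
        result |= s << i
        chain = chain + 1 if carry else 0
        longest = max(longest, chain)
    return result, longest

# Worst case: adding 1 to 0111...1 sends a carry through every position,
# so a naive adder must wait for 47 successive carry steps.
total, chain = ripple_add(2**47 - 1, 1)
assert total == 2**47
assert chain == 47
```

In the worst case the time of a simple adder grows with the word length; the "fast carry" designs mentioned above compute carries for groups of digits at once so that the delay grows far more slowly.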

The fastest machines do a good deal of overlapping of instructions. Obeying an instruction is quite a complicated business: the instruction itself in the form of a group of binary digits has to be taken from the store, the operand or operands taken also, the operation carried out and the result placed where the instruction says it has to go. In Atlas, when one instruction has been executed the circuits which do this are released and can start on the next, whilst other circuits continue the processing of the first, and so on; there will usually be 4 instructions in different stages of processing. All this, of course, means more circuitry and therefore more cost; as usual one never gets anything for nothing.
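The arithmetic of this overlapping is easy to set out. Assuming an idealised four-stage arrangement (fetch the instruction, fetch the operands, operate, store the result) with one new instruction entering per beat:

```python
# With 4 stages and up to 4 instructions in flight at once, n instructions
# finish in n + 3 beats instead of the 4n beats a strictly serial
# machine would need. (An idealised sketch; real machines also stall.)
STAGES = 4

def beats_serial(n):
    """Each instruction runs to completion before the next starts."""
    return n * STAGES

def beats_overlapped(n):
    """Instruction k enters stage 1 at beat k; the last leaves at n + 3."""
    return n + STAGES - 1

assert beats_serial(1000) == 4000
assert beats_overlapped(1000) == 1003   # nearly a four-fold gain
```

The gain approaches a factor equal to the number of stages, which is why the extra circuitry is judged worth its cost.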

The overlapping of parallel operations is an example of a general feature of the newest and most powerful machines, time-sharing. Another is the overlapping of the input-output operations, which are very slow in relation to the speed of the processor, with computation, so that, for example, the cards for one problem are being read in whilst an earlier one is under way. A third, and the one to which the name is most often applied, is the holding of several jobs in the store at the same time with automatic switching, so that if the one being processed is held up waiting for information (for example, whilst magnetic tape is being searched), the processing of another is started and carried on until either it is held up, when a third could be brought in, and so on, or the first is ready to continue. Atlas and Orion have very elaborate time-sharing features. The aim is to keep all parts of the installation working at full speed all the time, and so to get the maximum amount of work through the system in the day.
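A much-simplified sketch of this switching between jobs can be given; a real supervisor switches away only when a job is held up, whereas the round-robin below switches after every burst of work, but the interleaving it produces shows the idea:

```python
# Each job announces a sequence of bursts: "compute" while it can run,
# "wait" when it is held up (say, whilst a tape is searched).
def job(name, bursts):
    for burst in bursts:
        yield (name, burst)

def timeshare(jobs):
    """Run the jobs in turn, switching to another after each burst,
    until every job has finished. Returns the order of work done."""
    log = []
    queue = list(jobs)
    while queue:
        current = queue.pop(0)
        try:
            log.append(next(current))   # do one burst of this job
            queue.append(current)       # then send it to the back
        except StopIteration:
            pass                        # job finished; drop it
    return log

log = timeshare([job("A", ["compute", "wait", "compute"]),
                 job("B", ["compute", "compute"])])
assert [name for name, _ in log] == ["A", "B", "A", "B", "A"]
```

While job A is in its "wait" burst the machine is doing useful work on B, which is the whole object: no part of the installation stands idle for want of something to do.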

To get some feeling for the meaning of the speeds I have been quoting, it is instructive to look at the problem of solving large sets of linear algebraic equations - this is logically quite simple and occurs very often in actual calculations. Using a desk machine and a well organised method, a good human computer can solve a set of 5 equations in 5 unknowns, keeping 6 decimals throughout and including continuous checks, in about half an hour. The amount of work goes up as the cube of the order, so a set of 100 equations in 100 unknowns would take 20³ = 8,000 times as long, or 4,000 hours; if anyone could bring himself to do this, which is very doubtful, it would take about 2 years. The Ferranti Mercury, which was installed in A.E.R.E. in 1958, does it in about 20 minutes, Atlas in about 15 seconds.
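The scaling argument, and an elimination of the kind the human computer would follow, can be set out in a few lines; this is a minimal sketch, without the pivoting or continuous checks a careful solver needs:

```python
# The work grows as the cube of the order: going from 5 unknowns to 100
# multiplies the half-hour hand calculation by (100/5)**3 = 8,000.
factor = (100 / 5) ** 3
assert factor == 8000
assert 0.5 * factor == 4000   # hours: roughly 2 working years by hand

def solve(a, b):
    """Gaussian elimination for a small well-behaved system ax = b."""
    n = len(b)
    a = [row[:] for row in a]; b = b[:]      # work on copies
    for k in range(n):
        for i in range(k + 1, n):            # clear x_k from rows below
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in reversed(range(n)):             # back-substitution
        s = b[i] - sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / a[i][i]
    return x

# 2x + y = 5 and x + 3y = 10 give x = 1, y = 3.
x = solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])
assert abs(x[0] - 1.0) < 1e-9 and abs(x[1] - 3.0) < 1e-9
```

The triple loop in the elimination is exactly where the cubic growth comes from: each of the n pivots sweeps over a block of roughly n × n entries.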

In the August issue I gave the important facts about computers, how much information they can store and how fast they operate, for example. I want now to consider them more from the point of view of the user, discussing how we are to get them to do what we want and then to say a little about the future.

The most powerful machine loses much of its value if we cannot communicate with it easily. Communication here means telling it what to do - writing a program and feeding it in - and finding what it has done. There is a very live interest now in the communication problem, which is recognised as one of first importance. The early computers acquired something of an air of mystery with a high priesthood to minister to them, but now we want to make them work for all and sundry and to make it easy for everyone to write his own programs and to get them working quickly. Programs are stored inside the computer in the machine's own language which is extremely precise, highly formalised and entirely different from, say, the ordinary language of mathematics - it is, after all, designed to suit electronics rather than people. Writing in this language is a most tedious business, and certainly the necessity of doing so put off most people in the early days. In the middle 1950's it became clear that the machine itself could be used to translate from some much more natural language into machine code, and so take a large part of the burden off the programmer's shoulders: the great success of, for example, the Autocode language for the Mercury computer illustrates the importance of the idea. The program which translates from such a language into machine code is called a compiler, the language itself an algebraic or problem-oriented or machine-independent language.
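The idea of a compiler can be shown in miniature. The sketch below translates a fully parenthesised arithmetic expression into orders for an invented two-instruction stack machine; both the machine and its code are made up for the example and resemble no real machine code:

```python
import operator
import re

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def compile_expr(tokens):
    """Translate '( left op right )' or a bare number into machine orders."""
    tok = tokens.pop(0)
    if tok == "(":
        code = compile_expr(tokens)        # code for the left operand
        op = tokens.pop(0)
        code += compile_expr(tokens)       # code for the right operand
        assert tokens.pop(0) == ")"
        return code + [("APPLY", op)]      # then apply the operation
    return [("PUSH", float(tok))]

def execute(code):
    """The invented machine: PUSH stacks a number, APPLY combines two."""
    stack = []
    for order, arg in code:
        if order == "PUSH":
            stack.append(arg)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(OPS[arg](a, b))
    return stack[0]

tokens = re.findall(r"[()+\-*]|\d+", "(2*(3+4))")
assert execute(compile_expr(tokens)) == 14.0
```

The point is that the translation is entirely mechanical, so the machine itself can do it: the programmer writes in the natural notation on the last lines, and never sees the PUSH and APPLY orders at all.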

Many such languages are being used and there is certainly a good deal of confusion in the situation at present. The ideal would be to have one or a very small number of languages accepted as standard with compilers for all machines so that programs could be freely interchanged between different centres. There are discernible trends towards this, though with much diversion on the way. Two strong candidates for at least a basis for a universal language are Fortran, devised originally for the IBM 704 and used very widely indeed with IBM machines and many others, and Algol, produced by a European/USA committee which set itself up specifically to produce such a language. Both Orion and Atlas will have compilers for these, and also for Mercury Autocode. This question of languages is of the greatest importance; apart from considerations of interchange of programs, a good language helps the programmer to tell the machine how to carry out complicated procedures, just as a good spoken language helps in the communication of complex thoughts and a good notation is essential to progress in mathematics. My personal view is that, although I am all for some tidying up, I would not like to see any rigid standardisation enforced just now; the subject is too new and we are still a good way from knowing what we really want.

The possibility of timesharing is likely to have important effects on another part of the communications problem, that of contact between the machine and the user. Originally the programmer got his program working by sitting at the console, pressing switches which caused the machine to go through small parts of the program - possibly one instruction at a time - and looking at display lights to see what had happened. This could be very wasteful of machine time - the whole installation stood idle whilst the user tried to decide what to do next, and, as machines have grown more expensive and the pressure on their time greater, it has been displaced by a more highly organised system in which professional operators run the machine, and the result of a test run is a printed statement.

This generally works well enough, but there are many people, whose views command respect, saying that there are plenty of circumstances in which direct control of the machine by the user is the best way of going about things, not only for program development but also for exploratory work with a working program needing trial runs in which parameters have to be varied so as to make the solution have some desired properties. On a timeshared machine this need cause only a slight loss in efficiency; so long as the problem is not so big as to monopolise most of the computer, another problem can be taken up whilst the programmer is scratching his head. We shall try out this kind of thing with Atlas, which from the start will have two independent consoles. M.I.T. is just starting an elaborate experiment with several remote consoles attached to a central machine, and has plans to extend this very widely indeed. Incidental to this on-line use is the development of visual displays: the user, especially when experimenting, would often prefer to see his results as graphs rather than as tables of figures, so that he could do his exploratory runs by altering the parameter values until a curve on a screen took some particular shape, or passed through some point or set of points. There is no great technical problem in achieving this, the real difficulty being to produce cheap and reliable equipment.

Finally, a few words about future possibilities. It seems unlikely that machines built with the kind of component used in Atlas and Stretch can be made to go much faster, say more than ten times as fast. The time for a signal to travel from one part of the machine to another (about a thousand-millionth of a second per foot) is already comparable with some of the times needed for the simplest operations, and will soon become the controlling factor. The next generation of machines will probably use micro-miniaturisation and thin-film techniques; that is, the components will be of very small size and be made by depositing from vapour successive layers of metals or semi-conductors only a few molecules thick. Complete circuits can be formed in this way and complex assemblies built up which can be made orders of magnitude faster than the fastest conventional circuits. The store and processor of such a computer can be made much simpler than, say, Atlas, for with plenty of speed to call on one can afford to forego a lot of circuit sophistication; but the even greater disparity between input/output and processor speeds will demand even more elaborate arrangements for timesharing these operations with computation. Also, it seems to me quite certain that some of this speed will be used to allow much more powerful programming languages so that it becomes still easier to express complex procedures, and the machine itself does still more of the organisation.

To sum up, my general view is that on the one hand, computers will get more automatic and self-controlled in the sense that they will themselves look after all the pedestrian tasks such as scheduling and organisation of input and output; and on the other that they will become more amenable to control by the user through the provision of visual displays and convenient and efficient ways for affecting the course taken by the program. We shall be able to take their powers more and more for granted, but at the same time they will offer more and more exciting possibilities for exploring new fields of application.

© Chilton Computing and UKRI Science and Technology Facilities Council webmaster@chilton-computing.org.uk
Our thanks to UKRI Science and Technology Facilities Council for hosting this site