Computing at Chilton

Jack Meadows

2009

This is Chapter 5 of the book Big Science: Fifty Years of the Rutherford Appleton Laboratory (1957-2007) written by Professor AJ Meadows, Loughborough University. The book was originally commissioned to celebrate RAL's fiftieth anniversary and was in the first proof stage when CLRC became STFC and the project was dropped. The author decided to self-publish his work even though it was no longer possible to include RAL's photographs or to expand on the scientific backgrounds.

One characteristic of big science is its insatiable demand for more computing power. In nuclear physics, for example, both solving theoretical equations and the handling of experimental data require a great many calculations. In the 1940s, these were typically carried out manually on mechanical calculators. The term 'computers' in those days usually referred to the people who operated such machines; Harwell soon established a team of computers to service the data-handling requirements of members of staff. Manual computation was slow. Card-sorting machines helped to speed things up in the early years, and were still in use when the Rutherford Laboratory was founded. But the final answer came from elsewhere. During the Second World War, the concept of an electronic computer had been developed at Bletchley, mainly for the purpose of breaking enemy codes. This work remained secret after the war, but the concept of an electronic computer soon became general knowledge in the UK. Such computers offered the potential for carrying out calculations much more rapidly than was feasible using existing mechanical machines. Electronic computers consist in essence of a series of switches linked together. Each computation involves a different sequence of on/off switches determined by the instructions given to the machine [the program]. Early computers employed electronic valves as switches. The word valve suggests their on/off function, while the alternative name popular in the United States - vacuum tube - reflects their physical appearance. They typically consisted of a glass tube, a few centimetres long, emptied of air. Valves not only took up a good deal of space: they were also liable to develop a variety of faults which immediately brought the computer to a halt. These first generation computers were replaced within a few years following the invention of a solid-state device - the transistor - which could be substituted for a valve. Transistors were in one sense a throwback to the early days of radio, when crystal sets were popular. Both used the properties of semiconductors (materials that allow the passage of an electrical current, but only to a limited extent). The difference was that the crystal set used naturally occurring semiconductors, whereas the transistor was carefully constructed under laboratory conditions. The transistor proved considerably more reliable than the valve, and it took up much less space. Second generation computers based on transistors became the norm in the latter half of the 1950s and the early 1960s - the period when the Rutherford Laboratory was born.

A number of commercial firms in the UK moved into the computer field post-war. One of the early leaders was Ferranti, a well-known electrical engineering firm. Ferranti, partly working in conjunction with Manchester University, began to develop a series of increasingly powerful computers, mostly named after figures in classical mythology. Meanwhile, staff at Harwell were developing their own computers, including a very early transistorised computer, and they also had access to computers at Aldermaston. However, it was apparent that Harwell needed its own large computer to handle all the data being generated; so, in 1956, the decision was taken to order a Ferranti Mercury machine. It was installed two years later - the largest machine on site in the early days of the Rutherford Laboratory. Access to it was made available to Laboratory staff, but it was clear that they would soon need a machine of their own in order to cope with the volume of data that would be generated by the detectors attached to Nimrod. Electronic computers had already become an essential back-up for such detectors. Analysing the tracks left by particles involved mainly routine calculations, but in enormous numbers. Moreover, since nuclear interactions could lead to particles being emitted in any direction, the calculations had to be carried out in three dimensions. Computers were well equipped to handle this kind of work, but it required a powerful machine (a mainframe computer, as it came to be called to distinguish it from smaller computers). The decision was therefore made to order the latest Ferranti machine - the Orion - for the Laboratory, and to bring it into operation at the same time as Nimrod. There were various initial problems with the Orion computer, but it was soon operating at full capacity. Indeed, within a year or two of the start in 1963, it became necessary to buy time on other computers to supplement its contribution.

The Atlas Computer Laboratory

Meanwhile, an important development in the UK computing scene had taken place under the aegis of NIRNS. A Working Party on the Combined Use of Expensive Research Equipment (its acronym - 'CURE' - gave some indication of the Working Party's aims) had considered the question of large computers in the UK. It decided that a good case could be made for providing universities and research establishments with access to the largest computer possible. The appropriate site for a central computer was thought to be next door to the Rutherford Laboratory, since the latter was already organised to receive university personnel and, unlike Harwell, had no security restrictions. It was further decided that the best computer to buy would be the new Ferranti Atlas machine, then being constructed in conjunction with Manchester University. (The computer companies in the UK needed all the help they could get, and it was hoped that an early order from a high-prestige establishment might help with further sales of the Atlas abroad.) In 1961, a separate Atlas Computer Laboratory was therefore set up with Jack Howlett as its director. Howlett, as he liked to recall, had been recruited to Harwell in the immediate post-war period by Klaus Fuchs, the atomic bomb spy. (Fuchs was then in charge of the computing team at Harwell.) He had been involved in the use of electronic computers, and of the earlier mechanical calculating systems, from the beginning. Like the original members of the Rutherford Laboratory, he had also worked in the atomic energy field. (Less predictably, he was one of five people involved in the development of the Atlas Laboratory who had previously been connected with the London, Midland and Scottish Railway.) He was soon deep into plans for the new building to house the computer. His emphasis was particularly on welcoming university users. This extended to hanging reproductions of modern paintings round the rooms - not always to the delight of his own staff. He was joined by a number of Harwell staff in the build-up to the delivery of the Atlas machine in 1964. The Atlas Laboratory had its own management committee under the NIRNS umbrella, and so, in policy terms, was independent of the Rutherford Laboratory. However, the latter, like universities, was entitled to use the new computer for free, whereas the Atomic Energy Research Establishment and other government bodies had to pay for its use.

The Atlas computer was delivered in 19 truck loads in April 1964. (Mainframe computers, even when transistorised, still took up a lot of space.) It was one of the most powerful computers in the world at that time, and used the latest technology for storing information both internally and externally. Atlas was, for example, one of the first computers to have an operating system - now an essential part of all computers. The function of an operating system is to make sure that the hardware (the computer and its peripherals) and the software (the programs instructing the computer) interact in a coordinated way. It operates rather like a policeman directing traffic. Indeed, the relevant software in the Atlas was called the Supervisor. As was customary in those days, the new computer handled material in batches, rather than continuously, and working was soon round the clock, divided into three eight-hour shifts a day. Although second generation computers were much more reliable than first generation, they were far from fault-free. There was a period after the installation of the Atlas when Howlett refused to take it over officially because of its unreliability. Throughout its lifetime, keeping the Atlas going required an on-site team from Ferranti who took over the machine for two hours a day for maintenance purposes. In those days, computers were sensitive to temperature changes, and so required powerful air-conditioning systems. This sometimes had unexpected side-effects. When the water-cooling systems used with the Atlas were eventually opened for cleaning, it was found that they were playing host to a number of orchids. It is an interesting aspect of the way computers have developed that the Atlas machine cost as much in real terms as any of the later, much more powerful computers that Chilton acquired.

The Rutherford Laboratory made full use of the time allocated to it on the Atlas, but soon found that its computing resources were still not adequate. Not only was there a ceiling on the time allocated on the Atlas, but, equally importantly, data from the Nimrod detectors could not be handled directly. For this, the Laboratory needed a dedicated machine, rather than a general-user computer such as Atlas. So it was decided in the mid-1960s to buy a new machine for the sole use of the Rutherford Laboratory. By this time, the computer scene had changed considerably. The end of the 1950s had seen the invention of the integrated circuit or 'microchip' (subsequently abbreviated simply to 'chip'). Whereas transistors were individually made and wired together, chips integrated a large number of circuits and switches for simultaneous construction on a single base. Chips not only decreased the space required, they also increased the reliability. Computers using chips - 'third generation' computers - began to appear in 1963. Developing mainframe computers was proving an increasingly expensive and competitive business, and one that was, to the worry of the British Government, increasingly dominated by American firms. The commercial and political pressures at work led to a rapid amalgamation of the various manufacturing companies in the UK. Ferranti had merged its computing interests with ICT [International Computers and Tabulators] before the final delivery of the Atlas to the Chilton site. In 1968, all the major UK players in the computing field combined to form ICL [International Computers Limited] - the largest non-American computer manufacturer in the world in terms of staff numbers. The American front-runner was, and always had been, IBM [International Business Machines]. The firm's history went back to the end of the nineteenth century, and it moved into the manufacture of electronic computers immediately after the Second World War. In the following decades, it introduced a series of innovations. In 1956, for example, it produced the first magnetic hard disk, now the universal method of storing information within a computer.

IBM machines were the market leaders in the computer world, so it was to them that the Rutherford Laboratory looked for its new computer. In 1967, the Laboratory took delivery of the firm's current model - an IBM 360/75. The 360 series of computers were more flexible than previous computers. They possessed more memory and could handle data input from a number of sources simultaneously. This allowed the Rutherford scientists to input data from the various Nimrod detectors online. An additional advantage was that the IBM series of machines were compatible - a program run on one machine could also be run on the others. The new machine took a little time to bed down, but its advent immediately altered the balance of computing power on the Chilton site, since the new machine had four times the power of the Atlas. (Computer power is essentially a measure of how long a computer takes to complete a specific task.) The balance was changed even further when the Rutherford Laboratory was given permission in 1970 to purchase, in addition, an updated model, the IBM 360/195, one of the most powerful machines in the world at the time. In terms of central computer provision at Chilton, the boot was now on the other foot, for the Atlas Laboratory was offered a fifth of the time available on Rutherford's new computer.

Staff of the Atlas Laboratory were, of course, aware that their computer was becoming obsolescent, and lobbied throughout the latter part of the 1960s for a new machine. However, the initiative ran into problems. The Ministry of Technology, which would be responsible for funding any new machine, was becoming increasingly worried by the emphasis on American mainframes. Back in the early days of Atlas, it had been alleged that British computers were less reliable and efficient than American computers. Indeed, there had been a BBC Horizon programme that explicitly said this (despite a Government warning that it would affect the morale of the British computer industry). It was therefore seen as important that a flagship computing centre like the Atlas Laboratory should be seen to support British manufacturers. In consequence, the Atlas Laboratory entered into discussions with ICL about their need for a large machine. Unfortunately, the period around 1970 marked a recession in the computer industry, and ICL dropped the plans they had had for a new large mainframe computer. Instead, the Atlas Laboratory had to be satisfied with a smaller machine - the ICL 1906A - which had only about twice the power of the Atlas. The new computer had its advantages. As with the IBMs, there was software compatibility between different machines, and the ICL 1900 range of computers proved a considerable success overall. The problem was that it still left the Atlas Laboratory with a computer less powerful than the current IBM machines at the Rutherford Laboratory. This was the position when, in 1973, the Atlas Computer was finally switched off. As part of the closing ceremony, the first program ever run on Atlas was rerun. It was a simple sentence; though this time, in deference to the audience present, an adjective was deleted. The message ran: 'It works - and about *** time too'.

Meanwhile the position of the Atlas Laboratory within the UK computing community was changing. In the mid-sixties, NIRNS was dissolved, and the newly created Science Research Council took over responsibility for both the Atlas Laboratory and the Rutherford Laboratory. Previously, the Atlas Laboratory, though independent in terms of policy making, had been regarded for administrative purposes as part of the Rutherford Laboratory. Now it was to be treated as a separate entity. Its essential brief - to cater for all university researchers - was initially continued. Retrenchment came at the beginning of the 1970s, when the research councils faced a major financial squeeze. This was when Shirley Williams, later Minister for Education and Science, grimly commented: 'For the scientists, the party is over'. As part of its savings, the SRC decided that the Atlas Laboratory should in future support not all university researchers, but only those whose research was funded by the research councils. At the same time, universities were catching up fast in terms of computing power. The Flowers Report of 1966 had led to an upgrading of university computers, and to the establishment of three regional computing centres. So universities' need for access to a central computer had become less acute. Indeed, the computer power available to universities now dwarfed that provided by the new ICL 1906A. As a consequence, the early 1970s saw a debate over the future of the Atlas Laboratory. An additional factor in the discussion was that Howlett was due to retire in 1975, so it would soon be necessary to find a successor.

Reorganising computing at Chilton

The question at issue was what the role of the Atlas Laboratory should be. University access to a very powerful computer that could run really complex programs would certainly remain essential into the foreseeable future. There was clearly also a need for a central body to provide expertise and assistance to universities in a computer environment that was by now becoming increasingly interactive. But did all this require a separate body to run it? Various possibilities were mooted, including moving the Atlas Laboratory to the Daresbury site. In the end, the most obvious solution in administrative terms was adopted: to merge the Atlas and the Rutherford Laboratories and to pool their computers and staff (though some Atlas staff were also transferred to Daresbury, where they successfully made the case for a more powerful computer). The Atlas staff themselves believed, even after the merger, that they should form part of an independent National Computer Centre. There was, indeed, a problem with merging: that it would arouse university suspicions of a takeover by the nuclear physics community. The Science Research Council set up an Atlas Laboratory Review Panel that reported: 'We were all impressed by the need for any proposed arrangement to maintain the confidence of the non high energy users of these facilities and of the need to make it clear to them that they would not take second place to the physicists currently being catered for by the Rutherford Laboratory' [1]. Geoff Manning, who was involved in the negotiations on the Rutherford side, replied to this. He noted that there had been similar suspicions about the Rutherford Laboratory's development of neutron beam activities, yet he was sure that these were being rapidly overcome as users interacted more with the staff and facilities at Chilton. His argument was accepted. It took a little while to work out how the Atlas Laboratory might best fit into the Rutherford Laboratory organisational structure, but the two actually merged only a month after Howlett retired. Manning later remarked that, in some ways, the culture of the Atlas Laboratory differed more from that of the Rutherford Laboratory than the Appleton Laboratory's culture did. One factor in the difference was that, unlike the other two, the Atlas Laboratory did not have the construction of equipment as a major aim. However, computers were central to operations on the Chilton site, and the merger went ahead successfully.

The question of how computing best fitted into the Rutherford Laboratory organisational structure has risen a number of times in the past half-century. In its early days, the Laboratory did not have a separate computing section: computers came under its Applied Physics Division. By the end of the 1960s, computing had grown sufficiently important for a separate Computing and Automation Division to be formed. This C & A Division continued in existence on the absorption of the Atlas Laboratory. The latter remained as a separate Atlas Computing Division with Manning as its head. However, the responsibilities of the two groups were redefined. The C & A Division took charge of the running of the computers, including the Atlas Laboratory's ICL 1906A. Ironically enough, this British computer was joined immediately after the merger by an additional machine from IBM. (By this time, 'fourth generation' computers were the standard. These were based on chips that contained a sufficiently large number of components to form a basic computer in their own right. In essence, fourth generation computers are still standard today: it is simply that more electronic components are now crammed onto each individual chip. Starting with SSI [Small-Scale Integration] chips that contained perhaps a hundred components per chip, further microminiaturisation has led to ULSI [Ultra Large-Scale Integration] chips containing more than a million components.) The Atlas Computing Division, meanwhile, concentrated on interactive computing and computing applications. This split did not last for long: at the end of the 1970s, the two groups were merged into a single Computing Division. Within five years, this, too, was revised. The Division was split again into two: 'Informatics' which dealt with new computing initiatives and 'Central Computing' which handled the computers and the networking at Chilton. The Atlas Centre remained as a physical entity, and is still marked by a plaque on its wall: the nameplate of the British Rail locomotive Atlas which was bought with money collected from the Atlas staff. By the 1980s, the IBM 360/195 computers were becoming obsolescent. So it was decided to buy an ICL Atlas 10 (with another IBM machine as a front-end). This was not quite a return to roots. The Atlas 10 was essentially a relabelling of a Japanese machine which had the valuable property of being IBM-compatible. ICL, after performing well in the 1970s, was finding the 1980s hard going. It was soon to be taken over by the Japanese company concerned, Fujitsu, at which point the UK ceased to be a major player in the mainframe market.

The absorption of the Atlas Laboratory meant that Chilton was committed to providing a computing service for universities. This meant regular upgrading of equipment, just as the on-site service required. A joint working party from the universities and research councils in 1985 decided that the time had come to purchase a new central computer of the greatest power possible, to which both universities and research councils could have access. It was agreed that such a supercomputer - as the most powerful computers were now called - would best be sited at Chilton. In the 1980s, the leading designer of supercomputers was Cray Research in the United States, and one of its machines - the Cray X-MP - was purchased. It is alleged that the 'X' originally meant 'exciting'. More importantly, MP stood for 'Multiprocessor'. The standard computer of the day handled its tasks one at a time. A multiprocessing computer, by contrast, could handle several tasks simultaneously. By the early 1990s, the new machine was working flat out on a wide range of projects. (The environmental sciences took up about a third of the available time, with physics plus astronomy taking another third.) So funding was made available to buy the next computer in the series, the even more powerful Cray Y-MP. Despite their power, these computers were remarkably compact. The early Crays consisted of a vertical tower surrounded by a base containing the power supply and air-conditioning system. This latter was often used as a seat by weary operators: it was claimed to be the world's most expensive love seat. There was much sadness when later Crays did away with it. With these machines, even highly complex events could now be modelled. Two examples in the 1990s, taken from different areas of science, are detailed modelling of the factors causing the hole in the ozone layer and a series of projects to model chemical interaction at surfaces. In 1996, the last in this series of supercomputers at Chilton - the even more powerful Cray J90 - was installed. Though its predecessor had been installed only a few years before, the demand for computing power had already become larger than it could handle. In subsequent years, a different route for satisfying this demand emerged. Instead of having a single very powerful computer, this approach employs a number of less powerful computers linked together. So, for example, in 2002, computers in the USA, France, and at Chilton were linked together in order to analyse data on mesons that had been accumulated in California.

Interactive computing and networks

Although mainframe computers always attracted the most attention, they represent only part of the computer development story at Chilton. Smaller computers, used to handle specialist jobs or to act as front-ends (essentially filtering jobs) for the mainframe computer, started to appear in the 1960s. They were generically labelled 'minicomputers', and were much cheaper, as well as more compact, than the mainframes. By 1970, the Rutherford Laboratory had some twenty of them operating, and universities, too, were beginning to acquire them. These new machines encouraged a different approach to computing. In the early decades of computing, the main problem had been to develop an adequate computer and to make it work. The convenience of users came second. When the computers became easier to use, it was the software side - operating the computer and handling the input and output - that tested the patience of the user. Now, these activities gradually became easier. For example, early computers input and output data in batches. Initially, users had to turn up physically at the computer with their laboriously constructed input, and at a later time return to collect the output. This was acceptable, if annoying, when the machine was on-site. If the computer was far away - as it was for university users of Chilton facilities - it was highly inconvenient. The data could be dispatched by surface transport - this was the era of the punched card - but it took time. A main concern of the Atlas Laboratory was how to provide as efficient a service as possible for users. The solution in the early 1970s to the input/output problem was to introduce remote working. The new IBM 360/195 had been purchased on the understanding that half of its time would be given over to universities. Chilton now introduced remote work stations - that is, minicomputers - sited in the leading universities (as judged in terms of their computing requirements). In a number of instances, Chilton actually purchased the work station for the university. The work stations were connected to the central IBM computer by leased landlines. Batch working continued, with users storing their data at Chilton, but handling it from the work station at their own institution. In addition, contact could be made directly with the central computer. A few groups attached their own equipment via the public telephone network, but the data exchange rate was very slow. The most important non-university link was naturally with CERN, though - a portent for the future - the Appleton Laboratory was also a user. On-site users at the Rutherford Laboratory and the Atlas Laboratory were also, of course, linked to the mainframe via work stations.

The new development eased access, but batch working was still from the user viewpoint a rather inefficient way of working. What was needed was for the user to interact with the computer as a piece of work evolved, so that the way the task developed would depend on feedback from the user at each stage. Such interactive computing became a major focus of interest at Chilton from the mid-1970s onwards. The Science Research Council saw it as particularly important for helping engineers in their attempts to produce models of objects or activities. The original idea was that engineers (and others) could work out the appropriate approach to their problem interactively; then move to batch processing for the heavy calculations. SRC therefore set up a national Interactive Computing Facility [ICF] run from Chilton. Minicomputers, which could be used by several users simultaneously, were dispatched to selected universities. The idea was that software packages produced by one set of users could be distributed and used by others, so reducing the problem of groups duplicating work done elsewhere. The new situation was summarised in a flyer for a Colloquium on Interactive Computing to be held at Chilton in 1978. It was entitled: 'Have you interacted with a computer?'

'As engineers we tend to consider the generations of computers according to their hardware features, eg valve, transistor, LSI, etc.

For the engineer as a computer user these hardware developments have entailed different degrees of control of the computing process.

  1. Problem handed to programmer for development and translation to an acceptable computer form.
  2. User prepares program in high level language, eg FORTRAN, and submits work to professional operator controlled computer system.
  3. User prepares and edits program via a simple terminal and submits batch jobs to computer operating system.
  4. Interactive use frees the user from the constraints of the initial conception of the computing problem and the limitations imposed by the economies of the former implementations of computer technology.' [2]

In terms of hardware, ICF activities reached a peak in 1978-80, though software development continued long after that. Meanwhile, another activity had been spun off which required considerable Chilton input. This was labelled the Single User System approach. The ICF programme was concerned with providing computers that served the needs of groups of researchers. But it was recognised that the logical end of this progression was for individual users to have their own machine. Towards the end of the 1970s, an American firm - Three Rivers Computer Corporation - began work on a single-user machine called PERQ (not an acronym, just an abbreviation of 'perquisite'). Chilton acquired two of the initial PERQs for evaluation in 1981. The first arrived securely packed in a wooden case. Unfortunately, the nails used were so long that they had gone through the packing and into the computer. The problem was resolved by ICL staff who were present, as they were able to make rapid repairs. Their presence was significant. By this time, ICL was experiencing a sharp downturn in business, and the Government was again concerned about the future of computing in the UK. Experts, not least those at Chilton, suggested that ICL should take on the marketing of PERQs, since, in its category, it was a computer well ahead of anything planned at ICL. After some vacillation, the computer firm agreed. However, ICL staff had not been trained on the computer, so they hired Chilton staff - in exchange for two free PERQs - to give demonstrations of the machine world-wide. All this was a high profile activity in media terms, since the new initiative was seen as a possible solution to ICL's difficulties. Various rumours of problems with the new machine circulated. Most of them were unfounded, but they made life difficult at Chilton in view of its close involvement with the new computer. In the event, it made little difference, since even the PERQ failed to save ICL from takeover. The PERQs themselves lived on, but were gradually replaced by the products of other manufacturers. Indeed, the demise of single-user minicomputers was not far away. They did, however, have the distinction of having introduced the sort of window management system that is widespread nowadays. Indeed, a court case between Apple and Microsoft over the use of this type of manager was thrown out when Chilton sent the judge videos which showed a PERQ running such a set-up before either Apple or Microsoft had products in the field. Chilton actually purchased its first batch of IBM Personal Computers in the mid-1980s, not long after they first appeared. From then on networked microcomputers increasingly became the norm.

A fundamental intention for both multi-user and single-user machines was that they should be linked together in a network. By the end of the 1970s, it was clear that the next major step forward in computing was going to be networking. Initially, Chilton was the centre of a star network configuration - the connections went out from the central hub like the spokes of a wheel (or, in terms of the analogy, like the rays of a star). The subsidiary computers at universities were linked straight to Chilton. This had the advantage that, if one computer went down, it did not affect the other computers in the network. On the negative side, it required more cabling than some alternative configurations. Moreover, if the central computer went down, so did the entire network. By the mid-1970s, networks were beginning to proliferate, with computers connected together at a variety of levels - locally, regionally and nationally. Chilton itself was involved with a number, mainly within the UK, though, already by the end of the 1970s, wider-ranging experiments were under way. For example, the European Community was exploring the use of satellite links to provide more rapid interchange of data between some of the leading centres - including Chilton and CERN. Starlink, which started up in 1980 to provide dedicated support for astronomers in the UK, provided a national example of networking restricted to a specific group. It was run by Chilton initially for half-a-dozen sites, but the network expanded down the years until it eventually covered virtually all astronomers. Both particle physics and space research produce large quantities of data, so staff at the Rutherford Appleton Laboratory have acquired considerable expertise in setting up databanks. Chilton has provided central handling and storage for the observations made by such satellites as IUE and IRAS. Starlink made it possible for remote users to gain immediate access to the data from these satellites. Initially, if large quantities of data were required from such databanks, they had to be downloaded onto magnetic tape and physically transported to the requester. The growth of higher capacity networks - broadband networks, as they are now called - has ended the need for this practice. Starlink was aimed at a specific user group. It was preceded by another network run from Chilton - SRCnet - which had a wider brief. SRCnet joined together a range of sites supported by the Science Research Council, and was designed to facilitate the exchange of information between these sites using a variety of computers. Getting all the computer systems to work together and also to access the mainframes entailed a major exercise in standardization. (The network was subsequently renamed SERCnet when the Science Research Council became the Science and Engineering Research Council.) These various networks were typically backed by their own advisory committees, users' committees, hardware advisory groups, and special interest groups (which concentrated mainly on software). All these activities consumed a considerable amount of Chilton staff time.

The simplest method of sending a message between two computers is to set up a specific pathway between them and to send all the data down the link at once. A more complex way is to divide the data up into packets and send them over any pathway that links the two computers. The latter process is called 'packet switching'. Although more complicated, it has a number of advantages. For example, if one route between computers is blocked, the packets simply follow other routes. This implies, of course, a different configuration from the star (which was mainly useful when there was a central focus to the network). By the early 1980s, Chilton was experimenting with local 'Cambridge Ring' circuits on the site: these were also installed in many UK universities. As the name implies, the 'rings' had no single central focus. In the end, the American 'Ethernet' system won out against the Cambridge Ring - even though the latter was technically superior - and became the standard system installed at Chilton. Packet switching was already the subject of experiments in the 1960s, but its use for national and international networks depended on agreement concerning protocols (essentially instructions to the computer on how to put the packets together and how to find the recipient computer). Initially, the UK adhered to an international standard called X.25. However, this failed to gain sufficient support in the USA, and was overtaken in popularity in the 1980s by an alternative international standard, TCP/IP [Transmission Control Protocol/Internet Protocol], which was supported there. The changeover from one to the other was to have an impact on later developments occurring at Chilton.
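
To make the idea concrete, here is a minimal sketch in Python - purely illustrative, and no part of X.25 or TCP/IP themselves - of splitting a message into numbered packets, letting them arrive in any order, and reassembling them by sequence number.

    # Illustrative sketch only: not X.25 or TCP/IP, just the basic idea of
    # numbering packets so a message can be rebuilt whatever route each took.
    import random

    def make_packets(message: bytes, size: int = 8):
        """Split a message into (sequence number, payload) packets."""
        count = (len(message) + size - 1) // size
        return [(i, message[i * size:(i + 1) * size]) for i in range(count)]

    def reassemble(packets):
        """Rebuild the message regardless of the order of arrival."""
        return b"".join(payload for _, payload in sorted(packets))

    message = b"It works - and about time too"
    packets = make_packets(message)
    random.shuffle(packets)              # packets may travel by different routes
    assert reassemble(packets) == message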

The UK research councils recognised during the 1970s that academic networking was fragmented. Universities were setting up local and sometimes regional networks that differed from each other and from centrally provided services, so hindering communication. This problem led to the creation of a small Joint Network Team, based at Chilton, whose aim was to create a national network for education and research taking in the local and regional networks. In 1982, the team recommended that SERCnet (a packet-switched network) should provide the foundation for a national academic network based on the X.25 protocol. The recommendation was accepted and the new network - called JANET [Joint Academic NETwork] - was created in 1984. The team at Chilton was correspondingly expanded, and continued to run JANET until the early 1990s. At the beginning of 1991, a new JANET service using the IP protocol was launched to parallel the existing service using the X.25 protocol. By the end of the year, traffic on the new service exceeded that on the old service, and continued to mushroom in subsequent years. The rapid growth required a reorganisation of the way in which JANET was run. It was decided to set up the Joint Network Team as an independent association. The association was subject to some limitations, since it was publicly funded; so it was decided that it, in turn, should be fronted by a non-profit-making company labelled UKERNA [United Kingdom Education and Research Networking Association]. The new company would consist mainly of Chilton staff - indeed, its creation removed a significant number of the Rutherford Appleton Laboratory's networking team.

Meanwhile, important changes were taking place in the networking scene itself. By the end of 1993, JANET was in the process of being transformed into SuperJANET. A main aim of the JANET team had always been to increase the amount of traffic that member institutions could input to the network. In SuperJANET, the traditional cables that had been used hitherto were replaced by optical fibre cables. (In such cables, the information is transmitted as light through long stretches of transparent material. Many more messages can be transmitted this way than through a similar-sized electrical cable.) The significance of this change was that JANET could handle data and text, but SuperJANET could also handle all types of graphic (including moving pictures) and could do so interactively over the network. Chilton, as the home base, was naturally one of the first six sites to have access to SuperJANET. In subsequent years, SuperJANET went through a series of developments aimed at increasing both the capacity of the network and the range of institutions covered. When it was formed, UKERNA was given a wide-ranging brief - to take responsibility for the networking programme of all education and research communities in the United Kingdom. UKERNA's outreach now includes not only all further and higher education institutions, but also all schools, both primary and secondary.

JANET and SuperJANET were major national developments. There were equally important developments in international networking over the same period. At the end of the 1960s, the US Department of Defense set up ARPANET [Advanced Research Projects Agency NETwork] in order to carry out research into networking. In the 1970s, access to this network was made available to non-military users, who mainly used it for data transfer. By the 1980s, uses ranged more widely, including electronic mail, and the American network had developed international links - not least to JANET. The result of these developments has been the Internet: what started as a communication system for researchers has become a communication system for everyone. Of course, all good things have their drawbacks. So, for example, the growth of networking, nationally and internationally, led to the appearance of hackers - people who, for one reason or another, try to interfere with the operation of a computer. As a major establishment, the Rutherford Appleton Laboratory has had to wage a continuing war against hackers. In terms of viruses that can cause major disturbances to the system, Chilton may have to stop 50,000 attacks a day during an epidemic, and illegal attempts to access the on-site network are far more numerous than this.

The finishing touch to the Internet as it exists today came in the early 1990s, when the World Wide Web was devised. This provided a standard way for handling and presenting information transmitted via the Internet. It is significant that this development was initiated by Tim Berners-Lee at CERN. The information-handling needs of the particle physics and space research communities have been an important driving force for innovation in networking, and continue to be so. It was appropriate, therefore, that, in a ceremony at Chilton in 1997, Godfrey Stafford presented Berners-Lee with a medal from the Institute of Physics in recognition of the importance of his work. For Chilton was much involved in these developments. Bob Hopgood became an important member of the group concerned with the development of the Web - the W3C [World Wide Web Consortium] - and helped establish its European offices. By this time, Chilton was deeply involved with European networking as well as with links to the USA. In 1990, it became the UK member of the recently formed European Research Consortium for Informatics and Mathematics [ERCIM], which was intended to spearhead new developments in information handling in Europe. The two strands merged in 2002, when ERCIM became the European host for W3C. The Chilton influence on European computing remains strong: Keith Jeffery (Director, IT) is currently President of ERCIM.

Although the Rutherford Appleton Laboratory and Daresbury had always had good network links, the creation of the Council for the Central Laboratory of the Research Councils in the mid-1990s led to more joint computing activities. The biggest of these related to improvements in data handling via networks. By the end of the 1990s, it was clear that upcoming applications of 'big science' would flood the existing SuperJANET network with data, creating serious delays for other users. The obvious example was data coming from CERN, where the Large Hadron Collider is expected to produce vast amounts of data - one estimate is 15 petabytes a year (a petabyte being a thousand million million bytes). What was required was a network that could transmit data with the maximum possible speed, together with methods of handling and analysing the data rapidly at input and output. Packet switching is an efficient way of sending relatively small amounts of information between a large number of sites. It is not the most efficient way of sending very large amounts of data between two specified sites. The decision was therefore made to set up a separate dedicated network for passing very large amounts of data between a relatively restricted number of sites. The new network has been labelled 'UKLight' (because it employs optical fibres) and, like SuperJANET, is being managed from Chilton by UKERNA. One of the problems which has had to be tackled is how the deluge of data can be handled by the local network at the destination site. The solution has been to distribute the data over computers at different sites, with each site handling and analysing part of the incoming data. The methods and software involved in this approach have been labelled the 'Grid'. (The analogy is with the electricity grid: users can obtain access to electricity - or, in this case, computing power - without needing to know where it comes from.) The World Wide Web allows access to vast quantities of information, but how well it can be handled depends on the properties of the particular computer. With an attachment to a grid, the properties of the individual computer become much less important as compared with the properties of the whole network. A paper by Keith Jeffery persuaded the Government to set up the e-Science programme, leading on to the establishment of a UK Grid. Chilton and Daresbury have jointly supported the National Grid Service for the UK academic world, but particularly heavy users of data are also setting up their own grids. The two of special interest to staff at Chilton are GridPP for particle physicists and AstroGrid for astronomers and space scientists. Not all grids face identical problems in terms of their working requirements. GridPP, for example, is primarily concerned with the volume of data to be handled, whereas AstroGrid has the problem of bringing together data created at different times and from a number of different sources. The existence of grids aimed at specific groups of users raises the question of wider access to the information. On this front, Chilton has been busy internationally, as well as nationally. It was, in particular, involved in the formation of the International Grid Trust Federation in 2005. This is intended to allow individual scientists to access any of the grids that are now appearing round the world.
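
The Grid idea can be illustrated with a short Python sketch - only an illustration, with hypothetical site names and a stand-in analysis function, not GridPP or National Grid Service software - in which a large dataset is split into chunks, each chunk is analysed 'at' a different site, and the partial results are combined.

    # Illustrative sketch only: farming portions of a dataset out to several
    # sites and combining their partial results. The sites and the 'analysis'
    # are hypothetical stand-ins, not real Grid middleware.
    from concurrent.futures import ThreadPoolExecutor

    SITES = ["chilton", "daresbury", "cern"]          # hypothetical sites

    def analyse_at_site(site: str, chunk: list) -> float:
        """Stand-in for an analysis job run remotely at one site."""
        return sum(x * x for x in chunk)

    def grid_analyse(data: list) -> float:
        """Split the data across the sites and combine the partial results."""
        size = (len(data) + len(SITES) - 1) // len(SITES)
        chunks = [data[i:i + size] for i in range(0, len(data), size)]
        with ThreadPoolExecutor() as pool:
            return sum(pool.map(analyse_at_site, SITES, chunks))

    print(grid_analyse([float(i) for i in range(1000)]))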

Handling information

Though computers and networking may seem the most striking aspects of Chilton computing, the provision of appropriate software has been equally important. In the early days, for example, staff had to be able to cope with a diversity of computer languages. Originally, computer users had to input data in a numerical machine code that was easy for the computer to digest, but less easy for the programmer to write. The next step was to provide a language that was easier to use and which the computer software could translate into machine code. Several languages were devised and remained in use for some years. But, in the mid-1950s, IBM introduced the FORTRAN [mathematical FOrmula TRANslating system] language, a simplified way of writing computer programs for scientists. The use of FORTRAN spread rapidly; it soon came to be the standard computer language used at the Rutherford Laboratory. Regularly updated and expanded, Fortran (now so familiar that it can be written in lower case) continues in scientific usage. Other computer languages are, of course, used at Chilton for particular purposes. For example, TeX has been used for producing text over the past twenty years: it is now the standard package for preparing scientific papers for publication. So far as computer languages are concerned, the Chilton staff have mainly had to react to what was devised elsewhere. This has not been true of software in general; for Chilton, either on its own or in conjunction with other groups, has pioneered the production of a range of software packages. For example, its staff played a major role in developing the HSL package, which made it possible to apply advanced numerical methods to a variety of problems. (The initials come from the fact that the package started life as the Harwell Subroutine Library.) In recent times, this package has been used by over 2,000 organisations worldwide. Similarly, pioneering work was done on database technology. This is now a foundation stone of business computing, and ideas from the work have been incorporated into a number of commercial projects.

However, it is on the graphics side that Chilton has played a particularly pioneering role. The information output in particle physics is essentially graphical in its nature. The output from a bubble chamber, for example, was a set of tracks. These were captured on film, and the tracks measured by a special machine attached - typically via a satellite computer - to the mainframe computer. The operating system to handle this and similar input was devised at Chilton. In essence, the work involves identifying patterns, and pattern analysis using computers became an area of Chilton expertise - to the extent that the Laboratory was asked to coordinate UK research in this area in the early 1980s. A similar amount of effort was put into producing graphical output, not only in picturing events, but also in producing graphs of the data. The need for good graphics programs increased after the merger with the Appleton Laboratory, since much astronomical work involves images. This kind of data handling was soon extended into the field of Computer-Aided Design [CAD], and, by the early 1980s, Chilton was offering external users access to its computer software in this field. By the mid-1990s, Chilton had been signed up to provide a CAD software support service for the whole of the European Union. Along with still images, Chilton has also pioneered work on animation. One of the earliest examples of computer animation in the UK was in the early 1970s, when Chilton provided input to Open University mathematics presentations on television. Chilton staff have since produced a series of animations - for example, for BBC's Tomorrow's World. The best known link with television has been the Channel 4 logo, which was devised with assistance from Chilton.

Because of its central role in the UK computing environment, Chilton has come to be regarded as an appropriate host for government ventures in almost any area of the field. In the early 1980s, for example, a team was set up there to handle a new initiative in industrial robotics. Somewhat closer to the Rutherford Appleton Laboratory's main interests was the Alvey programme of the mid to late 1980s. This programme was stimulated by continuing governmental worries about declining UK competitiveness in the computing world. It was aimed at four specific areas - Very-Large-Scale Integration [VLSI], Software Engineering, Intelligent Knowledge-Based Systems, and Man-Machine Interface. (The latter heading was later replaced by the more politically correct, if less general, term 'Human-Computer Interaction'.) Chilton had major responsibilities for organising and coordinating the programme as well as for providing the computing network infrastructure. The Alvey programme itself produced some valuable results, though the second part of the programme - the commercial application of the results - was curtailed by restrictions on government funding. But the work helped emphasise the importance of knowledge engineering (the ability to manipulate large quantities of information in such a way as to provide a knowledge base for users), an area of increasing interest at Chilton. There has been a progression of emphasis in the computing world over the past few decades - from data to information, and then to knowledge. This has been well reflected in the work at Chilton. An obvious example has been the growth of interest in metadata - perhaps best defined as data that describe other data. Thus a library catalogue provides an alphanumeric code for each book in the library. The code is the metadata; the book contains the actual data. The link between the two is, in this case, the library user who notes the code and then goes to find the book. The current drive at Chilton is towards assisting with this human sifting. 'Metadata is arguably the key facility for interoperability and intelligently-assisted user access to global information resources. ... Metadata can be used to provide query assistance to the end user, in order to achieve the ultimate goal "get me what I mean, not what I say".' [3]
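
The catalogue analogy can be put into a few lines of Python (the records and fields below are invented purely for illustration, not any actual Chilton catalogue): metadata records describe each dataset and where it lives, and a query over the descriptions returns pointers to the data themselves.

    # Illustrative sketch only: invented metadata records describing datasets,
    # searched to locate the underlying data.
    catalogue = [
        {"id": "IRAS-001", "subject": "infrared sky survey",
         "location": "/archive/iras/scan001"},
        {"id": "IUE-042", "subject": "ultraviolet spectra",
         "location": "/archive/iue/spec042"},
    ]

    def find_data(query: str) -> list:
        """Search the metadata (the descriptions) to find where the data live."""
        return [record["location"] for record in catalogue
                if query.lower() in record["subject"].lower()]

    print(find_data("infrared"))          # -> ['/archive/iras/scan001']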

The merger of the Rutherford Appleton Laboratory and the Daresbury Laboratory led to the formation of a new Department for Computation and Information, which joined together common computing interests at the two sites. The department only lasted for five years, at the end of which time computation and information technology were split into separate departments. Within a couple of years, there was a further reorganisation. Library, reprographic and photographic activities have all moved into the electronic handling of information. It was therefore decided to merge them with the main information technology activities on the site to form a new Business and Information Technology Department. The title of the new department reflected the belief that this convergence of different activities to a common electronic handling of information opened up possible applications of commercial interest. Correspondingly, the department was expected to provide innovative systems and services not only within the organisation, but also to the UK as a whole. (In fact, its brief ran wider, since it was also the national representative on W3C and ERCIM.) The new department again only lasted for five years: there was another reorganisation in 2006. Indeed, computing at Chilton has been subject to more reorganisation than any other group on the site. This no doubt reflects the continually changing nature of activities in the digital world, but it can be rather disturbing for the personnel involved. For the moment, e-science and e-business have taken centre stage. An important strand in the growth of interactive computing has been the attempt to provide users with realistic computer simulations. Such a virtual reality environment is familiar to anyone who has played a computer game, but the complexity of visualising changes in the scientific world - in examining biological processes, for example - is much greater. Software has now reached the stage where what can be simulated on a computer is becoming as important as what experiments can be carried out in the laboratory. This whole area of activity has been labelled 'e-science', and Chilton has been involved in a programme to promote e-science in the UK for some years past. The role of the new e-Science Centre is likely to keep Chilton computing staff busy for some time to come, whatever future changes may occur. For e-science is expected to assist activities in all the research areas of especial interest at Chilton:

'e-Science will be vital to the successful exploitation of the next generation of powerful scientific facilities operated by CCLRC on behalf of the UK research community, and to the UK's effective use of major facilities elsewhere. These facilities - synchrotrons, satellites, telescopes, lasers - will collectively generate many terabytes [million million bytes] of data every day. Their users will require efficient access to geographically distributed leading edge data storage, computational and network resources in order to manage and analyse these data in a timely and cost effective way. e-Science will build the infrastructure which delivers this.' [4]

References

[1] SRC, Atlas Computer Laboratory: Report of Council Working Party (1973) [ALP 6-73], p.7

[2] E. Hailstone to G. Manning, Colloquium on Interactive Computing, 8 June 1978 [RAL Box 471]

[3] K.G. Jeffery, 'What's next in Databases', ERCIM News No. 39, October 1999

[4] www.e-science.cclrc.ac.uk, accessed 19 January 2007

Prof Meadows died in 2016; the University published an obituary.
