
B. Submissions from Boards etc

  1. Astronomy, Space and Radio Board Submission
  2. Engineering Board Submission
  3. Nuclear Physics Board Submission
    1. Nuclear Structure Committee Submission
    2. Particle Physics Committee Submission
  4. Science Board Submission
  5. Joint User Liaison Committee Submission
  6. Steering Committee on Administrative Computing and Office Automation Submission
  7. Single User System Steering Group

B1 ASTRONOMY, SPACE AND RADIO BOARD SUBMISSION

1. INTRODUCTION

Computing plays an important role in Astronomy, Space and Radio Board (ASRB) supported science. Both central and distributed (eg Starlink) facilities are used and a recently conducted (September 1983) review of ASR computer use and future needs shows a trend away from the use of central facilities towards distributed computing.

A major part of the Board's work in astronomy, both ground and space based, is undertaken with the VAX machines distributed at the eight nodes of the Starlink network.

Theoretical work in both astrophysics and geophysics is undertaken both on the central mainframes at Rutherford Appleton Laboratory (RAL) and Daresbury Laboratory (DL) and on the CRAY at University of London Computer Centre (ULCC).

Future programmes in geophysical research will rely on a distributed network of super-minis, similar to that employed in the Starlink system. Good communication between nodes is essential. A central node, located at RAL, would be responsible for archiving and disseminating data from atmosphere and climate studies. In addition some use of central mainframe facilities is envisaged.

Finally, a continuing, small but highly cost-effective use of the central mainframes and of the Interactive Computing Facility (ICF) is envisaged for engineering design work in support of ASRB space and ground-based projects.

In the remainder of this document, further details of the ASRB computing needs are given for each of the areas outlined above. In addition some general comments are presented on the use of mainframes, the funding of computing services and related matters.

2. ASTRONOMY AND THE STARLINK NETWORK

The Starlink network of eight VAX 11/780 and 11/750 minicomputers was established in the first instance to provide a unified approach to the computer analysis of data taken at optical telescopes. In the past two years the Board has stated that the use of the network should be expanded to include the analysis of radio and X-ray astronomical data. This will lead to a further small expansion in the number of nodes.

Array processing capability will be required at some nodes particularly those mainly concerned with radio astronomy data. The role of physicist/programmers and node managers is evolving with an eventual need for a greater effort in writing applications software as the system software becomes better established.

Networking will remain essential and the Starlink system will need to maintain compatibility with future networks. The advent of remote telescope operation will, before the end of the decade, require a network of simpler hardware systems operating alongside the Starlink analysis machines and with connections to the ASRB facilities on Hawaii and Las Palmas.

In view of the substantial investment of effort in writing software for Starlink, it will be necessary for any replacement hardware to remain capable of running VMS and of supporting the Starlink environment.

3. THEORETICAL ASTROPHYSICS AND GEOPHYSICS

Work in these areas requires use of the central mainframes and of the ULCC Cray. Some theoretical atomic physics work in support of astrophysics is similar to Science Board atomic physics activities and has therefore a similar need for access to large state-of-the-art computers. While some of this work is being undertaken on the ULCC Cray, ASR users are dismayed at the difficulties of access (admittedly efforts are being made to improve this) and the lack of an appropriate peer review system. Thus, like Science Board, ASRB requires some time on the CRAY and on the CYBER funded by SERC and allocated by an appropriate peer review system. In the longer term, in order that UK theoretical astrophysics retains a position of world leadership, it is desirable that SERC fund and provide peer reviewed access to the next generation of advanced computers.

In addition to atomic physics and astrophysics, some geophysical problems (eg atmospheric modelling) require access to advanced computers such as the CRAY. Both areas will also require some use of central mainframes. This can probably be met from the allocation suggested in Section 6, Table II.

4. GEOPHYSICAL RESEARCH

The ASRB panel set up to examine the needs of Geophysical Data Processing has recently (October 1983) reported. The panel envisages a distributed network similar to that described in Section 2 for astronomy. A central facility at RAL would, in addition to establishing and maintaining catalogues and data bases, require a high speed network between a central node and the users within the JANET framework. A dedicated super-mini (VAX 11/780 or equivalent) would be required at the central node. In view of the large data storage requirement, optical discs would be necessary to reduce the cost of the storage medium and to improve speed of access.

In addition use of the central mainframes is envisaged as indicated in Table I. Since at the time of writing this summary, the report of the Geophysical Data Panel has not been discussed by the Board, the overall level of resource and, in particular, the division between central and distributed computing, may require further discussion.

5. ENGINEERING DESIGN IN SUPPORT OF PROJECTS

A small, though important, requirement for use of the ICF and of the central mainframes exists in this area. Mechanical (finite element analysis) and thermal design programmes are used in the early stages of many projects, both space and ground-based. In view of the high cost of this work if it were to be carried out in industry, it is necessary for both establishments and university groups to have continued access to these facilities.

6. ENVISAGED USE OF CENTRAL MAINFRAMES

With the exception of the Geophysical Data Requirements listed in Table I, the Board concluded that an acceptable level of central mainframe usage would be as outlined in Table II. As a small (compared to Nuclear Physics Board and Science Board) user of central facilities, the ASRB is much concerned about the effect of the mainframe charging policy on its ability to make use of central computing facilities. The Board intends to move to a policy where the computing needs (central or distributed) of major projects will be budgeted for and supported from within the project. It would therefore welcome a universally employed charging policy for CPU usage with central (ie Council) support of facilities for which the attribution of charges between Boards presents unusual difficulty.

Table I: Geophysical Data Requirements - Central Facilities

          1984/5  1985/6  1986/7  1987/8  1988/9  1989/90
CPU hrs       25      25      25      25      25      25
CMS AU       200     300     400     500     600     600

Table II: Overall ASRB Mainframe Requirements Excluding Geophysical Data

          1984/5  1985/6  1986/7  1987/8  1988/9  1989/90
CPU hrs      400     400     400     400     400     400
CMS AU      1200    1200    1200    1200     600     600
ICF AU       500     500     500     500     500     500

B2 ENGINEERING BOARD SUBMISSION

INTRODUCTION

The Engineering Board has made a substantial contribution to the development of the Council's central computing services. In particular, the special requirements of engineering users for interactive facilities led the Board to set up the Interactive Computing Facility (ICF) in 1976, which provided for a substantial number of widely distributed multi-user minis networked with facilities at Rutherford Appleton Laboratory (RAL). The ICF, now managed through the Central Computing Committee, gave a major stimulus to interactive working by engineers in academic institutions, and was influential in promoting interactive working more generally. The other Boards of the SERC have also benefited and the facility was eventually incorporated in the Council's central facilities in 1981.

The Engineering Board also supports other facilities networked through the ICF, including machines for the support of its microelectronics programme. It has promoted the use of the Distributed Array Processor (DAP) through its support of the mainframe facility at QMC; this machine is now widely used by all Boards of the Council, and will henceforth be funded as a central facility. The Engineering Board is also a significant user of the Council's central batch facilities.

FUTURE REQUIREMENTS FOR ENGINEERS

Apart from its interest in developing and using the central facilities, the Engineering Board has also provided stand-alone and single-user facilities where necessary. Provision of such facilities has expanded significantly over the last two or three years with the advent of viable single-user systems. It is apparent that engineering users will increasingly seek access to the powerful personal workstations now being developed, backed up by appropriate networking arrangements for access to big batch processors, file servers, print servers and software.

A mismatch is developing between the evolving requirements of engineers and the facilities provided by the ICF. New single-user workstations are not currently planned; furthermore, developments in engineering software have progressed beyond the stage where the existing hardware, procured some years ago, can support them. When the ICF was originally established it was always recognised that the initial arrangements would have to be superseded in response to a changing technical environment. Although it has taken longer than originally envisaged, these technical changes are now taking place and must be recognised in the review of the Council's central computing services.

Although it is difficult to identify with any precision the nature of the facilities and networking arrangements that may be necessary for engineers in future, the stated objective of the Network Executive is that each university campus should have a Local Area Network (LAN) with a connection via a Campus Switch to a Wide Area Network. Within each LAN there would co-exist single-user systems with one or more multi-user-mini type of system together with the usual central facilities provided by the Computer Board.

On these assumptions a number of issues can be identified which are of crucial concern to engineering users. In brief, they are:

  1. how should the existing pattern of support for hardware through the ICF and the Engineering Board evolve to meet the new requirements?
  2. how should software be provided in future?
  3. what will be the future demand for central batch facilities?
  4. where does responsibility for funding lie?
  5. what are the most suitable arrangements for the future management of central facilities?

PROVISION OF HARDWARE

The use of the ICF is at present below full capacity and declining overall. This is not a reflection of diminishing demand by engineers, but rather of their evolving requirements. Future provision will be predominantly in the form of single-user systems, and since the ICF no longer always provides the right kind of facilities, users are increasingly seeking resources through other means. Nevertheless, within the ICF, some multi-user minis remain highly effective and are fully used, while others are used little. The Engineering Board believes that in the shorter term capacity should be reduced in the context of a plan to meet the overall requirements and agreed with the Computer Board. Thereafter the aim should be to develop the kind of arrangements suggested above. Examination of current usage indicates that the PRIME machines are much more heavily used than the GEC machines. Although there is scope for some adjustments in the PRIME network, this should be retained essentially at its present level, at least in the medium term. Reductions should be made in the GEC network since most of these are 16-bit machines which are expensive to maintain and anyway obsolescent as they cannot run much of the SERC-supported software. The remaining support should be concentrated on the five 32-bit 4090 machines and the wider range of software they can offer. This smaller network of GEC machines should eventually be integrated with the series 63 GEC machines envisaged for the Alvey programme. There should be substantial savings in maintenance costs, some modest short-term savings in the cost of site contracts, and savings in RAL manpower in the range of 2-4 Direct Man Years (DMY).

PROVISION OF APPLICATIONS SOFTWARE

The Engineering Board believes that the issues regarding the provision of applications software are now of greater importance and urgency than those relating to hardware. Engineers require access to a wide variety of applications software packages, some of which are developed by the research community whereas others are available commercially. Needs are likely to become more specialised and more diverse; of particular importance, however, is that software should be portable across a wide range of hardware. (This is in sharp contrast to certain areas of science where users tend to write their own code.) Engineering users are faced with the problem of identifying the applications software they need and obtaining access to it. The requirement is at present much greater than the resources available for the identification, evaluation and mounting of software, for training and user support, and for publicity.

There is a considerable need for greater support for these activities from the Council. RAL already provides a significant level of user support and undertakes work, both intramurally and extramurally, on software development, but its great strengths in this area are not at present developed to the extent actually required by the research community. One objective of the review of the central service ought to be to free resources to devote to this task. Other facets which should be addressed include the need for arrangements for the exploitation of software through a central contracts organisation and a review of arrangements for the distribution and support of software to minimise costs to the user community.

CENTRAL BATCH FACILITIES

The Engineering Board has a continuing need for access to powerful central batch facilities and for central provision of software for work which cannot be undertaken using facilities available in universities or through the Board's own funding and for which Computer Board facilities are said to be unavailable. The Board has reviewed its current usage, and finds that a small number of user names account for more than 50% of current use. The Board is not convinced that a significant percentage of these large users - and indeed of the smaller users also - cannot and should not be accommodated on the mainframe machines at the London and Manchester regional centres and is studying ways of bringing this about. It is certain however that not all of the current usage can be so transferred since the CRAY and CYBER machines are not always appropriate for work on software destined ultimately for commercial application which calls for portability to machines in widespread use. Nevertheless Engineering Board usage should drop on this account. On the other hand the amount of use charged to the Engineering Board is likely to increase as a result of the recent installation of the Atlas machine at RAL. Much engineering work requires double precision, which runs more slowly on the Atlas 10 than on the IBM 360/195, thus increasing the number of IBM equivalent hours needed by about 30%. The Board seeks reassurance that the charging algorithm eventually adopted will not inflict a penalty on this account. Demand generally is also increasing. On balance, the demand from the Engineering Board for access to machines other than those at the regional centres is still likely to increase overall. Present allocations are primarily at RAL, but some usage is being incurred at DL; concentration of all services at RAL would be beneficial.
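
As an illustrative sketch of the charging arithmetic implied above (in modern Python; the base conversion factor is a placeholder and the 1.3 factor simply restates the roughly 30% double-precision penalty quoted, neither being a published SERC charging parameter):

# Illustrative sketch only: converts a job's Atlas 10 CPU time into
# IBM 360/195-equivalent hours under an assumed double-precision penalty.
# The base_factor is a placeholder, not a published SERC conversion figure;
# the 1.3 penalty restates the ~30% increase quoted in the text.

def ibm_equivalent_hours(atlas10_hours, base_factor=1.0, double_precision=False):
    """Estimate IBM 360/195-equivalent hours for an Atlas 10 job."""
    penalty = 1.3 if double_precision else 1.0  # ~30% more hours for DP work
    return atlas10_hours * base_factor * penalty

# Example: a 10-hour double-precision engineering run would be charged as
# roughly 13 IBM-equivalent hours under these assumptions.
print(ibm_equivalent_hours(10.0, double_precision=True))  # -> 13.0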

RESPONSIBILITY FOR FUNDING

When the ICF was established it was clearly recognised as the responsibility of the SERC to pioneer the use of interactive techniques and the development of the network. With the rapid progress in these fields, and the enormous reductions in costs for equivalent computing power, the distinctions in responsibilities between the SERC, the Computer Board and the Universities are becoming blurred. In some universities, facilities originally provided by the Engineering Board - such as multi-user minis for interactive work - now stand alongside similar facilities provided by the Computer Board. This calls into question whether support for the ICF should be regarded as a continuing charge on the SERC, and primarily the Engineering Board, in the longer term. Furthermore, there is no longer any clear boundary between the facilities which should be provided by the university or by the Computer Board as part of a well-found laboratory and those which should be provided by the SERC. This is particularly so in respect of the provision of single-user systems, which the Engineering Board is being asked to fund in increasing numbers, whereas there is a strong case to argue that as standard items of computing hardware they should be provided locally. Finally, the development of SERCnet into the Joint Academic Network highlights the present duality of funding provision for computing infrastructure.

The Engineering Board is seriously concerned that these issues have not been properly addressed. The Board is at present making substantial contributions to both central and single-user facilities which it believes may be either the responsibility of another funding agency, or which are not properly coordinated with provision elsewhere.

FUTURE MANAGEMENT OF CENTRAL FACILITIES

The Engineering Board envisages an evolution over the next few years of the nature and scope of provision of computing resources for engineers. This will involve changes in the ICF co-ordinated with planned future provision of single-user systems. The Engineering Board is the predominant user of the ICF and single-user systems, and it believes that user requirements could be coordinated in the most cost-effective way if both programmes were to be managed together under its own supervision. It proposes that both programmes should be transferred to the Engineering Board, provided that this is consistent with the eventual recommendations of the Computer Review Working Party and provided that appropriate financial arrangements can be agreed. The Board recognise that in the longer term these programmes may become increasingly the responsibility of the Computer Board, and that ultimately it will be necessary to develop a common approach to these activities.

The Engineering Board is aware of the suggestion being considered by the working party that the Council's central facilities should be integrated into those of the Computer Board to provide different facilities from those of the other regional centres but with comparable management arrangements and access. The Board is not in a position to determine the merits of this proposal, but asks that it should be considered in the context of meeting the Board's requirements, discussed above, in the most effective way. One point should however be emphasised. Software provision is related to the Council's own research programme much more directly than is hardware provision; planning and support for the provision of software should be retained in the hands of the Council and developed strongly irrespective of future arrangements for hardware. There is no reason why the two need be linked in future.

B3 (i): SUBMISSION FROM THE NUCLEAR STRUCTURE COMMITTEE

NUCLEAR STRUCTURE FUTURE COMPUTING REQUIREMENTS

(extract of the report of the Nuclear Structure Committee's (NSC) Working Party)

The nuclear structure work performed on the AS7000 at Daresbury over the last 12 months was analysed to determine the nature of the work being performed on the central computers.

The conclusions of the Working Party were discussed with the Data Acquisition and Electronics Working Party of the Nuclear Structure Facility (NSF) Coordinating Committee at its meeting on 12 October 1983 and our suggestions were favourably received. Dr J M Nelson, the chairman of the Data Acquisition and Electronics Working party, joined the meeting on 10 November 1983 and contributed to the drafting of this report.

Finally, the Working Party assessed the Nuclear Structure computing requirements over the next three years using the current load as a base and the questionnaire responses as a model, and estimated the cost and suitability of several practicable solutions. Although detailed configurations of specific equipment were used in these comparisons the Working Party has made no attempt to cover all the issues relevant to specifying a computer system and does not wish to recommend at this detailed level. In the WP's view any detailed proposal should be prepared by the Laboratory working in conjunction with the Data Acquisition and Electronics Working Party of the NSF Coordinating Committee.

2. NUCLEAR STRUCTURE COMPUTING REQUIREMENTS

Nuclear Structure computing can be divided into three main categories: data acquisition, data analysis and model calculations.

Data Acquisition

The data acquisition needs are adequately met at present by systems of minicomputers. These computers are essentially part of the experimental equipment and will have to be updated to keep in line with changes in experiments. They should clearly continue to be part of SERC's responsibility no matter what changes are made to the arrangements for central computing.

Data Analysis

Nuclear physics data analysis usually proceeds in two stages: first, event-by-event tapes are sorted to yield spectra, and then these spectra are analysed to yield peak intensities, shapes, etc. The latter task is necessarily a highly interactive one. The nature of this analysis has changed little since the first Computer Working Party and the present type of workstation is well adapted to coping with this task. An important change which has taken place over the past few years is the increasing need to study 2D spectra: this may require additional resources in the future. Because of the specialised nature of interactive analysis and its intimate connection with data acquisition, the provision of these facilities must also remain SERC's responsibility.
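
By way of illustration, a minimal sketch (in modern Python) of the first, batch-oriented stage - sorting event-by-event records into a 2D spectrum - is given below; the event format (a pair of detector channel numbers) and the 4096-channel range are assumptions for illustration only, not the actual NSF tape format.

# Minimal sketch of stage one: accumulating event-by-event records into a
# sparse 2D spectrum before interactive peak analysis (stage two).  The
# (channel_x, channel_y) event format and 4096-channel range are assumed.

from collections import Counter

N_CHANNELS = 4096  # a "4K by 4K" array, as discussed in Section 4 below

def sort_events(events):
    """Accumulate (channel_x, channel_y) pairs into a sparse 2D spectrum."""
    spectrum = Counter()
    for cx, cy in events:
        if 0 <= cx < N_CHANNELS and 0 <= cy < N_CHANNELS:
            spectrum[(cx, cy)] += 1
    return spectrum

# Stage two (extracting peak intensities, shapes, etc) would then be carried
# out interactively on the sorted spectrum.
spectrum = sort_events([(10, 20), (10, 20), (300, 4000)])
print(spectrum[(10, 20)])  # -> 2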

The present situation for sorting event-by-event data is much less satisfactory. Originally it was envisaged that all data sorts would be run as batch jobs on the central computers but in practice, because of the need to do this type of analysis interactively, only one group is doing this at present and they are using 80% of the nuclear structure time on the AS7000. The special needs of event-by-event sorting are discussed in detail in Section 3.

Model Calculations

The remainder of the nuclear structure computing load is model calculations and these tend to be reasonably portable. That is, they can be run wherever computing power is made available. Until recently they tended to need central facilities because of the large program size but with the widespread use of mini-computers with virtual memory operating systems we see, for example, groups running large coupled channel calculations at night on workstations.

The several theory groups require more substantial facilities. Some of their work can be satisfactorily mounted on the AS7000 at Daresbury Laboratory (DL) or the central computers at Rutherford Appleton Laboratory (RAL), but the more demanding TDHF and coupled channel calculations require the power of a vector processor. Since the Cray moved from DL to University of London Computer Centre (ULCC) these needs have not been met and the work of at least two groups has been seriously set back.

3. THE SPECIAL REQUIREMENTS OF DATA SORTING

As specified above, the main area of concern is the provision of facilities for event-by-event sorting. Not only is the present provision inadequate but we see this as the main growth area in the future.

In trying to estimate the need for this type of computing we have noted that Oxford and Glasgow have recently been awarded NSC grants which have provided them with sufficient capacity for the Folded Tandem and electron physics experiments, and that the main need is to provide for the sorting load generated by the NSF programme. From the replies received we see the load on the groups as follows:

Bradford:
at present they use the DAS (the Data Acquisition System at Daresbury) but they are not satisfied with this because a full day of sorting is required on the DAS to analyse the data from one day of beam time.
Liverpool:
at present they use about 60 hours per month on the AS7000. This will increase as improvements are made to the efficiency of TESSA.
Edinburgh:
do all of their sorting on their workstation.
Manchester:
present needs could be met on the 4190 if it is enhanced. They see a growing need which can only be satisfied by a central dedicated sorting facility.
Kings College London:
use the DAS at present but may use their GEC workstation when tapes are provided.
Birmingham:
are self-contained at present but envisage a need for a central facility as the volume of data increases.
DL:
in-house groups use the DAS (10 hours cpu per month) and NAS (2-3 hours per month). They see an increasing tape load.

From these responses we estimate that approximately 100-200 NAS equivalent hours per month would be used for data sorting if a convenient, interactive sorting facility were available with operator support to cater for batch and remote users.

The Working Party sees a further need for improved sorting facilities. At present no provision is made for physicists to sort data immediately off-line while doing experiments at the NSF. We consider this a serious deficiency in the system and believe that more efficient use could be made of beam time if suitable facilities were available. It would be highly desirable if preliminary results could be extracted before collaborations separated for a period of time. This might add a further 50 hours per month to the figures above.

4. POSSIBLE SOLUTIONS TO THE SORTING PROBLEM

There are at least three ways in which the sorting load could be met:

  1. increasing use could be made of central computing;
  2. workstations could be enhanced; and
  3. a separate computer could be set up at the NSF to provide this service.

The choice between these options will depend on feasibility and cost.

The cost of option 1 is difficult to estimate accurately because the effects of the Atlas 10 and of separating networking charges have not been worked out. Under the present charging algorithm the cost to the NSC of providing 100-200 hours per month for sorting would be £330K - £660K pa. It is difficult to see how the central computers could offer an interactive sorting service. This would require giving nuclear structure computer users priority when a large number of small users are trying to use the computer.
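
A rough consistency check of the quoted range is sketched below; the rate of £275 per NAS-equivalent hour is inferred from these figures rather than stated explicitly in the submission.

# Rough consistency check of the quoted cost range.  The £275 per hour
# rate is inferred from the figures in the text, not a rate stated
# explicitly for the present charging algorithm.

RATE_PER_HOUR = 275  # pounds per NAS-equivalent hour (inferred assumption)

for hours_per_month in (100, 200):
    annual_cost = hours_per_month * 12 * RATE_PER_HOUR
    print(f"{hours_per_month} hrs/month -> about £{annual_cost / 1000:.0f}K pa")

# 100 hrs/month -> about £330K pa
# 200 hrs/month -> about £660K pa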

It is possible that changes in the provision of a computing service at Daresbury could lead to a reduction in the amount of grant-supported work run on the AS7000. If so, it may be possible to tailor the service more to the needs of the local users and provide a priority service for urgent work.

Option 2 would be the most convenient system for many physicists but it is probably the most expensive solution as adequate facilities must necessarily be available at Daresbury and there will be an ongoing need for batch processing with operator support. If the workstations are seen as major sorting facilities then large disks and high density tapes will be required.

The Working Party feels that option 3, the provision of a computer at the NSF, would be the optimal solution to the nuclear structure future computing needs. We have in mind a computer of about the AS7000 cpu power, 4 Mbytes of memory, 1 Gbyte of disk space and 6250 bpi tape drives. Such a system might be a GEC 63/30, a VAX or one of the other machines available in this class. Such a system could meet all of the nuclear structure sorting needs foreseen at present. The fast processor and large memory would allow fast and efficient sorting for interactive users while the disk space would permit sorting into large (4K by 4K) arrays. The capital cost of such a system would be under £300K and the recurrent cost of maintenance and consumables would be about £25K pa. The system would require some additional manpower and the Working Party estimates one man-year per year for systems and one man-year per year for operations. There is a need for additional effort for mounting tapes, servicing peripherals and dealing with user enquiries and it is suggested that part of this might be provided from within the existing crew complement. It is recognised, however, that only a minimal level of service could be provided if effort were not available from other sources.
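
A back-of-envelope sketch of the storage arithmetic behind the last point follows: it is the disk, rather than the 4 Mbyte main memory, that makes large (4K by 4K) sorting arrays feasible. The 4-byte channel count size is an illustrative assumption.

# Back-of-envelope storage arithmetic for the proposed NSF machine.
# A 4-byte (32-bit) count per channel is assumed for illustration.

BYTES_PER_COUNT = 4            # assumed 32-bit channel counts
ARRAY_SIDE = 4 * 1024          # "4K by 4K" sorting array
MAIN_MEMORY = 4 * 1024**2      # 4 Mbytes, as specified in the text
DISK_SPACE = 1 * 1024**3       # 1 Gbyte, as specified in the text

array_bytes = ARRAY_SIDE * ARRAY_SIDE * BYTES_PER_COUNT
print(f"One 4K x 4K array: {array_bytes // 1024**2} Mbytes")   # 64 Mbytes
print(f"Fits in main memory: {array_bytes <= MAIN_MEMORY}")    # False
print(f"Arrays per 1 Gbyte disk: {DISK_SPACE // array_bytes}") # 16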

If such a computer were installed at the NSF it would have sufficient capacity to meet almost all of the experimental nuclear structure computing load (ie the non-sorting as well as the sorting). Some small provision would have to be made for central computing and we would have to contribute to networking costs. However, central computing allocations could be reduced to, say, £50K pa.

5. IMPLICATIONS OF TERMINATING CENTRAL COMPUTING AT DL

The AS7000 would become available for use elsewhere and the Working Party considered the possibility of employing it as the computer installed to handle the sorting load under option 3 above. It would need to be moved to the NSF counting room or be staffed by operators in the present computer hall, alternatives which are inconvenient or expensive, and the running costs are high, although these could be reduced by pruning. It has the advantage of incurring a low capital cost, but this is outweighed by the disadvantages.

The most serious implication of terminating the central computing service at Daresbury is the disbanding of the pool of technical expertise in all aspects of computing and electronics now supported by the Central Computing Committee. The NSF has leaned heavily on this expertise in building both the data acquisition system and the specialised data analysis systems now installed in the universities. Although the work was booked to nuclear structure, it represented only a fraction of the workload of some of the staff and the nuclear structure requirements alone would be insufficient to retain a viable pool of expertise in all the areas of concern. Support from a remote location would be a totally inadequate substitute.

The work of the theory groups could move easily to other centres providing that improved network access were made available via a substantial local program preparation service.

B3 (ii): PARTICLE PHYSICS COMMITTEE SUBMISSION

1. INTRODUCTION

The computing requirements for experimental and theoretical particle physics have been the subject of regular review in recent years. The most recent report, by Mulvey in 1982, emphasised that adequate computing facilities were essential in order to realise the full physics potential of the Nuclear Physics Board (and Council) investment in particle physics. Particular importance was attached to the need for:

  a) powerful central mainframe computing, linked with
  b) a network of distributed computers in the universities.

The Particle Physics Committee strongly endorses these comments, and also recommends that:

  c) common standards be established throughout Europe for the implementation and development of both hardware and software systems; and
  d) utilisation be made of developments in
    i) ultra-fast array processors
    ii) distributed interactive computing

The items a) to d) are considered in more detail in the following sections. A strategy for the future funding of particle physics computing is also suggested, and account is taken of the consequences that would result if insufficient resources were available.

2. CENTRAL COMPUTING FACILITIES

Already the responsibilities for data-analysis in large scale particle physics experiments are shared on an international basis. Such co-operation and agreed collaboration, essential for present experiments at both CERN and DESY, will be even more important in the planning and realisation of LEP experiments.

A central computing facility provides the necessary hardware, expertise and co-ordination to allow the community to respond to these demands in a coherent and efficient way. The centre provides:

  1. large CP capability for the analysis of basic data. In recent years the needs for particle physics have been increasing by about 10% annually, and are now at the level of about 7.5K hours of 360/195 equivalent per year. A similar rate of increase is foreseen in future years, resulting in a total requirement of about 14K hours of 360/195 equivalent by 1988/89 (a rough compounding check is sketched after this list).
  2. bulk storage of data needed for analysis by the physicists in universities. The current medium of original magnetic tape will continue to be replaced by less operator-intensive forms of filestore, consequently making data-analysis more time-efficient.
  3. expensive peripheral equipment, needed particularly in areas of high quality output, including graphical display.
  4. personnel and expertise needed to maintain and improve international computer communication links. The direct links to CERN and DESY are an essential feature of the modus operandi of UK particle physics for the foreseeable future.
  5. programming expertise from particle physicists to support the intricate and large software systems that both exist and are being developed for the design of experiments and analysis of data in multi-national collaborations.
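
As a rough compounding check of the figures in item 1 (the base year and projection horizon are assumptions; the submission gives only approximate values):

# Rough compounding check of the CP requirement quoted in item 1.  The
# 1983/84 base year is an assumption; the submission gives only "about
# 10% annually" and "about 14K hours by 1988/89".

base_hours = 7.5e3   # 360/195-equivalent hours per year "now"
growth = 0.10        # about 10% annual increase

hours = base_hours
for year in range(1984, 1990):
    hours *= 1 + growth
    print(f"19{year % 100}/{(year + 1) % 100}: {hours / 1e3:.1f}K hours")

# Five years of 10% growth gives about 12K hours by 1988/89; the quoted
# 14K figure corresponds to growth nearer 13% per year, or to 10% growth
# sustained for a further year or two.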

In a more general context the centre enables particle physicists to encourage a policy of international common standards by direct discussion with CERN and DESY on computers and system support. In addition the availability of centralised manpower allows efficient implementation of policy changes that will occur from time to time.

The centres should maintain cross-Board liaison on computing matters of common concern. The development of computer aided design (CAD) in engineering large scale experiments is a clear candidate for such a project.

3. NETWORKS AND COMPUTING IN THE UNIVERSITIES

The ability to carry out research in the university departments, as well as at distant accelerator laboratories, is an important requirement in the research policy for particle physics. Consequently the networking of computer resources is of the highest priority. However the introduction of the Joint Academic Network (JANET) must be accompanied by adequate funding and good co-ordination to ensure a smooth transfer from existing arrangements, with the needs of particle physics being voiced through a representative central body rather than from separate regions. Again, maintaining a common standard for the local computers and their software will ensure maximum efficiency. In this respect it is essential that adequate posts for physicist-programmers and system managers are funded at the universities. It should be noted that computing within a network environment removes the need to distinguish groups on a geographical basis and allows co-ordination to be effected through a central body with specialist knowledge relevant to particle physics.

Interactive computing will play an increasingly important role in physics analysis and detector design and here the facilities should be situated within the individual university groups. Also within the universities there is a need for some of the theoreticians to have access to ultra-fast machines. The development of array processors attached to a local serial mini-computer is likely to be a more cost effective way of satisfying such requirements than the use of a centralised vector processing super-computer.

Finally, the efficient use of all these facilities relies on the continued and improved training of particle physicists in computing techniques.

4. THE IMPACT OF TECHNICAL DEVELOPMENTS

Both the hardware configurations and software capabilities of personal workstations (PWS) are at present in a state of rapid development. It appears likely that within a few years a much cheaper PWS will be available with excellent interactive graphics and powerful analysis capability. Such machines will be an important ingredient in the strategy for local area network facilities of individual university groups. However the enhanced input/output facilities of multi-user mini-computers (MUM) are likely to maintain their competitiveness in the near future.

There are also developments in high bandwidth network communications for improving remote graphics.

Undoubtedly the use of dedicated emulators will continue to be exploited to carry out, at relatively low cost, the repeated functions involved in some aspects of data analysis and in Monte Carlo simulations. Such facilities can enhance the processing power of local computers, even more so when they become VAX-compatible.

Within the next few years attached array processors will convert MUMs into super-computer capacity, thus providing an alternative to vector processors such as the CRAY X-MP, particularly if array processor instructions become available in the CPU. Such facilities will be used for extensive theoretical QCD calculations, and may prove efficient for high statistics Monte Carlo studies for LEP experiments.

Also improvements will be available in the somewhat peripheral area of electronic mail through moves for compatibility at an international level.

5. THE FUTURE FUNDING OF PARTICLE PHYSICS COMPUTING

There are substantial differences between the computing needs of particle physics and the other users of Computer Board facilities. Particle physics requires specialist large scale computing, while most other users cover a diverse range of small scale activities. With a significant decrease in the cost of powerful mainframe computers it is no longer necessary to maintain a central purchasing policy. It would be more appropriate for the Nuclear Physics Board to optimise its choice of computer and at the same time provide a commercial service for the 25% use made by other Boards of SERC. The Nuclear Physics Board would pay a contribution to the management costs for the central site in proportion to its usage of the facilities and charge the other Boards for the services it would supply.

Such a structure could also provide the mechanism for long term planning, with individual Boards having the ability to make early provision for the effects of proposed changes in the future.

If central computing funds for the Nuclear Physics Board were subjected to a significant cut-back, of say £200K, the requirements of particle physics would put the highest priority on:

  i) preserving the full rota of batch production, ie 168 hours per week; and
  ii) improving the efficiency of staging input data, so as to achieve (i) with a smaller number of operating staff.

The increased central computing power of recent months has clearly been reflected in an enhanced rate of data analysis and consequent physics return. Any proposed cut-backs in the future will clearly reverse this progress.

6. SUMMARY OF RECOMMENDATIONS

The major features for particle physics computing requirements in the next few years are well defined:

  1. powerful centralised CPU and data storage
  2. distributed computers in the universities attached to local and international networks
  3. common international hardware and software standards to be established
  4. to utilise the rapidly developing fields of:
    1. ultra-fast array processors
    2. interactive computing in association with personal workstations
  5. to maintain the efficient use of resources by:
    1. training particle physicists in computing techniques
    2. moving towards automation of batch processing of basic data
    3. encouraging, where appropriate, inter-Board development of common projects, as well as links between the universities and the computer industry
  6. to follow a funding strategy that allows the unique demands of particle physics computing to be met through
    1. a flexible purchasing policy, existing within
    2. the overall long term computing strategy of the Nuclear Physics Board itself.

B4: SCIENCE BOARD'S SUBMISSION

The attached paper sets out Science Board's views on its long-term computing requirements and related issues. In preparing this submission the Board has viewed the future needs of the academic community in the context of likely hardware development and the resultant possible changes. The Board has also taken into account the likely impact of the full development of its central experimental facilities, the Synchrotron Radiation Source (SRS) and Spallation Neutron Source (SNS), on the provision of computing resources.

It is envisaged that the Board will face increased demand for computing in the years ahead. But this demand will be reflected in demand for state-of-the-art computing facilities and dispersed local facilities rather than in centralised provision for IBM and Interactive Computing Facility (ICF) time. Indeed, the demand for IBM and ICF time is expected to show a concomitant decrease.

It has already been established that the SRS facilities will require a substantial increase in computing time. Similarly there will be an increase in computing requirements when the SNS becomes fully operational. It is considered important that, whatever short-term arrangements are made to meet this increased demand, longer term options should be kept open so that the benefits of new hardware developments (eg array processors) can be realised quickly.

Financial/Resource Constraints

It is important to note that the attached paper deliberately takes no account of financial/resource limitations. But the constraints on Science Board's finances - for the Estimates Year and the Forward Look - are such that it is seeking to make reductions - probably substantial - in its financial commitment to central computing. As a result it will seek to concentrate its resources on its highest priorities, namely computing support for its central facilities and state-of-the-art computing (Categories I and III). The Board's present draft computing time Forward Look reflects this.

The Working Party is invited to CONSIDER the submission.

A. SCIENCE BOARD'S COMPUTING NEEDS

Science Board's computing needs fall broadly into four categories:

  1. Data collection and primary analysis (including real time feedback) for central experimental facilities (mainly the SRS, SNS and Central Laser Facility (CLF)).
  2. Analysis and interpretation of data collected under I.
  3. Access to special hardware (eg CRAY, CYBER) and software (eg Collaborative Computational Projects (CCP) programme).
  4. General computing support for alpha quality university research.

The needs of the Daresbury Theory Group would also fall into Categories II, III and IV.

These categories will be considered in turn with no implication about priority order.

Category I
This should be met by dedicated computers, the cost of which will be part of the experimental programme. The SNS and CLF have the necessary computers for this task. The SRS presents a special problem as the AS 7000 is heavily involved in primary analysis, being fed directly by the single dedicated VAX computer. The SRS Facility Committee has had a Working Party examining computer needs and concluded that within a few years a substantial part of the power of the AS 7000 may be needed for SRS Category I computing. It also recommended the purchase of three further VAX computers. We know of no cheaper alternative to the AS 7000 for this task but it has been suggested that if the AS 7000 became a dedicated SRS computer the operating mode could be changed to produce significant savings. It would be sensible to assume that in the future the AS 7000 is wholly a Science Board responsibility.
Category II
This cannot be completely separated from Category I. For example, the PUNCH system on the SNS has three levels of dedicated computers integrated with a fourth level on the Rutherford Appleton Laboratory (RAL) mainframe. Although it can be argued that on-site provision for secondary data analysis is unnecessary given an adequate communications network, we have seen no evidence to suggest that this would be cost effective for either the SRS or SNS. The most efficient mode must be to retain data on site, to have on-site specialist software and to access this from remote work stations. The Theory Group at Daresbury is heavily involved with SRS analysis, which adds strength to the provision of Daresbury computing for this purpose.
Category III
For many years SERC has played a major part in bringing high power advanced computers to the academic community and these have left the UK in the forefront of many branches of theory and experiment. For the present, with the move of CRAY to University of London Computer Centre (ULCC), SERC has dropped out of this role. Whether it returns is a major question to be considered in the current review. The past success of the SERC programme strongly suggests that it should return, but that should be a decision reached jointly with the Computer Board.
The essential point for Science Board Users is that there must be some advanced computing power available in the UK reserved to a peer review system. It is unrealistic, because it is uneconomical, to suppose that state-of-the-art number crunching can be wholly done under open access with criteria such as first come - first served, equal shares for all or ability to pay.
In the immediate future Science Board wishes to have some time on both the CRAY and CYBER allocated under a peer review system, with the costs for this not necessarily falling directly on the University sector. Both machines must be organised so that long runs and large file stores can be provided under this category. In the longer term it recommends that the next generation of advanced computers should be, in the initial years, almost entirely reserved for peer review computing and provided by SERC. The latest hardware usually requires specialised software support and SERC has, in the past, demonstrated its ability to provide that.
The ICF can also be considered a special computer system, although less so now that most universities possess interactive mini-computers. SERCNET still provides a unique distributed ICF which is essential for Science Board Database activities.
Category IV
The Board's task is to support the best fundamental scientific research in universities. It must, therefore, be able either to provide the computing power through SERC or be assured that it is available from other sources. Past experience shows that major computing requirements, not necessarily requiring special hardware or software, cannot be provided adequately by university computer centres because they have other demands on their activities. The IBM type facilities at RAL and Daresbury have provided this power and it has been backed up with skilled software support. Science Board programmes like the Collaborative Computational Projects would not have been possible without reserved computer power and central support. If this provision were to be drastically reduced without increasing alternative facilities, it would have a major effect on Science Board activities. We stress also under this heading that if a greater role is to be provided by the Computer Board in this field then it must be prepared to establish a peer review system or use that already operating in SERC.

In conclusion Science Board would welcome a greater collaboration between SERC and the Computer Board providing it is accepted that top quality computational science can only be pursued with special provision of reserved time.

B. OTHER ISSUES

  1. Coordination of Software - It is believed that a good deal could be achieved through coordinated efforts. The Collaborative Computational Projects have already shown the importance and productive potential of such an interactive effort. The scope of such collaborative effort should be extended, with active support.
  2. Common Base Policy - The advantages of a common base policy are recognised. But such a policy should not be too rigid, nor should it be tied down to particular hardware or a single supplier. In a rapidly developing situation such as the present, any common base policy, to be successful, must be flexible enough to take advantage of developing possibilities. The standardisation of software is important and any decision on standardisation should be supported by the widest possible consultation with the user community.
  3. Networking - An evolutionary approach is the best to adopt at present. Complicated solutions or equipment should be avoided at this initial stage.
  4. SERC and UK Computing Industry - Closer links with the UK industry should be fostered but care should be taken that such a link would not result in a monopolistic situation. There should be a balanced approach so that both the UK industry and SERC could have the benefit of competition.

B5: JOINT USER LIAISON COMMITTEE SUBMISSION

1. INTRODUCTION

The Daresbury and Rutherford Appleton Laboratories User Liaison Committees held a joint meeting at Rutherford Appleton Laboratory (RAL) on 2 December 1983. The major agenda item was a discussion on the Future SERC Strategy for Computing with an objective to submit the views of the meeting to the Central Computing Committee (CCC) Working Party looking at this subject.

Dr Manning attempted to focus the discussion by summarising the three meetings of the Working Party to date. The meeting received a total of nine written submissions, one from each of the two Laboratories and the others from various user groups. The discussion was largely based around these papers.

2. SUMMARY OF DISCUSSION

The meeting agreed to bring the following points to the Working Party's attention:

Computing facilities must be Science driven.

Funding in the Five Year Forward Look, once agreed, must be stable. There was general dissatisfaction expressed with the present funding methods whereby Boards are able to pull out funds at very short notice. The meeting nevertheless recognised that the present funding problems are outside the control of the Boards.

There is an overwhelming case for the provision by SERC of central facilities which should provide:

The successful exploitation of experimental data is not possible without such large central facilities.

There are many examples of centrally supported software without which the successful analysis of experimental data and advances in Computational Science could not have been achieved (eg the Collaborative Computational Projects at Daresbury). The success of the Interactive Computing Facility (ICF) was largely due to the strong central support.

Centralisation does not necessarily imply centralised funding (ie CCC) or geographical centralisation.

Boards should continue to fund special facilities as they have done in the past (eg Starlink and ICF), but not necessarily via CCC.

"Fashionable" funding is not good as it disturbs the desired stability of the Five Year Forward Look.

The provision of a first class service must take precedence over the desire to promote the British Computer Industry.

SERC should continue to provide for computing on grants requiring facilities outside the present Computer Board provision. The peer review system is well established and works.

SERC computing facilities should not be just for central facility data analysis.

The next generation Vector Processor should be installed and supported by SERC not the Computer Board, provided it has the support of the Boards.

Although computing needs to be science driven, there was considerable debate on how to define the dividing line between single-Board funding and central funding. Individual views varied depending on the particular interest and there was no general agreement on this issue. Although the possible takeover of funding of computing at Daresbury by Science Board appears logical, it is certainly not so straightforward at Rutherford Appleton where there is no dominant Board. The view was expressed that direct Board funding is generally more successful but it was recognised that it is not practicable in all cases.

Doubts were expressed about the value of the role of the CCC.

The Joint User Liaison Committee invites the CCC Working Party to take note of the above points when discussing the Future Strategy for SERC Computing.

B6: STEERING COMMITTEE ON ADMINISTRATIVE COMPUTING AND OFFICE AUTOMATION SUBMISSION

In December 1981 Dr Manning produced his first report as Computing Coordinator which was considered by the Council's Directors in January 1982. The Report drew attention to the growth of non-science computing within Council and the introduction of modern office technology and electronic office concepts, and recommended a review of the computing requirements for these areas. The Directors endorsed the recommendation and invited Mr Visser, Director, Administration, to chair a Working Party to consider the position and to make proposals for the future.

The Working Party deliberated during 1982 and reported to Council in January 1983. It noted that the use of computers to assist with Council's administration was already well established. Experience at Daresbury Laboratory (DL) and Rutherford Appleton Laboratory (RAL) had shown benefits when producing managerial information; this has been particularly true when the information is compiled by aggregating numerous individual transactions and regularly presented in differing degrees of detail to many levels of staff (for example management of large capital facilities). In Central Office the major effort had been deployed on Research Grants and postgraduate Awards with some recent trials of personnel and manpower accounting. At the time of the Report there was no administrative computing at Royal Greenwich Observatory (RGO) and only some finance work at Royal Observatory Edinburgh (ROE).

The Working Party concluded that although there had been fairly extensive use of administrative computing, the position was not uniform across Council and, more seriously, there were hardly any common or compatible systems between Establishments. It would increase the efficiency and effectiveness of the work of Council and its Establishments (including Central Office as both an Establishment and the federal Headquarters) if greater use could be made of modern computing methods. Primarily, reliable information would be available more rapidly, and in more detail, than manual systems could achieve, so enabling the time of managers at all levels to be used more effectively. The advantages to be gained were seen as: greater productivity of staff at all levels, some saving of clerical effort and better control over administrative processes such as accounting and personnel management.

At the same time as the Working Party was in session the Government was preparing its reply to a critical report from the Treasury and Civil Service Committee on "Efficiency and Effectiveness in Government" which encouraged systematic moves to develop the mechanisms used in the Public Service for setting objectives, measuring resource inputs and outputs and assessing results. The Working Party felt that its aim should be a comprehensive finance and manpower management information system at each Establishment feeding a federal system for Council and its Boards at Central Office; this view was reflected in national terms in a statement of the Government's intentions (usually referred to as the Financial Management Initiative) issued as a White Paper (Cmnd 8616) in September 1982.

The Working Party made three specific proposals for the way ahead:

  1. A Steering Panel should be convened to decide in overview what should be done and in what order, to select Project Officers for each main task and allocate resources to them, and to liaise with the Computing Coordination Panel for supply of the computing resources. This body has been convened.
  2. A user requirements analysis should be carried out for the Vote/Management/Project Accounting elements and an assessment made of the suitability of the commercially available packages. If it seemed that one or more of the packages could be adopted, a trial would be carried out at DL in parallel with the existing system. This project has started.
  3. A personnel system holding factual (as opposed to judgemental) information should be introduced at all Establishments by developing the existing RAL system to cater for more than one group of users and the Central Office need for information aggregated across Establishments. This project has started.

The Working Party noted that most of the computing facilities for administration were being provided from machines purchased for science use and were usually run as batch routines with card input and paper output. This is now seen as unsatisfactory and the Steering Committee requires that its future systems be VDU-based, interactive, flexible and above all user-friendly. They would be expected to supply answers rapidly to complex queries and draw information from more than one database.

The question of computing resources for administration was passed to the Computing Coordinator who discussed the issue with the Computing Coordination Panel and then with Dr P Wakely and Mr L Clarke, who had been asked by Council to look at the overall resource implications of the development of administrative computing. Dr Wakely and Mr Clarke recommended that administrative computing should be provided by a separate machine to avoid conflict with the scientific service. Council did not fully accept this view and asked that in future serious consideration should be given to the use of the machines supplying the scientific service. It was generally agreed that there was insufficient knowledge of the pattern of user demand for modern systems and a sensible short-term solution would be to use the IBM 3032; this course of action was agreed by Council and the Central Computing Committee.

The present position is that two major projects are underway using the 3032 - the pilot study for a Council-wide financial management system using a software package from Management Science America (MSA) and a postgraduate student awards administration system using in-house software running under IBM's transaction processing package CICS and the TOTAL database management system. In addition it has been decided to buy (from MSA) a commercial payroll suite to replace the Council's in-house software packages. The managers of the research grants administration and the personnel system also wish to transfer to the 3032.

The 3032 is proving to be a less than perfect environment because of its limited memory and input/output facilities. As it is obsolete these disadvantages are not easy (or cheap) to rectify and in any event, by 1980s standards, the 3032 is very expensive to operate. Because of these points, and because better data were beginning to become available about the requirements of the administration community, the Steering Committee on Administrative Computing and Office Automation endeavoured to spell out in its 1983/84 Forward Look bid the future resource requirements.

The RAL Computing Division, who manage the 3032, produced a paper giving five options with costings over a four-year period against the users' requirements predicted for roughly 1985/86. Three main cases were examined:

The 3032 case does not look at all attractive. It would be expensive to replace and/or upgrade hardware to a state suitable for a reliable production service, and maintenance and current costs are also high. This option has therefore been rejected.

If the 3032 were to be replaced then it is clear that a 4381-sized replacement could cope with the present workload and a limited amount of growth. However there is no hint of an upgrade path from IBM and, in the light of experience with the 3032 (and on a smaller scale with the grants ICL 2904), the idea of buying the top of a range is very unattractive. As demand for administrative systems is rising steadily, the smallest large machine approach seems wisest. It is likely to eliminate the wasting of staff effort on packing or fine tuning and will allow users to run all their software on the same machine.

If the 3032's workload is transferred to the 3081-Atlas complex then it looks as if the workload could probably be accommodated until the end of 1984/85 with present equipment (on the assumption that the scientific users' reductions in funding are reflected in reductions of computing resource) and could certainly be accommodated if the D-K upgrade were to be sanctioned. Placing all the front-end activity on one machine carries a major uncertainty: no-one can predict, from experience or theoretically, the behaviour of the 3081 as a multiple interactive front-end machine and thus the loss of capacity from systems programs contention. If the computers were loosely coupled by putting a front-end on the Atlas 10 then the software contention problem disappears and it is cheaper to supply the same performance. Both of these solutions to the problem would be cheaper than installing a replacement machine and there would also be system software economies.

The Steering Committee does not wish to involve itself too deeply in the technical side of this debate (nor does it have the expertise so to do) but the various options have different implications for security (in the sense of privacy and protection against fraud), recovery from failure, science/administration contention and future enhancements. The Committee (and also the Auditors) in principle prefer the idea of a separate administrative machine. If this is too expensive in present circumstances they would prefer the loosely coupled option to the alternative of a shared two machine complex with a common front-end. The Steering Committee would, however, welcome the advice of the Working Party or the Central Computing Committee (CCC) on this point.

ANNEX A: OUTLINE REQUIREMENTS FOR ADMINISTRATIVE COMPUTING OVER THE FORWARD LOOK PERIOD

Application                        Active     Usage per Terminal   Base Store   Storage
                                   Terminals  (%3081)              (Mb)         (Kb)

Financial management               30-50      0.15                 15           0.5
Payroll/Personnel                  5-10       0.15                 15           0.5
Awards & Grants                    25-50      0.15                 15           0.5
Modelling/Simulation/forecasting   5-10       0.20                 50           1.0
Query packages                     5-10       1.00                 100          1.0
PROFS or similar                   40+        0.15                 50           1.0
Software development               5          0.20                 50           1.0
Batch                              5-10       -                    200          4.0

The cpu figures are based on a 3081D, which delivers about 10 mips; the above profile therefore shows a need for roughly 5 mips to sustain a transaction rate of one per second for financial management, grants and awards.

Since the Council currently has comparatively little experience of large scale administrative computing the figures given above represent only an outline forecast. Once more practical experience has been acquired it will be possible to make more accurate and detailed statements of requirements.
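As a rough illustration of how a profile of this kind rolls up into a machine-level requirement, the sketch below (in Python, purely for illustration) aggregates per-terminal cpu percentages over the active terminals of each application and expresses the result in mips on a machine of about 10 mips, the figure quoted for the 3081D. The per-terminal percentages and terminal counts in the sketch are hypothetical illustrative values, not the Annex A figures, since the units of the usage column are not fully specified.

    # Illustrative capacity roll-up for an interactive administrative workload.
    # The application figures below are hypothetical; they are not the Annex A
    # values, whose units are not fully specified in the original report.

    MACHINE_MIPS = 10.0   # a 3081D delivers about 10 mips (figure quoted above)

    # application: (active terminals, % of the whole machine used per terminal)
    profile = {
        "financial management": (40, 0.6),
        "awards and grants":    (35, 0.6),
        "query packages":       (8,  1.0),
    }

    def required_mips(applications, machine_mips=MACHINE_MIPS):
        """Sum the cpu demand of each application: terminals multiplied by the
        per-terminal percentage of the machine, then converted to mips."""
        total_percent = sum(terminals * pct for terminals, pct in applications.values())
        return total_percent / 100.0 * machine_mips

    print(f"estimated interactive demand: {required_mips(profile):.1f} mips "
          f"out of {MACHINE_MIPS:.0f} mips")
    # -> estimated interactive demand: 5.3 mips out of 10 mips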

A definition of the Admin Computing FYFL has been omitted as being not entirely relevant to the discussion.

B7: SINGLE USER SYSTEM STEERING GROUP (SUSSG) SUBMISSION TO COMPUTING REVIEW WORKING PARTY

1. PHILOSOPHY

In the future, computer facilities will be provided by single user systems linked to a fast local area network with connections to wide area networks and to specialist processors. There will be a range of single user systems available, corresponding to the range of specific tasks that will be required of them. At the low power end, tasks such as management information and certain types of teaching can be done with low power machines. These will be cheap - not significantly more expensive than current terminals - and will have good "graphics" capabilities, where graphics implies the ability to manipulate images as well as draw them. Good interaction will also be possible via devices such as touch sensitive screens and puck/tablet combinations. The standard scientific/engineering workhorse will have a significant amount of power - not less than a VAX 11/750 - and will be capable of handling a wide range of design and software development tasks. We can expect higher quality graphics, extensive use of colour, and so on. As ever, there will be a requirement for very high power, state-of-the-art single user systems by a significant minority of users. These systems may well have attached processors of one sort or another, and are not the same as the specialist processors mentioned above.

Local area networks are currently reasonably fast and very reliable but not very cheap. We can expect over the next several years that speeds will increase - an order of magnitude is by no means impossible - and that costs will fall dramatically (though not necessarily at the very high speed end). The availability of such fast connections will mean that effective distributed operating systems will not only be possible but will be the norm. These local area networks will give fast connections to other researchers on the campus, to other local area networks, to wide area networks, and to special shared services such as file and print servers.

The specialist processors will essentially fill requirements that are abnormal in some sense: abnormal in terms of the cpu power required, in terms of the quantity of data to be stored or manipulated, or in the provision of special architectures such as parallel processors, data flow machines and so on.

There is no obvious place for multi-user interactive machines such as the Interactive Computing Facility currently provides, nor for the interactive facilities provided by CMS or TSO on mainframe IBM equipment at present. It is impossible to overestimate the importance of communications, and here we are talking about connections in a very wide sense. The high quality graphics and good interaction provided by the emerging single user systems will vastly improve communication between user and machine, and there is obviously much room for improvement in the interface presented to the user. Equally, the wider availability of local area networks and wide area networks will vastly improve communication between machines, and hence between their users. The net effect will be the capability of rapid transfer of research and development results and the ability to have effective cooperation between research groups which are geographically widely separated.

2. THE COMMON BASE

2.1 Motivation

The viewpoint expressed in the previous section will not become reality without specific action being taken by the funding bodies. Because single user systems will be relatively cheap compared to current multi-user systems, there will be a very large number of suppliers. To allow a free-for-all would produce immense compatibility problems in the hardware, software and communications areas. The Common Base attempts to resolve these problems by providing a standard range of hardware, standard software and standard communications. It is our opinion that the Common Base should be driven by the software, not the hardware.

The advantages of the Common Base are: communications (in the wide sense referred to above); effective support, made possible because there is a limited range of hardware; portability of research results; an increasing richness of the computing environment for the user over time, due to massive software development on a common base; and finally rapid transfer of technology from the research community to UK industry.

2.2 Current Status

The Common Base currently consists of the ICL PERQ (both versions one and two), the UNIX operating system, Fortran 77 and Pascal compilers and the GKS graphics system. Communications are the standard JNT products (X25, Cambridge Ring) with the addition of Ethernet, when appropriate, where fast PERQ to PERQ communication is essential. More specialised initiatives exist as a result of the Alvey programme and other software may well be supported on Common Base equipment by these (examples are LISP and POPLOG by the IKBS community).

It is not intended to document the history of the project, which is now about one year behind the original schedule. The major reason for this delay is grossly inadequate funding, currently 5 my/year; this should be compared to the 27 my/year provided for the establishment of the Interactive Computing Facility. At present SERC has had 132 PERQs delivered, of which about 110 are in the academic community. The status of the software is that most users have PERQ UNIX (PNX) version 1.5, which offers the C and Fortran 77 compilers, with Pascal available as a pre-release. The next release of PNX (Release 2.0) and GKS are now on field trial.

PNX 2.0 will be generally available in the Spring and will offer a range of features including better virtual memory management, faster compilation, a fully supported Pascal compiler, a screen editor and various tools from the programmers' workbench, particularly the source code control system (sccs).

In addition, single precision performance will be much improved, as will the speed and functionality of the window manager. Various software developments are also taking place, including provision of the NAG library, parts of TOOLPACK (funded by the Computer Board), LISP, POPLOG, PROLOG, etc. Numerical software such as SPARSPAK has also been ported to PNX and will be generally available soon.

The PERQ2 now available in the UK offers approximately a 15% price reduction on the PERQ1 while providing a larger capacity fixed disk with half the access time, much improved throughput at the external i/o ports, and much reduced noise and heat levels. Further options are available (these are retrofittable to PERQ1), including a 2 Mbyte memory board, 16K writeable control store and an A3 size landscape screen which offers the same resolution as the current A4 screen.

In the long term other equipment will be joining PERQ in the Common Base and SERC, as mentioned above, has undertaken a major survey of the single user systems market. Over 80 vendors replied to the original operational requirement, demonstrating that one of the major motivations in setting up the Common Base was all too valid. By the end of 1984 it is likely that another machine will be supported which should offer comparable graphics performance to PERQ but with significantly better cpu and virtual memory management (it will not be as cheap as PERQ - likely costs for a configuration comparable to the PERQ2 are £30-40K).

3. SERC/COMPUTER BOARD COOPERATION

The Computer Board has purchased some 46 PERQs, most of which have been installed at university computer centres in the UK. Queen Mary College have been appointed the primary site for software licensing and act as informal coordinators for the Computer Board machines. SWURCC act as a source of public domain software for PERQ. There is obvious scope for cooperation between SERC and the Computer Board funded sites, and a start has been made in coordinating software distribution, document production, general information collaboration, and provision of training courses. Separate arrangements exist, however, not only for software licensing but also for hardware and software maintenance. Some SERC users have been confused as to the correct site to contact for support of PERQ; there needs to be a clear policy on how these activities are to be coordinated.

QMC and RAL have been cooperating in developing a proposal to be placed before the CCC, the Computer Board and university computing centre directors. In its current state this proposal assumes cooperation between the various sites in the areas of user support; hardware supply and maintenance; communications; software; documentation; and coordination of a possible national academic common base.

The joint arrangement proposed the following scheme:

  1. The user arranges funding of equipment, its maintenance (including communications) and cost of any software necessary.
  2. The local computing service arranges installation and commissioning of equipment; network connections; installation of software; first line engineering support; acquisition of documentation; training; and supply of applications software.
  3. Central support provides software distribution where possible; production of documentation and training modules; second line, in-depth support; and a focus for information exchange and coordination of the Common Base service.

There are obviously a number of financial and political considerations that require further exploration before detailed proposals can be submitted.

4. FUTURE DEVELOPMENTS

4.1 User Community

At present about 110 PERQs have been installed, mainly in the Engineering Board area (note that this includes Computer Science). These machines support research valued at over £3M. Over the next few years, the facilities provided by the Interactive Computing Facility and by systems such as CMS and TSO on the IBM mainframes will be replaced by single user systems of various types. This implies that some 7,000 users will move to personal workstations providing a much wider range of features than current equipment: access to a much wider range of software tools, and the ability to work with much increased productivity. The latter is possible due to the availability of operating systems supporting multiple process execution, effective screen management, and greatly improved pointing tools for interacting with the system (such as the mouse).

4.2 Hardware

A range of single user machines will need to be supported, from types such as the IBM PC/XT (now capable of running most CMS functions), through machines of power comparable to the current PERQ, to a few very powerful machines required by the most demanding users. All the major areas (CAD/CAE, management services, IKBS, software engineering) must be supported.

In the communications area, the trend towards international standards will intensify. It is likely that a standard Local Area Network technology (probably Token Ring) will evolve and must be made available to the academic community. Gateways to Wide Area Networks and to current LAN technologies (Cambridge Ring, Ethernet) will be necessary. Access to shared resources (servers) on LANs, such as high quality print servers, file servers and archive servers, will be the norm.

4.3 Software

At present the power provided by the hardware is under-used due to the lack of appropriate software which exploits the new features. To make effective use of the advanced input and output facilities now available, a major programme of work must be undertaken. The work should improve markedly the man-machine interface, both in access to the tools available in UNIX and also to applications software. Software development is needed to aid the construction of user software, and to provide training tools to reduce the (seemingly ever-growing) support load with which computing services are faced. In the longer term, "intelligent" software will be available (eg from the Alvey IKBS initiative) to provide interfaces which can adapt to the users' skill levels and background.

4.4 Funding

It is vital, given the scope and importance of the developments outlined above, that there be a coherent policy within the Council on the provision and support of single user systems. The current levels of funding are totally inadequate: formally only 5 man-years/year and some £210k of other funding are allocated. Experience indicates that a viable programme requires a level of expenditure of 30 man-years and £1M per year.

It is essential that a sensible policy, including funding at levels indicated immediately above, be adopted as soon as possible.
