C&A Computing & Automation Annual Reports

Annual Report 1974

Computing Services

The only major hardware change made to the IBM System 195 Central Computer during 1974 was the addition in March of a third megabyte of main core. The system has now given consistently high performance for a period of three years and from this experience it is now possible to predict its maximum capability when fully loaded. After deducting time lost through faults, maintenance, machine development, etc., just over 8,000 hours of good time were available to users in 1974. Central processor utilisation averaged 83%. After deduction of overheads this provided users with 5,412 hours of accountable computer time, which might increase to 6,000 hours under full pressure.

Remote computing has continued to develop, and about 50% of all jobs were loaded at remote stations. At the end of the year there were nineteen operational remote batch stations, most of which are equipped with a group of ELECTRIC terminals - VDUs, typewriters and graphics. The number of ELECTRIC identifiers active during any one week rose from 200 to 300 in 1974, and by the end of the year accounted for over 60% of the total number of jobs submitted, with a peak of 8,000 jobs in one week.

Development work in remote computing is now directed towards networks, in collaboration with the Post Office's Experimental Packet Switching Service (EPSS). A link to the Advanced Research Projects Agency (ARPA) network via Professor Kirstein's nodal processor in London is operational.

Work is proceeding, based on GEC 4080 computers, to combine some local requirements (data collection, graphics, etc.) with RJE facilities. The work on graphics is directed towards removing local core-resident interactive activities from the central computer. It also has wider application for remote users of interactive graphics.

Work is well advanced on coupling HPD1 and HPD2 in tandem mode. During 1974, HPD2 measured a total of 500,000 events. CYCLOPS, used previously for the measurement of spark chamber film, was taken out of service in May.

CENTRAL COMPUTER

Over 100 terminals communicate with the central computer, of which about half are attached to RJE stations.

Operations

A statistical summary of operations is shown in the tables below, giving machine utilisation and a breakdown of compute time between categories. The machine was scheduled for a total of 8,309 hours. Machine availability was 98% (8,133 hours) and the time available to users was 8,046 hours. The total number of jobs run in 1974 (532,950) showed an increase of 18.4% over the 1973 figure. The average CPU utilisation was 83%, corresponding to 6,709 hours. After deduction of overheads the compute time accounted to users was 5,412 hours.

Table 9. Distribution of elapsed time (in hours) for Computer Operations
                                   First    Second   Third    Fourth   Total     Weekly Averages
                                   Quarter  Quarter  Quarter  Quarter  for Year   1974    1973
Job Processing                      1955     2097     1998     1996     8046     154.7   150.6
Software Development                  25       25       20       17       87       1.7     2.4
Total Available                     1980     2122     2018     2013     8133     156.4   153.0
Lost time attributed to hardware      56       31       55       21      163       3.1     2.5
Lost time attributed to software       6        2        3        2       13       0.3     0.4
Total Scheduled                     2042     2155     2076     2036     8309     159.8   155.9
Hardware Maintenance                  23       29       17       14       83       1.6     1.3
Hardware Development                  49        0       61        0      110       2.1     2.6
Total Machine Time                  2114     2184     2154     2050     8502     163.5   159.8
Switched Off*                         69        0       30      134      233       4.5     8.2
Total                               2183     2184     2184     2184     8735     168.0   168.0

* These figures include shut-downs of 69 and 131 hours over the Christmas periods.
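The figures quoted in the text and in Table 9 are mutually consistent, as a short check shows (figures taken directly from the report; a 52-week year is assumed for the weekly averages):

```python
# Cross-check of the 1974 operations figures (all values in hours).
scheduled = 8309        # Total Scheduled machine time for the year
available = 8133        # Total Available (job processing + software development)
job_processing = 8046   # time available to users
cpu_hours = 6709        # central processor busy time

assert round(100 * available / scheduled) == 98          # machine availability
assert round(100 * cpu_hours / job_processing) == 83     # average CPU utilisation
assert round(job_processing / 52, 1) == 154.7            # 1974 weekly average
```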

SYSTEM SOFTWARE DEVELOPMENTS

The basic software for the central computer operating system is the IBM-OS/MVT/HASP, with some local additions, and on-line activities are supported by the locally-written MAST/DAEDALUS/ELECTRIC programs.

OS/360

Late in 1973 IBM provided version 21.7 of OS/360, which was introduced early this year and was expected to be the final version of OS to be issued. However, at the end of 1974 version 21.8 was received and is being studied. Both versions show only minor changes and corrections from their predecessors.

The main change during the year was the enlargement of certain supervisor work-areas, to cope with the increasing load following installation of the third megabyte of main memory. The increases were:

System Queue Area   40 Kbytes   work space for OS
HASP                40 Kbytes   more buffers, improved overlays
ELECTRIC            30 Kbytes   improved overlaying
Link-Pack Area      20 Kbytes   more resident SVC modules
MAST                30 Kbytes   to maintain response with more users

HASP and COPPER

The HASP work space was increased to cope with the faster flow of jobs submitted remotely as work station usage increased. HASP can now be instructed by operators to select only jobs with priorities in a prescribed band, a facility of particular value during prime shift to avoid low priority jobs jumping the queue while others are waiting for disc or tape mounts. Further moves towards no-class input were made, the internal class being computed from the job parameters such as core and time requests. A 'no-restart' facility has been made available for jobs which should not be restarted after a system failure.
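The no-class computation and priority-band selection described above might be sketched as follows. The thresholds, class letters and job fields here are illustrative assumptions, not the actual HASP rules:

```python
# Illustrative sketch of no-class input and operator priority-band selection.
# Classification thresholds and class letters are assumptions.

def internal_class(core_kbytes, cpu_minutes):
    """Compute an internal class from a job's core and time requests."""
    if core_kbytes <= 200 and cpu_minutes <= 1:
        return "A"   # short express work
    if cpu_minutes <= 10:
        return "B"   # ordinary day-shift work
    return "C"       # long production work

def select_band(queue, low, high):
    """Operator-prescribed band: pass only jobs with low <= priority <= high."""
    return [job for job in queue if low <= job["priority"] <= high]

queue = [{"name": "J1", "priority": 12},
         {"name": "J2", "priority": 3},
         {"name": "J3", "priority": 8}]

assert internal_class(100, 0.5) == "A"
assert [j["name"] for j in select_band(queue, 8, 12)] == ["J1", "J3"]
```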

A new extended version of the COPPER control program has been introduced to help users cope with the near-saturation conditions existing on the central computer from time to time. It allows up to 8 levels of time allocation, of which most users currently see 5, viz. priorities 12, 8, 4, 3 and 1 for express work, day-shift and urgent long overnight jobs, bulk production, and two levels of non-urgent background work. Individual users' requirements often fluctuate from week to week, so it was made possible for several accounts to pool their time allocation; this proved much more satisfactory than giving each user the same small allocation each week. The overall COPPER allocations are agreed with the 195 Advisory Committee, and each category of users is now able to allocate CPU time within its own field.
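The pooling arrangement might look like the following minimal sketch; the class, interface and figures are illustrative assumptions, not the COPPER implementation:

```python
# Minimal sketch of pooled weekly time allocation (hypothetical figures).

class AllocationPool:
    """Several accounts draw CPU time from one shared weekly allocation."""
    def __init__(self, weekly_hours):
        self.weekly_hours = weekly_hours
        self.used = 0.0

    def charge(self, hours):
        """Charge a job against the pool; refuse once the pool is exhausted."""
        if self.used + hours > self.weekly_hours:
            return False
        self.used += hours
        return True

pool = AllocationPool(weekly_hours=40.0)
assert pool.charge(25.0)        # one account's production run
assert pool.charge(10.0)        # a second account draws on the same pool
assert not pool.charge(10.0)    # only 5 hours remain; the job is refused
```

The point of pooling is visible in the last line: a fixed per-account quota would have refused the second account's job even though the group as a whole had time to spare.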

Data sets can now be mentioned on SETUP cards by symbolic references to the system catalogue. To give more information in response to status enquiries from users, a file has been introduced to hold information about jobs for as long as possible after their completion. The System Management Function has been allocated more space for recording and examining job turnround time.

MAST and DAEDALUS

In most respects the MAST and DAEDALUS system for on-line applications was unaltered, but changes were made to allow handling of lower-case text in messages as well as upper-case, and to allow a terminal to be used in conversational mode. In this mode input and output messages alternate, whereas previously a terminal or program could send any number of messages before being required to receive a reply.

ELECTRIC

Documentation facilities in ELECTRIC have been extended considerably during the year. It is now possible for lower-case alphabetic characters to be entered at terminals and held in ELECTRIC files. In conjunction with a documentation processor and page layout instructions, this new facility allows text stored in files to be printed on a local line-printer specially equipped with upper and lower case characters. It has been used extensively, for example to produce manuals (including the Supplement to the ELECTRIC Users' Manual).

A user can now send a one-line message to another user (or to several simultaneously) by means of the MESSAGE command. The message will be received immediately if the user is logged-in, or stored for him if not. A longer message can be sent as a file by the MAIL command to a single user, providing his main directory includes a MAIL area.

An archiving scheme was introduced to ease the considerable pressure on filing space. It enables users to transfer ELECTRIC files to a mountable 3330 disc overnight and restore them again as required.

Other developments include automatic routing of line-printer output to remote workstations for jobs submitted via ELECTRIC, improvements to file security, and a scheme (not yet implemented) to allow transfer of ELECTRIC files to and from OS data sets.

IBM 360/195 Playing Chess, November 1974


GEC 2050 Workstations

Only limited changes were made to the software for the standard GEC 2050 workstations. This is maintained by the Laboratory, and has been consolidated into a bootstrap loading system. All stations now load a bootstrap program from their cassette loader. This submits a job to the central computer which in turn picks up the current production version of the workstation RJE program with the necessary configuration from disc on the central computer. This control program is transmitted to the station, over-writes the contents of core in the GEC 2050 and is then automatically initialised.

This technique ensures all stations use the same level of RJE program and greatly eases the introduction of modifications and correction of faults. The current program is version 3.1 of the multi-leaving emulator, which allows upper and lower case characters to be used. Software was developed to allow users of low-speed terminals to dial in to those workstations suitably equipped with the necessary hardware, instead of directly to the Rutherford Laboratory. Full ELECTRIC facilities were made available in this way to King's College, London and the University of Surrey.

GEC 4080 Satellite Computer

A GEC 4080 computer is being set up to replace the ageing DDP-224 computer, which is now becoming difficult to maintain. The initial system comprises 128 kbytes of core, one 9-track tape unit and two 2.4 Mbyte disc drives. Basic system software is being developed, and a link (initially at 9.6 kbaud) to the 370/195 central computer is being provided. It is intended to attach the 4080 as a HASP workstation, with ELECTRIC and MUGWUMP facilities available from interactive terminals and disc-to-disc file transfer. There will be a variety of VDUs and graphics terminals connected, including a new large Tektronix 4014 storage tube display and a fast Hewlett-Packard refreshed display (see below).

The initial applications software in the 4080 for both patch-up and magnet design will follow the existing system closely. It will be written in Fortran, using the compiler provided by GEC. A new interactive graphics package is planned for the 4080, to replace the IDI package on the 370/195 and the DDP-224 graphics terminals. One possibility being examined is the standard GINO-F system, extensively used elsewhere and available from the Computer-Aided Design Centre at Cambridge.

Migration and Archiving

The IBM standard system management of libraries of users' load-modules leaves unusable gaps in the disc space reserved for them, and the clean-up process for recovering space is not automatically initiated when needed. Space recovery is made harder by scattered modules which are still in use but rarely change, and by others no longer in use; the latter waste space for as long as they remain undetected.

To deal with these problems, some changes to the standard system were started in 1973 and have been completed and brought into full operation during 1974. The new locally-modified system is believed to be the first practical version of automatic migration and archiving. Briefly, modules still under development are separated from unchanging members, while those not used for a long time are transferred to magnetic tape, whence they can now be retrieved by the users themselves. Ordinary libraries are listed weekly, and archived libraries monthly, as a public service. About one full 3330 disc has been saved by this work, with a gain in convenience.
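The decision rule behind this scheme might be sketched as below. The 180-day threshold and the record fields are assumptions for illustration, not the Laboratory's actual criteria:

```python
# Sketch of the migration/archiving decision (threshold and fields assumed).
from datetime import date, timedelta

ARCHIVE_AFTER = timedelta(days=180)   # assumed definition of "a long time"

def classify(module, today):
    """Decide where a load module belongs in the migration scheme."""
    if today - module["last_used"] > ARCHIVE_AFTER:
        return "archive to tape"      # users can retrieve it themselves later
    if module["recently_changed"]:
        return "development library"  # still under development
    return "static library"           # in use but rarely changing

today = date(1974, 12, 31)
old = {"last_used": date(1974, 2, 1), "recently_changed": False}
assert classify(old, today) == "archive to tape"
```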

Paper Economy

Shortage of supplies of standard computer output paper, and its rapid rise in price, led to serious efforts to cut consumption. Some compiler changes were made to reduce paper output, and default options were modified in some cases so that users have to make specific requests for printed output they would previously have had automatically.

Printing at 8 lines per inch (instead of 6) will be tried out soon, using the same type-font, but in all these economy measures the biggest single factor is the cooperation of users.
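The expected saving from the denser printing is easy to estimate: assuming a typical 11-inch page, the change from 6 to 8 lines per inch raises capacity from 66 to 88 lines per page, a quarter less paper for the same output:

```python
# Paper saving from printing at 8 lines per inch instead of 6
# (an 11-inch page length is assumed for illustration).
lines_at_6lpi = 6 * 11   # 66 lines per page
lines_at_8lpi = 8 * 11   # 88 lines per page
saving = 1 - lines_at_6lpi / lines_at_8lpi
assert saving == 0.25    # 25% fewer pages for the same number of lines
```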

Other Developments

Some minor improvements were made to the Linkage Editor and to extended precision division.

Programs are being developed to enable Rutherford users to benefit from the new FR80 graphics hardware due at the Atlas Computer Laboratory next year.

Rutherford staff have continued to take an active part in the organisations of IBM machine users (SEAS and SHARE), particularly in the OS, HASP, Performance Evaluation and Fortran committees and in the new Future Requirements Project.

Libraries

Each major group using the 370/195 has a disc-based user library of compiled programs. During the year each of these 40 libraries has been put into the automatic migration/archiving/cleanup system mentioned above. There are in addition six libraries of commonly used routines now available.

They are:

  1. AERE Library, of which a new version was introduced during the year.
  2. CERN Library, providing routines mainly for HEP users. The CERN 7600 library was received recently.
  3. Computer Physics Communications (CPC) Library, of which twenty issues have been received already.
  4. Numerical Algorithms Group (NAG) Library, which is increasingly used at Universities.
  5. Rutherford Library, of useful routines mostly written locally.
  6. Scientific Subroutine Package (SSP) Library, which was provided by IBM but is no longer supported by them and is falling into disuse.

Some track chamber members of the CERN Library were added to the Rutherford Library RHELIB on autocall, i.e. the programmer need take no action to have them supplied. The long-established CERN statistical package SUMX was brought up to date and made more generally available: it is heavily used by the HEP community.

Accounting

It is vital to keep track of usage of various parts of the computer system so that, for example, future requirements can be forecast and problems of individuals or groups identified. Accounting of jobs processed is regularly done, and facilities have been extended to monitor, for example, the turn-round time for jobs, the usage of disc data sets and tapes, and utilisation of I/O channels.

Databases

The High Energy Physics database obtained from the Stanford Linear Accelerator Center was adapted for use with the IBM-supplied STAIRS information retrieval system, instead of the SPIRES software with which it was originally associated. At present STAIRS relies on another package (CICS) for many support functions, but local software is being written so that STAIRS facilities can be accessed through the Laboratory's standard MAST/DAEDALUS system.

Utilities

The Administrative Terminal System (ATS) is an IBM-supplied program for document preparation. It was made available, at one terminal only, and its facilities will be compared with those recently added to ELECTRIC. It was used by Atlas Laboratory Staff for preparing a user manual.

A general utility package OSDITTO was obtained from IBM, with a view to replacing several diverse utilities by a single package. An early application was to provide a utility for copying multi-file tapes, which had previously been a cumbersome procedure. The STACKER facility was introduced for compressing experimental data on 7-track magnetic tapes into high-density 9-track tapes. This was applied to several experiments run at Rutherford and CERN.

Advice and Information

With the increased number of remote users the majority of queries handled by the Program Advisory Office now come from such users, either by telephone or via their terminals. Considerable effort goes into keeping them informed and discovering their plans and problems.

Documentation has concentrated on re-writing the Computer Introductory Guide and Reference Manual (CIGAR). This has proved much more laborious than anticipated because of the very considerable changes to the computer system and its interface with users which have taken place since the last edition. However, parts of the new CIGAR are now available.

COMPUTER NETWORKS

With the installation of the IBM 370/195 computer late in 1971 the Rutherford Laboratory accepted the obligation to provide a computing service to a large number of users authorised by the 195 Advisory Committee. Provision of remote facilities began in 1971 with work stations at the Institute of Computer Science in London (based on a PDP9) and the Universities of Birmingham (IBM 360/44) and Oxford (IBM 2780). As shown above, the number of workstations, the facilities available at them and the use made of them have all increased enormously within the last four years. Clearly it is popular and convenient for users to access the powerful central computer at the Laboratory from their local terminal.

As a further step in providing more flexible remote facilities, interest in computer networks developed here rapidly during 1974. Networks in which several major computers are linked together and can be accessed from remote terminals appear to offer significant potential advantages to the user. Firstly, networks greatly increase the amount of terminal equipment through which a particular central computer can be accessed. For example, if the CERN and Rutherford computer systems were joined in a network, access to the Rutherford 370/195 could be gained from all terminals at CERN, instead of only from the single Rutherford workstation there. Secondly, a user may have special demands such as access to large data bases or special program packages, which are best met on one particular computer. Thirdly, physics groups in scattered localities collaborating on experiments will each have work for their local main computer but may all prefer to process collaboration data on one computer, to avoid problems such as different word-lengths.

The first experience was obtained by connecting the Rutherford 370/195 to the ARPA (Advanced Research Projects Agency) network, which links a wide range of 'HOST' computers in the United States. Access is made via the PDP-9 at the Institute of Computer Science (at University College London), which functions as a 'HOST' computer on the network but appears to the 370/195 as a HASP workstation. Through these links terminal users on the 370/195 or any of its UK workstations can log into any HOST computer on ARPA to which they have authorised access, and terminal-type access to the 370/195 here (including full use of ELECTRIC) is possible for authorised users anywhere else on the network.

The Post Office is developing its EPSS (Experimental Packet Switched Service) of fast lines and exchanges for a UK network. The Laboratory is actively collaborating in designing protocols for terminal usage, remote job entry and file transfer across this network.