FORUM: January/February 1988

Editorial

We start the New Year with news of a bonus computing allocation for batch users. See Brian Davies' article for details of this not-to-be-repeated offer! If we do not receive enough funds for the next machine upgrade, perhaps we should ask users to chip in a few emulators. John Barlow's Story of the 370E describes its background and RAL's involvement.

It is always good to get correspondence from users; it proves there is someone out there. Bob Wells' letter about a common user interface raises some points that may have occurred to many of us as we struggle to cope with the differences between one machine and another. Does D in this editor mean Down or Delete? Does P mean Print or Purge? Confusion can cause disaster!

Ros Hallowell, User Support and Marketing, Central Computing Department

RAL Mainframe Computing Allocations 1987/88:

Additional Bonus Enhancement for Batch Processing

For the two years of the distributed funding scheme, it has been the practice to augment the Mainframe Computing resources purchased by 20%, in return for firm forward allocations of funds each year by the Boards of SERC. When the scheme was introduced, the charging rate for the resources was set to cover costs and allow for capital accumulation to provide for the inevitable need to upgrade the facility over a timescale of four or five years. In practice, the scheme has not achieved this idealised situation. Boards have only funded the computing resources which they could afford. The mechanism has in fact suppressed the use of central computing facilities; although annual running costs have been funded, the capital replacement requirement has fallen some 50% short of the target.

There is evidence of saturation in the use of the interactive CMS system during prime time. The funding shortfall has, so far, mainly affected MVS batch processing. Faced with this dilemma, in which existing academic demand was not adequately funded even though the basic running costs had been covered, it has been decided that the MVS batch processing resources purchased by the Boards should be augmented by a bonus of 40% for the financial year 1987/88 ending 3 April 1988. The decision took effect from 30 November 1987. Because of saturation in CMS running, only a very limited conversion of the bonus MVS Allocation Units to CMS Allocation Units will be allowed, sufficient to provide for equivalent overnight SLAC Batch processing.

The augmentation of the allocations for 1988/89 will revert to 20%.
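To make the arithmetic concrete, here is a minimal Python sketch of the scheme as described above; the purchased allocation of 1000 units is a hypothetical figure, while the 20% and 40% augmentation rates come from the article.

    # Bonus augmentation arithmetic for MVS batch allocations.
    purchased_aus = 1000   # MVS Allocation Units bought by a Board (hypothetical figure)
    normal_rate = 0.20     # standard augmentation under the distributed funding scheme
    bonus_rate = 0.40      # one-off bonus rate for the financial year 1987/88

    # Resources delivered in an ordinary year (and again in 1988/89):
    print(purchased_aus * (1 + normal_rate))   # 1200.0
    # Resources delivered under the 1987/88 bonus:
    print(purchased_aus * (1 + bonus_rate))    # 1400.0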

Brian Davies, Head of Central Computing Department

The Story of the 370E

Two 370E emulators constructed by the Rutherford Appleton Laboratory are now available to certain HEP users as attached processors to the laboratory's IBM mainframe. Each 370E emulator can supply CPU power equivalent to a quarter of an IBM 3081 Model K mainframe.

What are emulators and why are they being attached to the RAL mainframe in 1988?

As the name indicates, the 370E is a device which emulates a machine of the IBM 370 architecture. The definition of emulation is something that equals or excels the original, the original in this case being the true IBM product.

From the average user's point of view, the 370Es fully comply with this definition. The user can run his program either on the emulator or on the IBM mainframe and he will not see any difference in the results produced. In terms of computer power, the definition also applies, since the emulator has a superior performance to the upper end of the IBM 4300 range. Benchmarks with a very large and commonly used Monte Carlo program show the performance of the 370E to be approximately 0.25 times that of the IBM model 3081-KX mainframe.

In terms of capital cost, the 370E provides cheaper additional computer power than the IBM products.

This is of course the answer to the second question posed above; the emulators are being attached to the existing RAL mainframe because they offer cost effective computing.

It is interesting at this point to look in more detail at the history of emulators. The story started in the mid-1970s at the Stanford Linear Accelerator Center (SLAC) in California, USA. There the LASS experiment had just been proposed and would use a particle spectrometer capable of recording data from millions of particle physics interactions. The data in its raw form would consist of signals coming from the electronics of the particle detectors, signals that would need to be processed and analysed by computer in order to reconstruct the spatial form of the tracks emanating from the collision of the beam particles in the interaction region of the accelerator. The analysis of this spatial information would then enable the physical processes involved in the collision to be studied. The amount of computing time predicted for this analysis was enormous and far exceeded the capacity of the IBM mainframe computers then at SLAC. The SLAC group proposed to supply the necessary computing power through a set of special-purpose processors attached to their mainframe that would allow the parallel processing of their events. The attached processors would emulate the CPU of the mainframe, producing results identical to those of the mainframe. The emulation would be limited, such that only Fortran programs with all the Fortran I/O removed could be executed.

The emulator system was designed and built at SLAC by a group led by Paul Kunz and it became known as the 168E system.

This first version of an emulator had severe limitations which made it difficult to use. It was very user-unfriendly. First, owing to the limited emulation capability, the user had to remove all the Fortran I/O from his application program and then run the pre-translator. Then, owing to a very small program memory, he had to overlay his program heavily. The term "overlay" is probably unfamiliar to the modern generation of very fortunate computer programmers! This program preparation stage could take months of work for large programs.

However, it was used and succeeded in measuring several million events at SLAC. This successful application of the emulator technique at SLAC led other Institutes such as CERN, DESY and Saclay to construct emulator farms and these in turn analysed millions of particle physics events.

The Bubble Chamber Group at RAL entered the scene in the late seventies, when both the virtues and vices of the 168E were well known. The prospect of having to build and operate an emulator farm based on the 168E was not attractive. However, at the same time, a group at the Weizmann Institute in Israel, led by the late Hanoch Brafman, was working on the design and prototyping of a new second-generation emulator: the 370E. Brafman was one of the principal designers of the SLAC 168E and was very aware of its limitations. His new design was much more complex internally but would allow much easier use. The Rutherford group, led by John Barlow, decided to follow this path and formed a collaborative venture with the Weizmann and DESY laboratories to turn the existing prototype into a production system. By 1984, largely due to the work of Bob Hatley and David Botterill, RAL was able to offer an emulator farm system comprising emulators, interfaces to the Ethernet local area network and full support software. This Rutherford system was particularly attractive since it took the form of a turn-key system that was easy to install and operate without the need for extensive in-house expertise.

The Rutherford group has now made and shipped some twenty 370E systems. They can be found in Liverpool, Birmingham, Bristol, Cambridge, London, Heidelberg, Amsterdam, Zurich, Neuchatel, Barcelona and Geneva (CERN). There are even two 370Es at the bottom of a salt mine in Minnesota, USA.

The beauty and success of the 370E project can in fact best be seen at Birmingham, where almost half of the particle physics production work is now handled by a set of two 370E processors attached to their IBM 4341 mainframe. It is here that the user-friendliness of the 370E is best demonstrated. The user need add only one character to his existing JCL deck for his job to be processed on the emulator rather than the mainframe.

We have come a long way since the early 168E!

It is worth recalling that in the late seventies, critics of emulators were already claiming that the days of emulators were numbered and that they would be brushed into insignificance by the rapid advances made in the field of computer hardware by the industrial computer giants. Yet here we are in 1988, almost ten years later, attaching emulators to the Rutherford Laboratory mainframe! Why?

The reason is exactly the one given almost 13 years ago by the LASS group at SLAC.

The particle physics community is not only generating data from its particle detectors at an increasing rate, but the data itself is becoming increasingly complex, requiring much more computing power for its analysis. The computer industry simply has not provided easy-to-use additional computer power at a competitive price.

The latest predictions are that the British particle physics community alone will need of the order of 15-20 IBM 168 units of computer power per year (equivalent to 60 Mips) to analyse its data in the forthcoming years. Emulators could supply this if industry remains uncompetitive on price.
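As a rough check on those figures, here is a small Python sketch. The 15-20 units and 60 Mips come from the article; the 15-Mips rating assumed for the IBM 3081-K is an illustrative assumption, not a figure from the article.

    # Implied power of one IBM 168 unit, from the article's own figures.
    units_low, units_high = 15, 20    # IBM 168 units needed per year (from the article)
    total_mips = 60                   # stated Mips equivalent (from the article)
    print(total_mips / units_high, total_mips / units_low)   # 3.0 to 4.0 Mips per unit

    # How many 370Es would cover the demand, assuming (hypothetically)
    # a 3081-K rating of about 15 Mips.
    mips_3081k = 15.0                 # assumed rating, not from the article
    mips_370e = 0.25 * mips_3081k     # each 370E ~ a quarter of a 3081-K (from the article)
    print(total_mips / mips_370e)     # 16.0 emulators, on these assumptions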

John Barlow, Particle Physics Department

Report of VUG Open Meeting

On Thursday 10 December, the VAX (VMS) User Group held its second Open Meeting at RAL. The meeting was well attended, with quite a few participants from outside the regular VUG membership. During the lunch break, visitors were treated to a tour of the Atlas Computing Centre, where they were able to see the new Cray super-computer.

In his introductory review of the work of the VUG, the chairman, Bob Cranfield, raised the question of central VAX support, and this became a recurring issue throughout the day. SERC policy on VAX support was the specific topic of a presentation by Paul Bryant, in which he described how he planned to utilise the limited resources currently allocated at RAL. These resources are primarily intended for the support of VAX/VMS networking and the provision of a VAX front-end for the Cray. Sue Weston later gave a comprehensive description of the work of the networking team over the past year and followed this up with a clear description of their future plans. It was obvious that the team will have its hands full continuing to provide a much-appreciated service.

Official central support for VAX graphics is in a different category: it doesn't exist, as Chris Osland stressed in a remarkably comprehensive presentation of current graphics standards and options, which he was obliged to squeeze into the end of a crowded morning's agenda. He pointed out to HEP users that the lack of RAL support affects the GKS-3D product from GTS-GRAL as well as the RAL version of GKS, since Chris is himself currently designated the CERN UK linkman for GKS-3D.

Support was again a discussion issue following Willie Black's demystification of networking in the afternoon. Willie did an heroic job in presenting a simple person's guide to the bewildering jargon of networking and at the same time providing some "meat" for the better-educated members of the audience. Much of the ensuing discussion centred around the inevitable conflict between users' immediate needs and the JNT goal of OSI standards.

When the support question was really opened up for user reaction in the final open forum, it was clear that the audience thought pressure should be brought to bear on the SERC to increase its resources for VAX/VMS support. It was more difficult, however, to establish a consensus on which support areas should have the highest priority. This is clearly a matter which should be thrashed out as soon as possible within the VUG.

Although "traditional" areas of interest such as graphics and networking rightly occupied much of the day, Paul Kyberd gave us an interesting taste of topics to come with a description of his recent attempts to benchmark some add-on processor options for VAX systems. Among the benchmarks were some fascinating results for those well-known DEC "add-ons", the Micro VAX 2 and 3. The figures should perhaps be quoted here, but, as Paul admonished, the only benchmarks you should really trust are your own.

Bob Cranfield, VUG Chairman