Issue 11: November 1990

Flagship Issue 11 (cover image) © UKRI Science and Technology Facilities Council

Year End Message

As year end approaches yet again, I will look back briefly on some of the main events of 1990 affecting the services the Atlas Centre provides for its users. As in most years, it has felt extremely busy, and as ever we hope that our efforts here at the Centre, together with those of our users, combine to help move science forward a little year by year.

For most of 1990 the Cray and IBM facilities at the Atlas Centre have been operating at near saturation level.

On the Cray, the peer-reviewed demand for usage now exceeds the capacity that can be delivered by about forty percent. While some degree of over-allocation is customary, forty percent is uncomfortably high, and in the past few months we have been recommending to committees that some new grant applications be redirected to the other national supercomputer centres. But since the other centres are also running near saturation, redirection of work can provide only temporary relief if demand continues to grow. To help control the situation, the Allocations Panel of the Joint Policy Committee for Advanced Research Computing has suggested the introduction of allocations of supercomputing time by discipline; as I write this the necessary consultations are not yet complete, but they should be in a few months' time. As regards the longer term, the Joint Policy Committee has submitted a bid to the Advisory Board for the Research Councils for funds to enhance the national facilities for advanced research computing into the mid-1990s. If granted, this bid should help alleviate the immediate problems and provide a means of acquiring next-generation facilities, though it is in the nature of the business that demand will continue to outstrip supply. At the moment the outcome of this bid is not known.

Otherwise, the most visible happening on the Cray this year has been the move from the COS operating system to UNICOS. This was a major undertaking which by no means went as smoothly as we had hoped, due to various factors which are set out elsewhere, but now, after strenuous efforts by our own staff and by specialists from Cray in the USA and UK, the service is stable and reliable. Recently it recorded more CPU hours per week delivered to users than ever before, and we are now well positioned to take advantage of the new developments that can follow from the move to Cray's mainstream operating system product.

In August we made an arrangement with Cray Research to make available a small part of the capacity of our machine for bureau work for external customers. The arrangements, which include some upgrades to the Atlas Centre hardware, are not expected to have any detrimental effect on the performance of the machine as seen by the research council peer-reviewed users; indeed, these users should benefit from the presence of the additional hardware.

The IBM service has run rather smoothly throughout the year, with essentially all six processors going flat-out 24 hours a day, seven days a week, all year. The Joint Study Agreement with IBM is progressing well, and the strategic users are effectively exploiting the vector, parallel and very large memory facilities of the machine. The Agreement's half-way point was marked by a seminar in May, attended by about two hundred academics and industrialists, on the benefits of large-scale computational modelling in science and engineering.

The normal SERC peer-reviewed workload on the IBM is heavy, particularly from particle physics data reduction, and there are substantial grants of time in other areas of science.

The present arrangements for funding the IBM service by the Boards of SERC and other sources were agreed in 1988 and were due for review this year. The review, now in progress, also covers the future requirements for this style of computing, and so its outcome will help determine the shape and nature of the IBM service for some years to come.

Other developments during the year included the arrival at the Atlas Centre from Harwell of Iain Duff's Numerical Analysis Group. I should also mention the bringing into service of the automated cartridge store, which is connected to both the Cray and the IBM and which proved itself under extremely heavy loading during the UNICOS conversion, when it was mounting over 800 cartridges per shift. Our facilities for producing video output from the Cray or IBM (or indeed from other machines on the network) have also developed well this year, and are in use by several projects in which the viewing of pre-computed moving images helps in understanding the phenomena being modelled. It is very clear that graphics and visualisation can add substantial value to large-scale computation, and we hope now to be able to improve gradually the facilities we can offer in this general area.

So, overall the year has been fairly eventful, and I hope that we are now well positioned to consolidate and extend the services we can offer to the benefit of our users.

Finally, let me conclude, on behalf of the staff of the Atlas Centre, by sending season's greetings to all our users and the very best wishes for 1991.

Brian Davies, Associate Director Central Computing

Cray Bureau Service

Previous articles on the Cray service have described the migration from the COS operating system to UNICOS and how this was facilitated by a loan from Cray UK of an extra 8 megawords of memory and extra disk space. SERC has now negotiated to retain this additional hardware on a rental basis in exchange for allowing a proportion of the Cray X-MP to be used by Cray for its commercial Bureau Service.

It has been estimated that the extra memory enables an increase of between 15% and 20% in the available CPU time, as well as allowing significantly larger jobs to be run (currently up to 12 megawords). The Cray Bureau Service is allowed to take up to 10% of the CPU, averaged over a week, in each of three periods which equate to daytime, overnight and weekend. It has been allocated 10% of the filestore, currently 3 gigabytes, and this is subject to the same rules of migration to off-line storage as the rest of the users' filestore.

Bureau users access the Cray via a high speed telecomms link which is connected directly to front-ends at Cray UK's headquarters in Bracknell. Marketing of the bureau service, user support and accounting for its use are all carried out by Cray staff so there is very little extra effort required from CCD personnel. The bureau service has been allocated a separate set of queues in order to reduce the possibility of contention between its users and the academic users of the Cray.

The proceeds from the sale of computer time to bureau customers, less any direct costs such as the line rental, can be used by SERC to purchase hardware upgrades from Cray. The arrangement is thus beneficial to all Cray users, providing extra capacity immediately and the potential for further enhancements in the future.

Tim Pett, Head of Marketing Services

Report from the Sixth Cray User Meeting, 25 October 1990

About thirty users of the RAL Cray X-MP/416 met in the Atlas colloquium room on 25th October for the first user meeting following the introduction of the UNICOS service. A very full agenda commenced with a description of the work of the Numerical Analysis Group by John Reid. John began his talk by saying that they now belonged to the Rutherford Appleton Laboratory and that we should stop describing them as the "ex-Harwell" group.

The work of this group has been described previously in FLAGSHIP. In addition to their work in sparse matrix methods and non-linear optimisation, John went into a little more detail on his work as a member of the FORTRAN standards committee; although he has now retired from the ANSI standards body, John is still a member of Cray's FORTRAN Advisory Board. He commented that in 1988 the major theme of this board had been "reliability", and in 1989 improved support for multi-tasking. The question often arose of the trade-off between speed and error checking in compilers; although the obvious answer was to have more compiler switches to define the optimisation level, this paradoxically makes it much harder to ensure the reliability of the compiler, because of the increased number of permutations that have to be checked after every compiler upgrade.

Roger Evans analysed the difficulties that had been experienced in the move to the UNICOS operating system and pointed out that, although the difficulties of the first couple of weeks had been aggravated by lack of experience in the in-house testing, the persistence of some of the major problems, such as tape support and the data-migration product, was due to inherent deficiencies in UNICOS version 5.0. A major loss of user data occurred through lack of error checking in Cray's data-migration code, and several hundred user files were lost. Cray put in a very large amount of effort to retrieve this data, but admitted that of course it should never have happened.

The final problem that had dogged the early UNICOS service turned out to be a hardware one, related to two defective batches of tapes, both of which had been assigned to the data-migration pool. Since the identification of this problem and the introduction of UNICOS version 5.1 in early September, the UNICOS service had been very reliable and was yielding more user CPU hours per week than the old COS service. The retention of the 16 Mwords of memory, financed by the bureau service, also meant that we would be able to offer users the chance to run larger jobs in the future.

John Gordon described the current state of the UNICOS service, the NQS job classes and the use of the SSD for I/O caching, which meant that we had been able to remove the explicit charge for I/O and give everyone the benefit of the increased speed of SSD accesses. It is planned to introduce a new "anti-social" job class for users demanding very large memory (up to about 12 megawords) or very large temporary disk space (up to about 3 gigabytes). The new job class would need to be very carefully controlled to avoid disrupting the smaller jobs or the fledgling interactive service.

The real benefits of the interactive service would be realised once we were able to run the X-windows graphics protocols across JANET. Most of the new developments in debugging, optimising and tuning tools are based on X-windows, giving the user a graphical representation of the performance of his or her program. A JNT sub-group is about to report on the use of the TCP/IP protocols across JANET; users were asked how they would make use of the greatly increased functionality, such as using NFS for remote file access. On a show of hands, the great majority of those present were now using local UNIX workstations and wanted a better style of UNIX-to-UNIX working across wide area networks.

Brian Davies gave a short presentation on the state of supercomputing in the UK, describing the current level of over-subscription for supercomputing resources, which was particularly severe on the Atlas Cray and had resulted in recommendations that some work be transferred to other machines. The overall management of supercomputing in the UK is set to change in April 1991, when "lead responsibility" moves to the ABRC, with SERC taking executive authority for managing the services at London, Manchester and RAL.

A submission from the ABRC to the DES was made in April 1990 to increase the amount of supercomputing available over the next five years. The case included funding both for a next-generation "conventional" vector supercomputer and for one or more major installations using massively parallel architectures. The funding was likely to be decided at the time of the Chancellor's autumn spending statement, and we were all anxiously awaiting the news.

After lunch, Gordon McBride of Cray UK made a short statement apologising for the lack of reliability of the UNICOS 5.0 release, and particularly for the loss of data caused by the data-migration software. Cray accepted that this was their responsibility and would provide the necessary CPU time over the next few weeks for users to re-create the files which had been lost.

Other presentations from Cray UK described the new applications being developed by Cray to support, in particular, the engineering and chemistry communities. Cray's MPGS (Multi-Purpose Graphics Software) is a graphics package that gets its display speed from a local Silicon Graphics workstation while storing the raw data and performing the large-scale numerical calculations on the Cray. Chemtools and Unichem are integrated chemistry applications that Cray is targeting at the industrial chemistry and pharmaceuticals area.

A round table discussion encouraged the users to make known their needs for the future development of the UNICOS service; aside from the above comments on UNIX-to-UNIX working, the major need mentioned was for improved turnaround for medium-length jobs (about 30 CPU minutes). This will probably mean that the NQS queues have to be restructured, with time splits at about 10 minutes and 60 minutes. RAL staff agreed to look at the possibility of providing this increased flexibility. Other comments in the area of networking clearly indicate that users are planning to transfer much greater amounts of data to feed their new workstations. The speed and functionality of JANET are clearly going to be of increased importance in the future.

Overall there was a great sense of relief that the transition to UNICOS was finally over and that the new system was now ready to deliver the benefits that had been anticipated when the move was planned.

Roger Evans, Advanced Research Computing Unit, Central Computing Department, Rutherford Appleton Laboratory

A Sunken Flagship?

Some personal thoughts on the demise of an Operating System.

Let me hasten to reassure you that the flagship referred to in the title is not the publication which you are now reading, but the operating system, MVS, often known as IBM's flagship operating system.

Some years ago, I wrote an article for the forerunner of this journal entitled MVT - An operating system to remember. Some cynics have suggested that the title of this article should have been MVS - An Operating System to forget!

While that is a debatable point, it has to be said that MVS never commanded the degree of affection and loyalty here that its vintage predecessor, MVT, did.

I am not going to examine in any detail the reasons why that might be so. But part of it, no doubt, is that by the time MVS went into production here, VM/CMS had become very well established as the user-friendly interactive system that it undoubtedly is. At that time, although there was a batch system within VM (CMSBATCH), it did not have many facilities and was not widely used. It was in fact fairly unusual then for IBM mainframe sites to be entirely VM-based. This tended to change both as people moved away from batch and as VM gained in popularity, especially in academic environments.

MVS has evolved in a relatively seamless way from the older batch-oriented systems such as MVT (and, before that, MFT and PCP): from the old 360 platform, through 370 (including the 303X, 308X and 309X machines), right up to the present day with S/390 and the ES/9000.

In principle, it is possible to take a load module that ran under MVT on the 360/75, and run it today under MVS with largely unchanged JCL. Because IBM have taken this cautious, evolutionary approach, they have protected the users' investment in code and JCL. But at the same time, they have not been able to take the opportunity of introducing more radical solutions, in architecture and software, that might have been very popular - such as replacing JCL with a universal command language applicable across batch and interactive environments, as exists in VM.
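To make the point concrete, here is a minimal sketch of such a job, using IBM's standard IEBGENER copy utility (the job name and dataset name are invented for illustration). JCL of essentially this form would have been accepted under MVT on the 360/75 and would still run under MVS today:

//COPYJOB  JOB  (1234),'A N OTHER',CLASS=A
//STEP1    EXEC PGM=IEBGENER
//SYSPRINT DD   SYSOUT=A
//SYSIN    DD   DUMMY
//SYSUT1   DD   DSN=MY.INPUT.DATA,DISP=SHR
//SYSUT2   DD   SYSOUT=A

The job simply copies the dataset named on SYSUT1 to printed output via SYSUT2; every statement here dates back to the original OS/360 design, which is precisely why that compatibility has been maintainable.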

In the early 80s, a SHARE user group committee recommended to IBM that they replace the JESes (Job Entry Subsystems), with a revolutionary Workflow Manager (which would also have had implications in the VM and Networking world). Again, this was too revolutionary, and we still have two versions of JES (JES2, which evolved from HASP, and JES3, which evolved from ASP), which IBM must go on supporting.

MFT was a fixed-storage system, with limited multi-tasking support (Multi-programming with a Fixed number of Tasks). When the early 370s came along, with Dynamic Address Translation and the possibility of Virtual Storage, MFT evolved into OS/VS1 (Virtual Storage 1). MVT (Multi-programming with a Variable number of Tasks), which was a little more flexible than MFT, with variable-sized regions instead of fixed partitions, evolved into OS/VS2 Release 1. This still looked very similar to MVT internally, and still only provided a single "address space" to run many regions. However, it did provide Virtual Storage, so that regions could be much bigger than in MVT. OS/VS2 Release 1 was later renamed SVS (Single Virtual Storage, referring to the single address space) when OS/VS2 Release 2 made it obsolete - because this was MVS (Multiple Virtual Storages). IBM were always very keen on re-usable acronyms, or even parts of acronyms!

RAL was rather late in converting from fixed to virtual storage systems, i.e. from MVT to MVS. This was because we were still running the 360/195s. Lovely machines though they were, they were still based on 360 architecture (although approaching 370 technology), and there was no possibility of running any flavour of VS on them.

However, the advent of the 3081 and the Atlas 10 made MVS a possibility.

Conversion began in earnest in 1983, under the management of Dr Margaret Curtis. A trial system was produced quite quickly, which ran as a guest under VM, in parallel with MVT.

The full-blown production service, with all local modifications in place, did not go into production until April 1985. After a further period of parallel running, MVT was finally switched off in September 1985.

Significant modifications to the basic system were made in the areas of Tape Management, which IBM did not provide, and online accounting and ration management. To do this, two major subsystems, TOMS and ACCT, were written, using the formal Subsystem Interface structure which MVS does provide. TOMS used a VSAM dataset for its Tape Database, while ACCT made use of the ACF2 database to store its ration information.

Later on, when it was decided to standardise on SQL for database applications, both of these were successfully converted to talk to SQL using another locally-written subsystem, TOVM. This had been designed to use virtual CTC connections to VM, so that, had it ever been decided to run MVS natively on a separate processor from the one running VM, it would have been technically possible to do so.

Another major modification was to set up a second MVS system which ran on the 3081, in a slave-master relationship to the MVS on the Atlas 10. This allowed us to make use of the spare CPU cycles on the 3081 which became available outside prime shift, when the interactive load was reduced.

Although MVS/XA was never run in production at the Atlas Centre, relatively recently an MVS/ESA system was developed for trial purposes. Had it been decided to run this in production, there would have been little technical difficulty in so doing.

What of the future? In IBM's eyes, MVS would still seem to be the flagship for its larger customers. At the same time, with the announcement of VM/ESA last September, IBM also seems committed to the growth and development of VM; in particular, it has unified the various strands of VM (VM/SP, VM/XA and HPO) into a single product available across the ES/9000 range.

However, VM and MVS, like all proprietary systems, must face the challenge of the growing pressure towards Open Systems.

IBM have a UNIX product (AIX), which is offered on PS/2, RS/6000 and also on mainframes. It currently runs only under VM, but may be offered "stand alone" at some time in the future.

Tributes

The MVS story has been a short one, but as far as it went, not a totally unsuccessful one.

There were many people who contributed to the development and conversion effort, from Systems, User Support, Resource Management and Operations groups, and also Daresbury Laboratory, with whom we were able to share ideas and code. Their fine work enabled a smooth transition to MVS and established a service which has been reliable, and relatively trouble-free.

I would like to mention particularly the mainframe operators, who always seem to handle bewildering systems and bewildered users with great resilience and tact, but whose efforts go largely unseen, whose praises go unsung, and who are too often forgotten. They adapted to MVS/JES3 (from MVT/HASP) remarkably quickly and have run it very satisfactorily ever since.

Mike Ellwood, Systems Group, Software and Development Division