Issue 25: April 1993

Flagship Issue 25
© UKRI Science and Technology Facilities Council

The front cover is a monochrome version of an image from the presentation by Dr Julia Goodfellow (Birkbeck College).

Chancellor of the Duchy of Lancaster Inaugurates Supercomputer

William Waldegrave, Chancellor of the Duchy of Lancaster, with the CRAY Y-MP
© UKRI Science and Technology Facilities Council

William Waldegrave, Chancellor of the Duchy of Lancaster, inaugurated one of the UK's most powerful research computers, the Cray Y-MP at RAL, on Monday 22 February 1993.

The new supercomputer is being used on the most demanding calculations in a broad range of science and engineering projects. The machine can carry out 2.7 billion calculations per second, which is about three times the power of its predecessor, the Cray X-MP/416. In his inauguration speech, Mr Waldegrave commented on the awesome power of the supercomputer and said, "British scientists can now take on some of the large computational projects which until now could only be tackled abroad. It will allow research on a whole range of disciplines from charting global warming and climatic behaviour, to the structures of new materials and industrial processes."

Dr Paul Williams, Director of RAL, Sir Mark Richmond, Chairman of SERC, and William Waldegrave, Chancellor of the Duchy of Lancaster
© UKRI Science and Technology Facilities Council

Mr Waldegrave was shown round RAL by Sir Mark Richmond, the Chairman of Council, and the Director, Dr Williams. As well as an introduction by Sir Eric Ash, Rector of Imperial College, and the Chancellor's inauguration address, the distinguished audience also heard talks by Dr Graham Richards, of the University of Oxford, on the use of computers in designing drugs, and Professor Brian Hoskins, of Reading University, on the modelling of the atmosphere and ocean.

Fiona Clouder Richards, SERC Private Office

Report from the Tenth Cray User Meeting, April 1993

The first user meeting since the introduction of the Cray Y-MP8I/8128 was well attended by about thirty users.

Progress Report

Roger Evans pointed out that the service had been very reliable, apart from two disk drive failures almost on the first day of the service! Hardware and operating system software have since given little trouble, and a list of system breaks over the last six months showed no great cause for concern.

Since the grant allocation of CPU time on the Y-MP has not yet caught up with the increased capacity compared with the X-MP, the batch job turnaround is generally excellent. In order to provide good interactive response with some fairly slow disk drives, it has been necessary to allocate a total of 14 Mwords of memory to disk caching. It is still possible to run user jobs of 100 Mwords.

UNICOS 7.0

UNICOS 7.0 is expected to be introduced by the end of April. There should be little impact on users, apart from the addition of some X Window productivity tools. The X file browser gives an icon-based display of files and their attributes (in a similar way to an Apple Macintosh). The X Fortran file browser provides some very useful traces of program flow and data usage, and lists all occurrences of variables. xproc shows the state of executing processes with a point-and-click interface; it is mostly of value to systems programmers and to anyone trying to find the current state of a process which is apparently hanging. For more details of UNICOS 7.0, see the separate article.

RQS

John Gordon discussed the intention to remove the VM and VMS front end station software and replace it with RQS. The station software is now very dated and RQS gives a more uniform interface, similar to the UNIX-based NQS. RQS provides the usual commands for job submission, status checking and deletion. The major change to the user interface is the removal of the old fetch and dispose commands, which are replaced by ftp commands for synchronous file transfer and ftua commands for asynchronous transfers. The Cray rft command provides a simpler interface to the most often used options. The removal of the station software is expected around June 1993.

Distributed Packages

It is possible to run the NQS software directly on a user's workstation, but this is still very much in the trial stage. Once some experience is gained of the management needed, wider use may be encouraged.

Cray's Unichem computational chemistry software is in use at a few sites. It provides facilities for job input and data display on a Silicon Graphics workstation coupled to computation on the Cray Y-MP. More widespread use is encouraged.

New scientific library software is expected soon from Cray, with improved level 2 and level 3 Basic Linear Algebra Subprograms (BLAS), new sparse matrix routines and new real-to-complex FFTs. Several Computational Fluid Dynamics (CFD) packages (Flow 3D, PHOENICS, STAR-3D, FEAT) are now available at the request of the CFD Community Club.

New graphics software includes new versions of UNIRAS, RAL-GKS (better PostScript and HPGL support) and RAL-CGM. AVS is installed on the Y-MP, but only modules for computational tasks are currently available. A full implementation will appear with AVS version 5, which is due shortly.

Parallel Processing

Stephen Wilson talked about the opportunities for parallel processing with UNICOS, using both multi-tasking and the Parallel Virtual Machine (PVM) message passing library. PVM is a heterogeneous computing package which carries larger overheads than auto-tasking when used purely on the Y-MP, but which has the advantage that work can be shared between very different machines. For instance, a Y-MP and a massively parallel machine can each tackle the most appropriate parts of a large problem. PVM can also be used to distribute tasks over a cluster of workstations; results obtained in the USA suggest that useful speed-ups are achieved on a few machines, but not on tens of machines.

PVM also offers an easy route to distribute an application between a workstation and a supercomputer. We will produce some example programs showing how this can be done, and if they are sufficiently small we will publish them in a future issue of FLAGSHIP.
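
As a flavour of what such an example might look like, the sketch below uses the PVM 3 C interface: a master task spawns workers, sends each a block of data and sums the partial results they return. It is purely illustrative (the executable name psum, the message tags and the problem size are invented here) and is not one of the example programs referred to above.

/* A hypothetical PVM 3 master/worker sketch: the master spawns copies of
 * this executable, hands each worker a block of numbers, and adds up the
 * partial sums the workers send back.                                     */
#include <stdio.h>
#include "pvm3.h"

#define NWORK      4        /* number of worker tasks to spawn            */
#define N          1000     /* total number of values to sum              */
#define TAG_WORK   1        /* master -> worker message tag               */
#define TAG_RESULT 2        /* worker -> master message tag               */

int main(void)
{
    int parent;

    pvm_mytid();                       /* enrol this process in PVM        */
    parent = pvm_parent();             /* PvmNoParent => we are the master */

    if (parent == PvmNoParent) {
        int    tids[NWORK], i, chunk = N / NWORK;
        double data[N], total = 0.0, partial;

        for (i = 0; i < N; i++) data[i] = (double) i;

        /* Start NWORK copies of this executable ("psum" is an invented
         * name); the return value should be checked in real code.        */
        pvm_spawn("psum", (char **) 0, PvmTaskDefault, "", NWORK, tids);

        for (i = 0; i < NWORK; i++) {          /* send each worker a block */
            pvm_initsend(PvmDataDefault);
            pvm_pkint(&chunk, 1, 1);
            pvm_pkdouble(&data[i * chunk], chunk, 1);
            pvm_send(tids[i], TAG_WORK);
        }
        for (i = 0; i < NWORK; i++) {          /* collect the partial sums */
            pvm_recv(-1, TAG_RESULT);
            pvm_upkdouble(&partial, 1, 1);
            total += partial;
        }
        printf("sum = %f\n", total);
    } else {                                   /* worker branch            */
        int    n, i;
        double buf[N], sum = 0.0;

        pvm_recv(parent, TAG_WORK);            /* wait for a block of work */
        pvm_upkint(&n, 1, 1);
        pvm_upkdouble(buf, n, 1);
        for (i = 0; i < n; i++) sum += buf[i];

        pvm_initsend(PvmDataDefault);          /* return the partial sum   */
        pvm_pkdouble(&sum, 1, 1);
        pvm_send(parent, TAG_RESULT);
    }
    pvm_exit();                                /* leave the virtual machine */
    return 0;
}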

Supercomputing Management Committee

Brian Davies relayed news from the Supercomputing Management Committee (SMC): a new Fujitsu VP2400/10 vector supercomputer is expected to be operational in the Manchester Computing Centre in September 1993. The SMC will provide support for the KSR-1 machine at Manchester in order to make part of its resources available as a pump-priming facility.

Upgrades to the existing vector supercomputers are possible in 1993/94 but the cases will have to be made on the grounds of new scientific projects rather than just an increase in capacity. It would be possible to bid, for example, for an SSD for the Y-MP, but we would need to show, by means of an investment appraisal, what the benefits to the scientific programmes would be. Anyone wishing to contribute to this scientific case, please contact me.

The SMC is currently engaged in a procurement exercise for a "high performance computer", expected to be a massively parallel machine, with around ten times the peak performance of the Cray Y-MP8I, and around five times the performance on real applications. On the current time scales, the machines would be benchmarked around August/September 1993 and decisions made in October/November. Decisions on the location of the new machine will take place in parallel with the decisions on the hardware.

Modelling Star Formation

Dr Alistair Nelson (Cardiff) gave an excellent description of his work on star formation, beginning with an account of the role of star formation in galactic evolution and the need to understand the evolution of galactic luminosity in order to use galaxies as "standard candles" in cosmology. The computing challenge of star formation is to follow the hydrodynamic and radiation physics of the interstellar gas clouds as they collapse over a length scale of 10⁷. Grid-based models are unsuitable, and two techniques - smoothed particle hydrodynamics and tree-structured gravity - are essential to make the problem tractable. Fortunately both can be vectorised and the codes perform very well on the Cray architecture.
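
For readers unfamiliar with the technique, the C sketch below shows the core of the SPH method: the density at each particle is a kernel-weighted sum over its neighbours. It is purely illustrative (a brute-force loop with the standard cubic spline kernel, not taken from Dr Nelson's code); a production code would restrict the inner loop to a neighbour list, for example one supplied by the gravity tree.

/* Illustrative SPH density estimate: rho_i = sum_j m_j W(|r_i - r_j|, h)
 * with the standard cubic spline kernel in three dimensions.              */
#include <math.h>

#define PI 3.14159265358979324

/* Cubic spline kernel W(r, h), normalised for three dimensions. */
static double w_spline(double r, double h)
{
    double q = r / h, sigma = 1.0 / (PI * h * h * h);
    if (q < 1.0)
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q * q * q);
    if (q < 2.0)
        return sigma * 0.25 * (2.0 - q) * (2.0 - q) * (2.0 - q);
    return 0.0;
}

/* Fill rho[i] with the density estimate for n particles with positions
 * x, y, z, masses m and a common smoothing length h.                      */
void sph_density(int n, const double *x, const double *y, const double *z,
                 const double *m, double h, double *rho)
{
    int i, j;
    for (i = 0; i < n; i++) {
        double sum = 0.0;
        for (j = 0; j < n; j++) {        /* the inner loop vectorises readily */
            double dx = x[i] - x[j], dy = y[i] - y[j], dz = z[i] - z[j];
            sum += m[j] * w_spline(sqrt(dx * dx + dy * dy + dz * dz), h);
        }
        rho[i] = sum;
    }
}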

Dr Nelson showed various slides and excellent video material showing the collision of two interstellar gas clouds and the resultant collapse to a spinning proto-stellar disk, about the size of the solar system.

Charging

After lunch, Roger Evans described a proposed change to the charging arrangements for large memory jobs. It has long been recognised that large memory, single CPU tasks prevent other jobs from using the machine and charging should reflect this. The algorithm should be fair, i.e. it should reflect the resource used, and consistent, i.e. the same job should always incur the same charge. The current algorithm is consistent, but not fair, in the sense that users of around 60 - 80% of the memory are over-charged, given the current job mix. There is little that can be done to tune the current algorithm, and indeed it is debatable whether any algorithm can be both fair and consistent.

Since the real need is to ensure that users with large memory requirements have at least considered ways in which they can minimise the impact on other users, it is proposed to charge in future simply for job CPU time and to restrict access to the queues for large memory jobs - say more than 30-40 Mwords. The Atlas Centre does not wish to be Big Brother, but merely to have the opportunity to discuss users' needs and to help in the interpretation of the results of auto-tasking. Auto-tasked jobs that run inefficiently can be a large system overhead, particularly with more than one running at the same time.

The exact memory limits, and how disputes would be handled (e.g. by reference to the Atlas Centre Users Committee), remain matters for debate. Those present at the meeting were happy with this suggestion, which was carried forward to the Users Committee and agreed there too. Any other user comments are welcome before this change is implemented in early May.

Discussion

There was a discussion on the exact structure of the current NQS queues: John Gordon requested that users should, wherever possible, supply a memory request with the NQS job, as otherwise the maximum for the job class is assumed. It should be possible to run more jobs more quickly, if better memory scheduling can be done. The medium and large memory queues currently have a break point at 6000 sec CPU time, to distinguish between development and production jobs; the feeling from the support staff is that, although there have been only a few complaints of poor turnaround, this is probably too large a value. This break point will be changed to 3000 seconds for a trial period and user reactions are requested.

Users were keen that under RQS they should still have the same access to information about how their jobs are proceeding. Currently this is not true and the Atlas Centre will look at ways of providing more information. We also plan to provide an interface to the accounting database, so that all jobs for about the last two weeks, on a specific userid or account, can be listed.

Roger Evans, Advanced Research Computing Unit, Central Computing Department, Rutherford Appleton Laboratory

UNICOS 7.0

In the next few weeks we will be moving from UNICOS 6.1.6 to UNICOS 7.0. To a large extent this change will be transparent to users; however, in addition to several new features, there are some commands which are no longer supported. This article details the major changes that may affect your method of working.

Commands no longer supported

Changes under UNICOS 7.0

New commands at UNICOS 7.0

If you have any questions regarding the changeover to UNICOS 7.0, send them to US@UK.AC.RL.IB

Chris Plant, Applications and User Support Group

Introduction of the Alpha Service

By mid-May the DEC 7000 Alpha VMS Service will be in production. The service, initially provided on one processor, will very soon be extended by a further two processors and again, later in the year, by a second machine, with both machines sharing four processors.

Gaining Access to the Alpha Service

If you are a user of the IBM service, working on a project which has a block allocation from one of the SERC Boards, your project will already have an allocation of time on the Alpha Service. Just tell Resource Management your USERID and main subproject name and ask to be registered to use the Alpha Service.

If you are an SERC grant-holder, you may wish to have all or part of your time allocation transferred from the IBM to the Alpha. In order to try out the Alpha service and port your software, you can apply for pump-priming time (up to 5 Alpha hours). When you have decided how much of your time you wish to have transferred to the Alpha, this will be done on a 3:1 basis, i.e. 3 hours of IBM 3090 time for 1 hour of Alpha time.

If you do not have an SERC grant with time on the IBM service, but would like to apply to SERC for time on the Alpha Service, then this should be done by means of the usual grant application procedures, using the form AL54 for requesting Alpha time. Pump-priming time can be made available to try out the Alpha Service if necessary.

Applications for pump-priming time should be made by completing form AL54 and returning it to Dr B W Davies at the Atlas Centre.

Up to 10% of the CPU time of the Alpha Service is available for commercial customers. Please contact the head of Marketing Services for further details.

Accounting and Control on the Alpha Service

Once you are registered to use the Alpha Service your allocation will be accounted and policed under the CCD Central Accounting System (ACCT) in the same way as on the other central services. Details of usage are available via the OATS system on the IBM. Online weekly accounts will be available on the Alpha.

You will have been given a default disk space allocation of 10 Mb; this can be increased if necessary after consultation with the data manager. CPU time limits will be imposed on all interactive and batch sessions.

Documentation

A User Note giving an introduction to the Alpha Service, details of network access and advice on porting application software is available from the Documentation Office. It is also available in machine-readable form.

On the VM system, to view the User Note, enter

FIND ALPHA WRITEUP

To print it on a PostScript printer, for example ljet1 on the print server mysite, type

GIME USDOC 193 
LPR ALPHA LISTPS (p LJET1 @ MYSITE

On the Alpha it is available in the directory RALCCD_DOCUMENTATION as files ALPHA.PS and ALPHA.TXT. File AAAREADME.TXT gives details of all available online documentation.

A more comprehensive User Guide will be available at a later date.

Margaret Curtis, Resource Management Section

Forward with FDDI

About a year ago, we found that the RAL ethernet local area network was showing a few signs of overloading. Substantial files were being moved across the network between machines that could sustain very high data rates. In addition, we intend to put in a very large central file store which we expect to attract much additional network traffic as it becomes used for archiving and for live file space. As we thought that we had a year or so to sort out a solution, we set up a working party to propose a new local area network strategy.

At RAL we have had an extensive "bridged" ethernet for about five years. Bridges allow a large ethernet network to be broken up into smaller ones, with the bridges themselves only passing traffic destined for other branches of the network. The site is split into thirteen "villages" (or groups of related machines), each bridged onto a backbone. As most traffic is between machines within a village, there is only a small amount going over the backbone. This worked well until large transfers started between machines in different villages. The backbone is composed of a fibre optic "star" based on a fibre optic multiport repeater in the Atlas Centre. We allow any protocols as long as they do not adversely affect the network; however, the principal protocols are IP and DECnet phase IV. There is a little AppleTalk and Novell, a trace of Pink Book and an increasing amount of DECnet phase V. There are about 600 connections, ranging from the IBM 3090 down to a large number of IBM PCs. This network has been very reliable and successful.
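
By way of illustration only, the C sketch below shows the forwarding rule a transparent bridge applies (it is not the firmware in RAL's bridges): the bridge learns which port each source address lives on and passes a frame on only when its destination is unknown or known to be on a different port.

/* Illustrative transparent-bridge forwarding decision. */
#include <string.h>

#define TABLE_MAX 1024
#define FLOOD     (-1)   /* send out of every port except the incoming one */

struct entry { unsigned char mac[6]; int port; };
static struct entry table[TABLE_MAX];
static int entries;

/* Returns the outgoing port, FLOOD if the destination is unknown, or the
 * incoming port itself, meaning the frame is local and is filtered.       */
int bridge_decide(const unsigned char *src, const unsigned char *dst, int in_port)
{
    int i, out = FLOOD;

    for (i = 0; i < entries; i++)              /* learn / refresh the source */
        if (memcmp(table[i].mac, src, 6) == 0) { table[i].port = in_port; break; }
    if (i == entries && entries < TABLE_MAX) {
        memcpy(table[entries].mac, src, 6);
        table[entries].port = in_port;
        entries++;
    }

    for (i = 0; i < entries; i++)              /* look up the destination    */
        if (memcmp(table[i].mac, dst, 6) == 0) { out = table[i].port; break; }
    return out;
}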

The working party started by getting the "experts" to talk to us. We are most grateful to Cabletron, Cisco, DEC, NSC and 3COM for their advice. We decided that, as with the successful ethernet network, we wanted an FDDI backbone plus an FDDI network in each of the villages that needed it. FDDI (Fibre Distributed Data Interface) is a 100 Mbit/sec fibre optic token ring - in fact, it is two counter-rotating rings which self-heal if a cable fails in some way. Unlike the bridged ethernet we wanted to have a "routed" network. In a routed network the various parts of a network are separated by routers which give very good control over the traffic. We felt that this would be useful in protecting the network from what happened in any particular village, thus giving each village a lot more freedom to organise its own network. We also decided that the existing ethernets should be connected to the routers; this would improve the performance for the customers on ethernet and also aid security. The diagram shows the logical shape of the network.

RAL LAN Logical Topology: an FDDI backbone links 3COM routers for the Informatics, Neutron Division, Particle Physics, Astronomy and Computing villages, each router serving that village's FDDI network and ethernet; a backbone ethernet connects the non-FDDI villages and the Medical Research Council ethernet, and an NSC router connects the Y-MP and IBM 3090.

Five villages were interested in FDDI although others will undoubtedly become interested later.

A basic question was whether to construct a real FDDI ring with "dual attached routers" (a real dual fibre optic ring) or to construct a star with "single attached routers" (effectively a single fibre optic ring). The star topology would need a concentrator. A star was chosen mainly because it would be easier to connect and disconnect villages. We did not think a dual attached ring would be significantly more reliable; the proverbial digger would probably take out the whole network anyway, as many of the cables follow the same route for much of their length.

RAL has a "B" class IP network. That is, we have a portion of the IP address space which gives us 16 bits of addressing. As we wanted to break the site into a number of subnets we had three options. First, to get a new "B" class network, subnet it (break the address space into smaller subnetworks) and move equipment to the new network with the new addressing. Second, to get a set of "C" class networks (which have only 8 bit address spaces) and, again, move to new addresses. Third, to revamp the existing addressing. The first option was a "no-go" as "B" class networks are virtually exhausted and unobtainable. The second option would have needed about 20 class "C" networks, and the 255 limit on the number of entities in a network would have been a little restrictive. Thus, the third option was chosen. It turned out that only the Informatics village needed renumbering. The 16 bit "B" class address was split into 63 subnetworks of 1023 machines. This is a much more flexible option than the "C" class networks would have allowed.
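
The arithmetic behind that split can be illustrated as follows (a small illustrative calculation, not taken from the article): borrowing 6 of the 16 host bits for the subnet number leaves 10 bits per subnet, i.e. a netmask of 255.255.252.0, 64 subnet numbers and 1024 addresses in each, with the 63 and 1023 quoted above presumably allowing for reserved values.

/* Illustrative class B subnetting calculation. */
#include <stdio.h>

int main(void)
{
    const int class_b_host_bits = 16; /* a class B network: 16 bits of host space */
    const int subnet_bits       = 6;  /* bits borrowed for the subnet number      */
    const int host_bits         = class_b_host_bits - subnet_bits;

    /* Build the 32-bit netmask: 16 network bits plus the borrowed subnet bits. */
    unsigned long mask = (0xFFFFFFFFUL << host_bits) & 0xFFFFFFFFUL;

    printf("netmask            %lu.%lu.%lu.%lu\n",
           (mask >> 24) & 0xFF, (mask >> 16) & 0xFF,
           (mask >> 8) & 0xFF, mask & 0xFF);
    printf("subnet numbers     %d\n", 1 << subnet_bits);   /* 64   */
    printf("addresses per net  %d\n", 1 << host_bits);     /* 1024 */
    return 0;
}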

There was a condition in the tender that the equipment should be able to route DECnet phase V since we believe that this protocol set will be important in the future. We have shown that the routers can deal with the protocol, but we are still bridging DECnet at the moment, as we are still learning the basics of phase V. We expect to route DECnet phase V in the near future when we understand it better.

We are bridging all other protocols.

We installed a small FDDI network six months ago to connect the IBM 3090 to the Cray and our ethernet. This has been very successful and easy to set up. We therefore had expectations that this installation would be similarly easy. Most equipment has idiosyncrasies of one sort or another; the 3COM routers which we purchased were no exception, but once these had been dealt with the equipment worked well and is giving good service.

There is still quite a bit of work to do as we move the ethernet traffic to the FDDI network. We are also looking forward to connecting it to SuperJANET in the very near future. So far, we are impressed with FDDI, its reliability and ease of installation.

Paul Bryant, Communications and Small Systems Group