Issue 23: November 1992

Front cover image: Flagship Issue 23
© UKRI Science and Technology Facilities Council

The front cover shows the Alpha logo.

Alpha AXP at RAL

Hard on the heels of its official announcement by the Digital Equipment Corporation, an Alpha-based computer system, the DEC 7000 AXP model 620, was delivered to the Rutherford Appleton Laboratory's Atlas Centre in November. The Digital system adds a powerful superscalar computing resource to SERC's central computer facilities, augmenting the services already provided on the Centre's IBM 3090 and Cray Y-MP8I systems.

The Alpha system has the following configuration:

Model: DEC 7000 AXP 620
Processors: Two
Memory: 256 Mbytes
Disks: 28 Gigabytes
Interfaces: FDDI, DSSI disk and tape interface, Ethernet, interface to StorageTek ACS
Operating system: OpenVMS Alpha Version 1.0

The arrival of the Alpha system marks the start of the second phase of a programme to provide the SERC community with an enhanced scalar processing capability well in excess of that currently provided by the Laboratory's IBM 3090 mainframe. The demand for time on a scalar processing facility at RAL is predicted to increase by a factor of three to four over the next few years, and with the IBM 3090 already saturated, alternative ways to satisfy this extra demand had to be found. Early in 1992, it was decided that the most cost-effective approach would be to adopt new RISC-based technology, and the Alpha purchase is the result.

The first phase of the programme started in September 1992 with the installation at the Atlas Centre of a Digital Equipment Corporation VAX 6000-620 dual processor system. This has about the same power as the two processors that have been removed from the IBM 3090 after the end of the second Joint Study Agreement with IBM. The intention was to replace the VAX with a similarly configured but more powerful Alpha-based system as soon as the Alpha was formally announced. The first phase was successfully completed by the end of October 1992 with the VAX 6000 fully operational and running at its maximum capacity. The agreement with Digital allows the VAX 6000-620 to run in parallel with the Alpha system for a period of six months from delivery of the Alpha. Its role beyond that date has yet to be decided.

Alpha itself represents the birth of a new era for Digital, bringing a change of architecture as fundamental as that which occurred when the VAX replaced the PDP11 family in the late seventies. The change then was from the limited 16 bit architecture of the PDP11 to the 32 bit VAX architecture with its rich and highly complex instruction set. The step now is from the 32 bit VAX architecture to Alpha's 64 bit architecture and a reduced instruction set (RISC). Alpha represents the latest generation of RISC design with superscalar multiple instruction issue. It runs with clock speeds as high as 182 MHz and offers impressive scalar processing performance. A set of benchmark programs representative of those currently running on the IBM 3090-600E has been run on an Alpha-based system, and the results confirm its performance capability. The benchmark work showed performance variations across the program suite far more extreme than those experienced on other platforms, with some programs running as much as six times faster on the Alpha than on other reference machines and others running at similar speeds. This variation is attributable to the architecture of the Alpha processor, whose performance is more sensitive to program design than is the case with older architectures. Digital themselves claim a 167.4 SPECmark89 performance for a single processor of a DEC model 7000 running with a 182 MHz clock.
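
A single figure such as a SPECmark is a composite: SPECmark89 is the geometric mean of per-benchmark speed ratios against a reference machine, so it can mask exactly the program-to-program variation described above. The short Python sketch below illustrates the point with invented ratios; these are not the RAL benchmark results or real SPEC data.

    # Illustrative only: hypothetical per-program speed ratios against a
    # reference machine, not the RAL benchmark results or real SPEC figures.
    from math import prod

    ratios = [6.0, 5.5, 1.1, 0.9, 3.2, 1.0, 4.8, 2.5]   # invented speedups

    geometric_mean = prod(ratios) ** (1.0 / len(ratios))
    arithmetic_mean = sum(ratios) / len(ratios)

    print(f"geometric mean (SPEC-style composite): {geometric_mean:.2f}")
    print(f"arithmetic mean                      : {arithmetic_mean:.2f}")
    print(f"spread across programs: {min(ratios):.1f}x to {max(ratios):.1f}x")

A composite of this kind is useful for comparing machines, but it is the spread across programs that matters when judging how a particular application will fare.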

This high scalar performance should benefit the scientific user community currently using the IBM 3090 system. The introduction of an Alpha-based system should, for example, enable the demand from the Nuclear Physics Board for substantially more scalar computing power over the next few years to be met at modest financial cost. It is also worth noting that Alpha systems may be beneficial to those users who are attracted by the high performance potential of SERC's vector supercomputers but who are unable to realise the full potential of these systems due to difficulty in fully vectorising their code. It is early days yet, but this class of user might well profit from a switch to the Alpha system.

Another attractive feature of an Alpha-based system is its ability to support either the VMS or the OSF/1 operating system. The first implementation at RAL will deploy VMS since this is a more mature system and is well known and popular with many potential users. The introduction of an Alpha-based OSF/1 service at some time in the future will be kept under review.

The move to Alpha will not be as transparent as moves between members of the VAX family. The Alpha is a RISC machine and it has a totally different instruction set from the VAX. It is not binary compatible with the VAX and, unlike the PDP11/VAX situation of many years ago, there is NO COMPATIBILITY MODE feature. VAX users will therefore have to PORT their programs to the Alpha. Enough experience has already been gained with Alpha systems to enable us to state with some confidence that porting is a relatively easy operation, and selected users in Central Computing and Particle Physics Departments at RAL have already ported a significant amount of code. An article describing early porting work appeared in FLAGSHIP 19 earlier this year. Thanks to a Digital and CERN collaboration, effectively the whole of the CERN library has been ported and is operational on Alpha; RAL are testing applications using this library. For the majority of applications, porting should reduce to a recompilation and linkage exercise. Any assembler-coded routines are best rewritten. A translator is available that will translate a VAX image file (.EXE) to an Alpha image file. Code from the translator runs, but at reduced speed compared with code produced by Alpha-specific compilers and assemblers. It should be remembered that modern compilers, particularly those aimed at superscalar RISC architectures, are extremely sophisticated and very efficient. Assembler language coding should be reserved for highly specific system level tasks.

It is worth noting that although binary code is not compatible, Alpha supports both F and G VAX floating-point formats and the movement of data from VAX to Alpha should not cause problems.

RAL staff have always provided a high level of support for the central computer services, and support for the Digital systems will continue this tradition. Until now, central support for Digital systems was limited essentially to the networking and graphics areas. This support can now be extended to cover a wider area, including VMS, layered products, applications, libraries, access to the Atlas tape store and bulk storage facilities, and of course Alpha porting issues.

Access to the VAX and Alpha systems should be relatively straightforward for all SERC users. The systems will be connected to the RAL FDDI local area network with external connections via Janet. IP connectivity will be available via JIPS as will X.29 terminal access via PADS. Those who are able can also connect via DECnet.

All SERC-supported personnel who are authorised to use RAL's Central Computing facilities and who are interested in establishing an account on the central Digital service should contact Resource Management. The use of the VAX 6000 machine is restricted to a selected set of users with CPU intensive applications, in particular those who are prepared to port their applications first from the IBM world to the VAX world and then to Alpha. If you feel strongly that you should be included in this set, please let us know.

The Alpha service has been open to trial users since the beginning of December, 1992. The initial service is slightly restricted, in that some aspects are still running on a field test basis and disk space is very limited. However, the service is solid enough for users to familiarise themselves with the system and to begin their porting activities. More disk space and a more robust version of the operating system will become available very early in 1993 and the system will be available to all users on a pre-production basis for all types of work including both program development and production work. Tape access and tape staging using the StorageTek ACS are also available.

The system is scheduled to become fully operational by April, 1993.

John Barlow, Head of Computing Services Division

1992

This time last year I mentioned that 1991 had been a period of relative stability in the computing services offered by the Atlas Centre, but I anticipated more rapid changes in 1992. The anticipation has become reality as this year has progressed and almost all aspects of our services are undergoing some kind of change for the better, I hope!

Perhaps the most visible, and in some ways the most straightforward, of the changes has been the replacement of the Cray X-MP/416 by the Cray Y-MP8I/8128. The Y-MP was brought into service at the beginning of October, some three months after the contract to buy the machine had been placed, and the X-MP was closed down in November. The changeover required a lot of effort by staff at Atlas, by Cray Research people and not least by users, and I would like to thank everyone who helped achieve what, at least from our perspective, seemed to be a fairly smooth transition.

The Y-MP's capacity is roughly three times that of the X-MP. As I write this in November, the Y-MP is already getting through more than twice as much work per week as the X-MP could handle and the loading is expected to build up further after the awards from the autumn grant rounds have been announced.

Some substantial and far reaching changes are taking place around the IBM 3090. During the year our second Joint Study Agreement with IBM came to an end and decisions had to be taken about the way forward on the provision of scalar computing capacity, the vector capacity largely having been taken care of by the new Cray.

Early in the year a detailed review was done by the particle physics community of its future needs for central computing facilities. This showed an ongoing need for centrally provided computing capacity and for very large managed data storage, the processing capacity rising about three-fold in the next two or three years. Other Boards have expressed a declining interest in continuing to use the IBM, but the capacity they might release is far less than the expansion required to cope with the particle physics growth. So the question arose of how best to provide the additional capacity at an affordable cost.

Our plan is to shift the emphasis in scalar computing towards equipment offering RISC technology price/performance. As a first step we have installed a powerful new facility which incorporates the new high performance Alpha chip that was announced this year by Digital. The article in this issue by John Barlow gives details of this new facility and of the ways in which it will be brought into service.

We are also looking at ways of economically storing and managing much larger volumes of data than we can deal with at present. This is an area of rapid change, as regards both the users' needs and the options for meeting them, and there is little doubt that dealing with data will be one of the key roles of a large computer centre in the future. An article by David Rigby in this issue outlines our present thinking.

Another key area will be high speed communications. During this year the Atlas Centre computers have been connected to new 100 Mbits/s FDDI communications equipment. In the first half of next year we expect that SuperJANET, the new very high speed national networking facility, will start to come into service, and via our FDDI connection it will become possible to move much larger volumes of data to and from the Centre. SuperJANET will open up new ways of working, particularly as regards the movement of pictorial data, and we look forward to being able to exploit the enhanced network in supercomputing and other applications.

So, overall the year has been a busy and forward-looking one in which we are gradually re-orienting many of our services to take account of changing user requirements and to take advantage of the opportunities offered by new technologies. I hope that next year we can consolidate our new facilities and build on them to the benefit of our users' projects.

Finally, let me offer seasonal greetings from the Atlas Centre to our users and suppliers, together with the very best wishes for 1993.

Brian Davies, Head of Central Computing Department

The Atlas Data Store

In July, an upgrade to the StorageTek automatic cartridge store was installed. This and other developments are the basis for a large managed filestore at the Atlas Centre.

The intention is to build a filestore which can be accessed conveniently not only from the large mainframes in Atlas, but also over the network from any other machine, speedily and without using unfamiliar commands. This data store is logically composed of two parts, the filestore itself and its interfaces to other machines, applications and users. These are discussed separately below.

The filestore is progressing well. It is principally managed by a subsystem currently running in the IBM 3090 VM system, and is already heavily used by the VM Transparent Tape Staging system to give rapid access to the contents of the Atlas tape library. It is designed to handle many terabytes (TB) of data, composed of files each ideally in the range 30-500MB or so, automatically using a hierarchy of storage media. The media used will change as new technologies emerge, but because at present they are the most cost-effective and appropriate at these scales, we are currently using the following.

  1. Disk for files currently (now or in the last hour or so) in use (and possibly a number of older files with low-latency access requirements). I/O to a file once loaded to disk can be random or sequential, at a speed of 2.4 MB/s per file (20 MB/s aggregate). From networked machines, the aggregate data rate is currently limited to a few hundred KB/s, though by Christmas we expect to be achieving speeds of 1-2 MB/s per file and 4 MB/s aggregate into the FDDI network by using an interface based on an IBM RS/6000. This will considerably improve performance over the network.
  2. The StorageTek ACS for all the recently-used files (in the last year or two). Each of the two "silos" in the ACS has 5946 storage slots, over 80% of which are available for this filestore. (The remaining capacity is used by the operating system filestore migration components of VM and UNICOS to store minidisks and files which have not been used recently. This allows the ordinary filestores of these systems to be much larger than the available disk space, in a manner transparent to applications and users.)

    The total capacity of the ACS is somewhere between 3.6TB and 4.7TB, depending on the lengths of cartridge tapes used. Cartridges with 807 feet of tape (331MB) have been used successfully for some time, and those with 1040 feet of tape (427MB) are now also being used without problems. There are eight tape transports (tape drives) in the ACS. Access time is about 20 seconds to mount and thread a cartridge, plus some time to wind to the start of the data; the worst case is 50 seconds to wind to the far end of the longest tapes. Data transfer rate is 2.6 MB/s through each of two independent paths.

    Experience so far with the data store is that in the ACS about 90% of the available tape capacity is usefully used, so there is room for about 4TB of files. Files are loaded to disk, ready for use, with a delay of 1-3 minutes or so, depending on the size of the file (though in practice the delays for applications are much less, because if more than one file is to be used, the access time of the next file can be overlapped with processing the current file). A rough worked estimate of these load times is sketched below, after this list.

  3. Manual tapes (probably 2GB DAT "Digital Audio Tape" cartridges) for older data (unused for a year or two) which will not fit in the ACS. Access to files held here will need a manual operator mount.
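
As a rough check on the load times quoted in item 2 above, the Python sketch below estimates how long it takes to stage a file from the ACS onto disk, using only the figures given in the text: about 20 seconds to mount and thread a cartridge, up to a further 30 seconds of winding (the 50 second worst case), and a transfer rate of 2.6 MB/s. The calculation is illustrative only and the results are approximate.

    # Rough estimate of ACS file staging time, using figures quoted in the text.
    MOUNT_SECONDS = 20.0        # mount and thread a cartridge
    MAX_WIND_SECONDS = 30.0     # worst case 50 s total, minus the 20 s mount
    TRANSFER_MB_PER_S = 2.6     # per data path

    def staging_minutes(file_mb, wind_seconds):
        """Minutes to load one file from cartridge to disk."""
        seconds = MOUNT_SECONDS + wind_seconds + file_mb / TRANSFER_MB_PER_S
        return seconds / 60.0

    for size_mb in (30, 100, 300, 500):   # the 30-500MB range the store is designed for
        best = staging_minutes(size_mb, 0.0)
        worst = staging_minutes(size_mb, MAX_WIND_SECONDS)
        print(f"{size_mb:4d} MB file: {best:.1f} to {worst:.1f} minutes")

For most of this range the estimate comes out between about half a minute and three minutes, in line with the 1-3 minutes quoted above; only the largest files, where the transfer alone approaches three minutes, run a little over.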

In addition, separate backup copies of all the files in all parts of the store will be stored securely in more than one place (probably on DAT and another medium).

Numbers 1 and 2 have been working for the last two years. Number 3 and the backup system are being developed now, and we expect them to start operation soon after Christmas.

The interfaces are also being developed, and we are keen to develop whatever interfaces would enable other machines, applications and users to access conveniently whatever data they wish to store in the Atlas Data Store.

For about two years, we have had a "virtual tape" interface in VM, which the VM Transparent Tape Staging system uses to allow applications and users to use data in the store exactly as if they were using a real tape on a real drive. More recently, we have extended this "tape-like" interface via the IP network to UNIX systems by using the Virtual Tape Protocol (see FLAGSHIP 13). So any application (user written, or, for example, TAR) on a UNIX system can read and write data in the Atlas Data Store as if on a local tape. A basic DEC/VMS implementation of VTP, which gives similar access from VMS systems, is also now available and will be extended.

But tapes are only one of many common ways of structuring and accessing data. UNIX file-systems (remote via NFS), VMS file-systems (remote via DECnet), CD-ROMs, DOS disks and many others all define data storage interfaces to applications and users which, in a similar way to the "tape" storage interface, could be mapped into the Atlas Data Store to give convenient access to large data storage equipment while retaining one's favourite interface. For example, a possible NFS server interface would allow PCs and UNIX systems to use the capacity of the data store as if it were a giant remotely mounted UNIX file-system; PCs could also use such an interface instead of floppy disks for backup; UNIX systems can use the existing tape-like interface for backup; and there are many other possibilities. We would like to know which interfaces we should develop.
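
As an illustration of the kind of access such an interface would give, the Python sketch below backs up a local directory into the data store as if the store were an ordinary remotely mounted file-system. The mount point /datastore is an assumption made up for this example; no such NFS interface exists yet, and the point is simply that standard tools would work against it unchanged.

    # Hypothetical example: back up a directory into the Atlas Data Store via an
    # assumed NFS mount at /datastore. The mount point is invented for
    # illustration; ordinary tools (here Python's tarfile) need no changes.
    import tarfile
    from datetime import date
    from pathlib import Path

    source = Path.home() / "results"              # directory to back up (example)
    dest = Path("/datastore") / "backups"         # assumed NFS mount of the data store
    dest.mkdir(parents=True, exist_ok=True)

    archive = dest / f"results-{date.today():%Y%m%d}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(str(source), arcname=source.name)  # files stored under "results/"

    print(f"wrote {archive} ({archive.stat().st_size} bytes)")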

If you have a data storage or backup problem which may be appropriate for the kind of data store described here, we would like to work with you to put your data into it with an appropriate interface (possibly VTP) so that it can be used conveniently from the machines you wish to use to process it. For example, a complete copy of the EISCAT project data (originally on 1800 9-track tapes) is now stored in the StorageTek ACS and can be accessed from UNIX workstations via VTP. Let us know if you have a data storage requirement. Maybe we have a solution.

David Rigby, Head of Systems Group

Supercomputing - The Way Forward?

Report on a Town Meeting held at the Royal Institution on September 24 1992

A Town Meeting organised by the Supercomputing Management Committee (SMC) of the SERC and the Information Systems Committee (ISC) of the UFC was held at the Royal Institution on Thursday 24 September 1992 to address the topic "Supercomputing - The Way Forward?". Professor P Day FRS, Director of the Royal Institution, welcomed some 200 delegates representing Universities, Research Councils, Computer Centres, Computer Suppliers and the Press. He noted that part of the Royal Institution's 1799 mission statement was to "diffuse knowledge and facilitate the introduction of useful mechanical inventions and improvements" which he felt was quite apposite to the day's proceedings.

Introduction

Professor P G Burke FRS, Chairman of the SMC, set the scene for the day by outlining the new management structure for Supercomputing in the academic community. He noted that the lead responsibility for Supercomputing had passed from the Computer Board to the Advisory Board for the Research Councils. He detailed the functions of its policy sub-committee, which advised on the overall strategic funding, and the working groups of the SMC which advised on the programme, on scientific requirements, on technical options and on procurements.

He went on to describe the present Supercomputing facilities which comprise the new Cray Y-MP8I/8128 at the Atlas Centre, the Convex C3840 at ULCC and the Amdahl VP 1200 at MCC. In addition, there are two massively parallel machines, the Connection Machine CM200 at Edinburgh and the Intel iPSC/860 at Daresbury, which are being used for specialised applications such as lattice gauge theory and molecular dynamics.

Finally, he identified the areas that needed to be considered by the meeting in looking at the way forward. These were: upgrades to the National Centres, including massive data storage devices and visualization tools; provision of parallel computing as identified by the Technical Options Group; scientific and industrial requirements; software support, with particular emphasis on parallel computing; high performance communications such as SuperJANET; and initiatives in Europe and the US, for example the European Teraflops Initiative (ETI).

The New Cray Y-MP

Dr B W Davies, Director of the Atlas Centre, described the timetable leading up to the launch of a production service on the Cray Y-MP, details of the hardware, and plans for migration from the X-MP. He emphasised that the peak performance of the Y-MP was 2.7 gigaflops compared with 0.9 for the X-MP, with memory increased by a factor of eight and disk capacity by five. He went on to talk about the planned applications packages including new ones such as Gaussian 92, Unichem and Cray's Multi-Purpose Graphics System. He explained how the extra capacity could be utilised in the short term before peer-reviewed grants could filter through the system. This will be achieved by allowing accelerated usage by existing grant holders coupled with provision for further applications within the grant period for computing time only.

Finally, he put forward the view that the peer review system should be more selective in allocating time on the national supercomputers. For example, at present the Atlas Centre has 200 projects using the Cray. Some projects use very large amounts of time while others use much less, but on average the usage per project is equivalent to half a percent of a supercomputer. This average is too low; if a project requires the equivalent of only half a percent of a supercomputer there are probably better ways of meeting its needs. The national machines should be reserved for the very large projects which really need them.

Scientific Highlights - Recent and Future

Professor C R A Catlow, Royal Institution and Chairman of the Scientific Working Group, gave an overview of the recent scientific achievements and future opportunities associated with high performance computing. He identified two key characteristics of supercomputing science: it is employed in the simulation of very complex real systems and problems and it is now integrated with other theoretical and experimental methods. To illustrate the former he mentioned his own field of condensed matter studies which had been concerned with simple solids such as sodium chloride in the 1970's but had now progressed to complex substances such as proteins, polymers, liquid crystals, superconductors and industrial catalysts. The challenge in this field is to move forward towards prediction.

After describing some other recent supercomputing highlights which illustrated the latter, Professor Catlow went on to outline the six key application areas which the Scientific Working Group had picked out as being typical of the way in which supercomputing is going. They are detailed in the Group's report, which was available at the meeting, and cover climate change, global modelling, modelling of macro-molecules (especially biological systems), lattice QCD, computational fluid dynamics, simulation of materials and the data management associated with the human genome project.

Finally, he returned to the theme of what supercomputing is really about: complexity and realism. Many of the problems being tackled scale non-linearly and the increase in computing requirements is far greater than the increase in the size of the problem being simulated. We need two or three orders of magnitude more in overall performance over the next seven or eight years. We also need very high bandwidth communications, huge data storage facilities, high quality visualization techniques and massively parallel systems, although there will be a continuing need for vector supercomputers.

There were three strategic issues that needed further discussion. Firstly, that the very large supercomputer facilities should only be used for the kind of science that really needs them, the point Dr Davies had made earlier. However, this in turn meant that there must be adequate funding for workstations to do the science for which they are more cost effective. Secondly, for the proper exploitation of massively parallel systems there must be a substantial investment in manpower, particularly for software development and implementation. Thirdly, it should not be forgotten that supercomputing is an important industrial technique and we should be encouraging joint industrial and academic schemes in development and application.

Technical Options: 1990 - 1995

Professor G A O Davies, Imperial College and Chairman of the Technical Options Group, began by giving an overview of his Group's work which had led to the procurement of the Cray Y-MP. He described the funding for supercomputing, noting how far behind the US and Japan we are in the UK. He also drew attention to the fact that national supercomputer usage does not match the Research Councils' grant spend: environmental scientists, for example, use much more supercomputing than their share of research funding would suggest, and engineers much less. The procurement cycle which resulted in the Cray Y-MP ran through the stages of request for information, operational requirement, short-listing, benchmarking and best and final offers, through to selection and debriefing.

It would be nice to go through a similar exercise for the massively parallel machine though it is more complicated because of the wide range of machine characteristics. The needs of different application areas may well mean that more than one machine is required. Also, the number of potential vendors is increasing all the time. The plan is to design the operational requirement by the end of 1992, to put together a package of benchmarks between now and next February, and to complete the whole procurement exercise by mid-1993.

There is a belief that machines comprising thousands of cheap processors are difficult to program or port to and that shared memory machines, which are much easier to handle, do not scale up. However, manufacturers are aware of these two fears and the expectation is that machines will be designed with virtual shared memory to make the programming easier, aided by various hardware and software tricks. There may be machines which do not fall into the massively parallel or vector category but are hybrids in which the distribution between local and shared memory will happen during the execution of a program. They will also have fast searching and message passing, high performance Fortran 90 compilers with automatic parallelisation and much third party software. One cannot select machines of this type just on a gigaflops per pound basis. User friendliness is a key issue, and users will also want upgradability, portability and stability of software.

UK Parallel Computing and the Role of the JISC

Professor A J G Hey, University of Southampton and Chairman of the ISC Parallel Centres Steering Committee, covered three main areas in his talk: the present support for parallel computing, the ISC initiatives in high performance distributed systems (HPDS), and software environments for parallel computers (SEPC) and finally high performance computing in general. In describing the present scene he ran through a list of initiatives ranging from the SERC parallel machines at Daresbury and Edinburgh to the DTI involvement with transputers and ESPRIT.

The HPDS initiative followed up one of the less well known recommendations of the Forty report. It involved fifteen university sites, £1M of capital for equipment (largely transputer-based systems) and one support post at each site. The SEPC programme was based at seven sites with £600K capital. These initiatives were managed by the Parallel Centres Steering Committee. In the first few years the hardware and software of these parallel systems were in a very primitive state and it was difficult to run a sensible user service. Some sites did extremely well and were expanded, others fell by the wayside. The main lesson to be learnt from this activity was that it is better to focus on a small number of skill centres with a brief to disseminate skills and software to other HEIs in their vicinity and all over the UK.

Professor Hey went on to describe his vision of the future, in which modestly parallel super workstations would permeate engineering, university and industrial sites. Systems of the order of 10 gigaflops would be available "just down the corridor". The application areas that he felt would be most important are engineering design, business products and real time embedded systems. He gave several examples including car crash simulation, geographical information systems and fault tolerant engine controls. A natural conclusion from this is that industry will need graduates who are trained in the use of state-of-the-art parallel hardware and the development of parallel codes. Consequently a high performance computing culture needs to be developed at all the HEIs, with relevant systems provided on campus. The role of the JISC (Joint Information Systems Committee of the Higher Education Funding Councils, which will replace the ISC from 1 April 1993) is to foster the development of well managed computing facilities in the HEIs and to encourage the adoption of new techniques by computing services.

Finally, Professor Hey stressed the importance of a national strategy to co-ordinate the three main strands of high performance computing, namely HPC research funded by SERC, industrial applications funded by the DTI and the development of HPC skills funded by the JISC. Although diversity has its merits, there is a great danger that the lack of such a national strategy will lead to chaos, over-funding, duplication and wastage of valuable resources, none of which we can afford.

Following Professor Hey's presentation, Professor H Liddell, Queen Mary & Westfield College and the London Parallel Applications Centre, said a few words about the work of the Systems Architecture Committee. She also described the portable software tools programme which it is hoped to start next year.

High Performance Computing and Networking in Europe

Professor D J Wallace, University of Edinburgh and Chairman of the SERC Science Board, gave a perspective on HPC and networking initiatives in Europe in the context of what is going on in the US and Japan. As background he began by describing the National Science Foundation's HPCC programme in the US and the Real World Computing Initiative, involving MPP and neural networks, in Japan. He detailed the sites, the hardware and the funding associated with these programmes.

Turning to Europe he focused on ESPRIT, the Rubbia Report and the European Teraflops Initiative. The Rubbia proposals were very similar to the US federal programme, and the main recommendation of the February 1991 report was that funding of the order of 1 billion ECUs per annum would be needed for the whole of a coherent programme in HPC. The ETI was somewhat different because it arose from the demand for teraflops systems from research scientists. Another report of significance is the Ei=3 report which has origins in industry but specifically excludes scientific supercomputing from its areas of interest. Professor Wallace's view was that both of these should be seen as pillars to support the proposals on the implementation of the Rubbia report, which were due to be announced shortly by the European Commission.

The final topic was the EC Human Capital and Mobility Programme which is designed to increase the human resources available for scientific and technological development. It focuses on access to large scale facilities for training and research purposes.

He concluded that high performance computing and networking are accepted as strategic technologies for Europe. The Rubbia report has identified the requirements and made recommendations on how to implement this strategy. It has the support of scientific and industrial groups and is proposing a co-ordinated programme which involves the Commission, industry and national government funding.

Industry and High Performance Computing

Professor P Stow, Rolls-Royce and Member of the ABRC Supercomputing Subcommittee, talked about the role of HPC in industry with particular reference to turbo machinery and CFD. Using examples from Rolls-Royce he explained how complex and expensive the design and development of a piece of engineering such as an aero-engine has become. He illustrated his talk with slides showing how CFD is being applied to all the major component areas such as turbine blades, heat transfer, exhausts, nozzles, etc. In the past there would have been heavy reliance on experimental testing, but now the process is biased towards mathematical modelling of the physical processes involved. Computer aided design allows more exact modelling of each stage and faster iteration through all the stages of a design process. The big impact of supercomputers is that they are cutting down design times and development costs in a very competitive business.

However, initial mesh generation and final processing are done interactively on a workstation and this integration of workstations with supercomputers is an essential part of the high performance computing requirement. Education and training in these areas must be provided and there is considerable scope for academic input and leadership.

UK Network Plans and their impact on Supercomputing

In the final talk of the day, Dr G Manning, Chairman Designate of UKERNA, described the proposed new company, the plans for SuperJANET and the impact these could have on supercomputing. He said that UKERNA was to be a non-profit making company limited by guarantee with, hopefully, Scientific Research Association status which would mean tax exemption. Its functions are to operate the existing JANET network, to implement SuperJANET and to provide a national role in networking. He went on to describe the proposed management structure, the funding arrangements and the status of the negotiations for formal approval. There is still a need to obtain permission from the DFE and the Treasury and it is proposed to engage an external consultancy to draw up a document which sets out the reasons for and advantages of the proposed new structure. He hoped that this would be complete by November.

Dr Manning then turned to SuperJANET, which will be an optical fibre network operating at one gigabit per second for the UK academic research community. The timescale is 1993-97 with a sum of £20M envisaged. He explained why initially it would be based on twelve sites, with six of them involved in discussions of application areas. A tender exercise was carried out with an RFI sent to nineteen companies, of which three were short-listed to receive an operational requirement. The replies to the tender are being evaluated and it is hoped to have a pilot network in place by the end of the financial year.

For applications there was a lot of overlap between the requirements of the six sites, and volunteers have been asked to bring forward proposals in ten areas. Dr Manning gave examples of five of these. The first is a user at one site using machines of two different architectures at two other sites on a single task. A second area is remote access to scientific facilities such as ISIS and CERN. A third is concerned with textual and pictorial record retrieval, document distribution, and access to libraries. Fourthly, there is distance learning involving video and sound, and finally remote consultation, where an example might be a pathologist giving advice across the network instead of having to be present physically.

Finally, Dr Manning mentioned the opportunities that he expected SuperJANET to offer to supercomputing. These included the transfer of large quantities of data, remote visualization, remote interaction and use of multiple machines on a single task. He expected that UKERNA would allow access to the network by companies which are engaged in collaborative research and eventually other companies. This would create interest and opportunities for industry to make use of supercomputers.

Discussion Session

The questions and comments that arose during the talks and in the final discussion session chaired by Professor Burke were largely concerned with matters of clarification. As regards massively parallel machines, some groups were ready now to exploit more powerful machines on their own specific applications. It remained to be seen whether any one type of machine would be able to cope with a wide range of different kinds of application or whether a number of specialised machines would be better. It was made clear that the technology is still in its infancy but it was noted that several of the major computer suppliers are now beginning to move into the area. Operating systems are being developed to support virtual shared memory which many people feel is the way forward, but on the other hand, major applications packages are not yet widely available on parallel architectures. It was suggested that parallel systems might have to be used even more selectively than conventional systems, but nevertheless there was already a sufficient portfolio of suitable applications to justify the acquisition of parallel equipment.

Overall, the town meeting gave a very good overview of the current state of Supercomputing in the UK and the options for the future. Given the nature of the subject and the speed with which it is developing, the meeting could not have been expected to produce complete and clear cut answers to the question "Supercomputing - The Way Forward?", but it did bring out many of the pertinent points.

Successful Migration to the Cray Y-MP

On the afternoon of Friday 2 October, the Cray X-MP/416 at the Atlas Centre stopped accepting new work. This was the first step in actual migration to the Cray Y-MP, although the planning and preparation had taken many months. During Friday evening and night, the executing jobs were allowed to run to completion, leaving the X-MP empty by Saturday morning.

The plan had been to move all the networking connections and the users' data and be ready to start work on Monday morning, but a disk problem postponed this to late Monday afternoon. After one night running user work, the disk problem recurred and most of Tuesday's work was lost, but by late Tuesday afternoon everything was in place again and the production service restarted. There was one more hiccough that week when it was observed that disk access to the users' permanent file-system was very slow, causing interactive sessions to become unusable, but this was fixed by introducing caching of that file-system into memory.

Overall the migration seemed to be an easy one for our users. Most of the difficulties were for those users who had some special configuration to use the pre-migration trial system; the average user had few, if any, problems.

Some work needed to be done by users, as user libraries had to be re-compiled, but this was straightforward. Since the Atlas Centre UNICOS system has only been in use for three years we did not find that users had lost their source code in the mists of time. The only lapse in our planning was over a Cray-supplied library that needed to be specially ordered and had been overlooked in our review of software in use. This caused a few users to continue with the X-MP for a couple of weeks while it was obtained, but all use of the X-MP has now stopped and it was powered off for the last time on 3 November.

We do appreciate the patience shown by all Cray users during this migration and hope that they are now ready to exploit the extra resources available.

Now that the migration is complete we encourage users to take account of the extra resources (CPU, memory, disk) available when planning their research and making grant applications. The maximum amount of work will be squeezed from the Y-MP if user programs are multi-tasked (i.e. run on more than one CPU). The latest Cray Fortran compiler can detect parallel parts of programs and generate the code to execute them on several CPUs. This is called autotasking; it gives the benefits of parallel execution without changes to the source code. We encourage all users to try autotasking and we will be providing courses in its use. Contact User Support (us@uk.ac.rl.ib) for more information.
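
Autotasking itself is done entirely by the Cray Fortran compiler, with no changes to the source. The Python fragment below is only a loose analogy, written out by hand, of the underlying idea: independent loop iterations handed to several CPUs. It is not the Cray mechanism, and the function is invented for illustration.

    # Conceptual analogy only: independent loop iterations spread over several
    # CPUs, which is what autotasking arranges automatically on the Y-MP.
    from multiprocessing import Pool

    def one_iteration(i):
        """Stand-in for the body of an independent loop iteration."""
        return i * i

    if __name__ == "__main__":
        with Pool(processes=4) as pool:           # e.g. four CPUs
            results = pool.map(one_iteration, range(1000))
        print(sum(results))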

John Gordon, Head of Applications and User Support

Software on the Y-MP

The following software has been implemented on the Cray Y-MP. On-line information on how to access these packages is being accumulated in the following directories:

Cray Y-MP:
/home/ymp8/us/app1/README
RS/6000:
/home/unixfe/us/app1/README

where app1 is the name of the package. If the help you require has not yet been implemented, please contact User Support.

All packages are now available except where indicated: see key for details.

Packages

ABAQUS: finite element analysis v 4.9
AMBER: Molecular Mechanics/Dynamics v 4.0
CADPAC: ab initio quantum chemistry Release 5
CRYSTAL 92: ab initio quantum chemistry package for periodic systems
FLOW-3D (Restricted Access): Computational Fluid Dynamics Release 3.1.2
FLUENT: Computational Fluid Dynamics v 4.11
GAMESS: ab initio quantum chemistry
GAUSSIAN 90: ab initio and semi-empirical quantum chemistry
MPGS (Requires Silicon Graphics Hardware): Multi Purpose Graphics System
PHOENICS: Computational Fluid Dynamics v 1.6
REDUCE: Symbolic Algebra Manipulation v 3.4
STAR-CD (Restricted Access): Computational Fluid Dynamics
UNICHEM: Quantum Chemistry Environment (including CADPAC, DGAUSS, MNDO90)

Libraries

AIT: Applications Integration Toolkit (Data Transfer Routines)
BCSLIB: Boeing Mathematical/Statistical Library
Crayfishpak: Numerical Analysis (PDE solver) Library
CVT: Cray Visualization Toolkit
GHOST: Graphics Library
GKS: Graphics Library
HSL: Harwell Subroutine Library
IMSL: General Purpose Maths and Stats Library v 2.0
NCAR (Restricted Access): Graphics Library
NAG: Numerical Analysis Library Mk 14 (Mk 15 available soon)
NAG: Graphical Supplement Release 3
UNIRAS: Graphics Library v 6.3A

For more information on any of the above packages or libraries, contact User Support by electronic mail at us@uk.ac.rl.ib

Chris Plant, Applications and User Support Group

SERC acquires AVS

SERC has just ordered the Application Visualization System (AVS) under the terms of a CHEST deal. AVS is a system that allows users to visualize data; it consists of a very large number of modules (to which users can add others) and a visual editor for constructing the processing network for the data. As a result of this purchase, SERC users will have the leading visualization system available on a wide range of computers.

Background

The Advisory Group on Computer Graphics (AGOCG) commissioned an evaluation of visualization systems which started in June 1991. The systems evaluated were AVS 3, apE 2.1 and Khoros 1.0, and the evaluation consisted of a paper study, a number of case studies and a usability study carried out on each system. The evaluation was completed in January 1992 and the recommendations were to promote the use of visualization systems via AVS while reviewing the situation in late 1992. More information can be found in the technical report detailing the evaluation, which is now available as AGOCG Technical Report 9.

What is AVS?

AVS is a system which allows users to visualize their data by constructing applications from a series of software components called modules, each of which performs a specific task.

AVS provides a number of generic data types onto which application data can be mapped and imported into the AVS system. These data types provide support for images, arrays of data, geometry and chemistry fields. There is also support for unstructured data where data is associated with discrete objects. This is used to import the results from applications such as finite element analysis. A key feature of the system is that it is extensible by the user, since additional modules can be written, in C or FORTRAN, and integrated into the system. This has encouraged AVS users throughout the world to set up a repository of contributed modules, greatly expanding the original set of modules. Details of this repository will be published in a future FLAGSHIP article.
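
To give a flavour of the processing-network idea, the Python sketch below chains three single-purpose "modules" so that the output of each feeds the next. This is purely conceptual: it is not the AVS API, the module names are invented, and real AVS modules are written in C or FORTRAN and wired together in AVS's visual network editor.

    # Conceptual sketch of a processing network: data flows through a chain of
    # single-purpose modules. Not the AVS API; module names are invented.

    def source():
        """Import module: stand-in for reading application data."""
        return [0.5, 2.5, 3.0, 1.0, 4.5, 2.0]

    def threshold(values, minimum):
        """Filter module: keep only values at or above a threshold."""
        return [v for v in values if v >= minimum]

    def render(values):
        """Display module: a crude text 'rendering' of the result."""
        for v in values:
            print("#" * int(v))

    # Wiring the network: source -> threshold -> render.
    render(threshold(source(), minimum=2.0))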

Platforms Supported

SERC has ordered AVS for the following machine/operating system combinations:

This is effectively the entire range of platforms covered by the CHEST deal, apart from Stardent Vistra and Cray Y-MP/EL. These were omitted because it was not thought that any SERC users used these platforms.

Distribution

In order that SERC is regarded as a single site by CHEST, all distribution is to RAL. CCD Graphics Group will then distribute the media to people responsible for each platform, who will then distribute it to all users of that platform who request a copy. Contact between SERC and AVS will be via staff in Central Computing and Informatics Departments at RAL, although others will certainly build up expertise and provide informal support.

Documentation

AVS is extensively documented. One complete set of documentation will be provided for each platform with the initial media distribution. RAL will send this to the person responsible for each platform at the same time as the media. Additional copies of the documentation are available, under the CHEST deal, from Manchester Computing Centre (MCC); details of the documentation and MCC's charge for them have not been received, but will be published in FLAGSHIP as soon as they are.

The world-wide interest in AVS is reflected in a news group (comp.graphics.avs) and an anonymous FTP site for the repository of contributed modules. AGOCG's Visualization Coordinator, Steve Larkin at MCC, is organizing AVS introductory courses.

Chris Osland, RAL CCD Graphics Group

Back cover image: Flagship Issue 23
© UKRI Science and Technology Facilities Council

By Anne Zorner
