Issue 08: March 1990

Flagship Issue 8 (front cover)
© UKRI Science and Technology Facilities Council

The front cover is a still from William Latham's second film The Evolution of Form currently being generated on the IBM 3090-600 at the Atlas Centre. The originals are in full colour and of very high resolution.

The End of MVS as we know it

In 1984, MVS replaced MVT as the major batch operating system for IBM computers at RAL (MVT had been running since 1969!). MVS continues to be IBM's most used operating system: new IBM developments in software and hardware are usually implemented first in MVS. For that reason, amongst others, we have kept it running, although most users have moved to CMS, and had even planned an upgrade to MVS/ESA so as to provide extra facilities for users.

These plans have proved to be impractical in the face of decreased demand for the MVS service. By the start of 1990, use of MVS had dropped to less than 1% of the IBM 3090-600E workload. At this low level, the costs of licence fees, disk space, and labour are not justified by the work achieved. Thus, we can only conclude that the MVS service is no longer viable, and we have planned a gradual rundown of MVS by the end of 1990.

In doing this, we are following the trend of the worldwide academic community. Despite its popularity elsewhere, within the academic community MVS is being dropped in favour of VM/CMS. On BITNET, more IBM and IBM-compatible machines run VM than MVS. In the HEP community, only DESY still runs MVS.

In the UK's academic community, there are at least twelve machines running VM/CMS. The University of London Computer Centre has announced the rundown of its MVS service, while Manchester has dropped its general MVS service altogether (Manchester still retains MVS for use on its Amdahl VP1100 and VP1200 supercomputers). Cambridge alone of the UK universities still runs a general MVS service.

We recognise that it will take the remaining MVS users at RAL some time to migrate their work to CMS or other services. It was with this in mind that we created our rundown schedule. Over the next couple of months, we will be contacting all remaining MVS users, to assess their needs and to help them plan their migration.

We plan to do all we possibly can to make this transition as painless as possible. This includes the provision of any new software required, migration courses, and personal assistance. However, please keep in mind that leaving migration until the last minute will spread our resources very thinly; plan to do it sooner rather than later!

If you have strong views about needing MVS, don't wait for us to contact you. If you get in touch now, perhaps we can come to a satisfactory agreement.

John Gordon, Applications and User Support Group

Sculpting with Pixels

The front cover of this issue and the illustrations in the article Sculpting with Pixels are stills from William Latham's second film The Evolution of Form, currently being generated on the IBM 3090-600 at the Atlas Centre. The originals are in full colour and of very high resolution.

For the last few months, the IBM mainframe at the Atlas Centre has been producing beautiful and intriguing images, partly regular and yet related to the natural forms that fascinated Constable and Ruskin. William Latham has been sculpting with pixels. Using highly flexible software tools developed by the IBM Scientific Centre and the power of the IBM mainframe at Atlas, he is composing a continual metamorphosis in a film entitled The Evolution of Form.

William Latham's official job title is Artist in Residence at the IBM Scientific Centre in Winchester. He trained as a printmaker, sculptor and animator at the University of Oxford and accepted the post of Research Fellow with IBM in order to pursue the latter two aspects. But his medium is not bronze or marble: rather film and video. The sculptures he creates are models in the computer and are only made visible by drawing them on a screen or recording them onto film or videotape. He must be one of the few sculptors that starts by constructing his model!

In 1988, William finished the first of his works in this new medium, a six minute film entitled The Conquest of Form. This was the centrepiece of an exhibition shown in many art galleries around Britain, including the Arnolfini Gallery in Bristol. The rest of the exhibits were stills from the film and explanations of the way the film had been made. The Conquest of Form was also shown at SIGGRAPH'89.

Since September 1989, William has been working at the Atlas Centre, producing the sequel. His work had grown sufficiently in complexity that he required a machine of the 3090-600E's power; as a result, RAL and IBM formed a collaboration to allow William to continue his work at RAL and to investigate the potential for vectorizing the software he was using.

William Latham: The Evolution of Form
© UKRI Science and Technology Facilities Council

William's second film - The Evolution of Form - is scheduled to be around seven minutes (10,000 frames) long. Each frame, as in The Conquest of Form, shows a sculpted object illuminated by a number of lights. Unlike his first film, not only is every view different, but the geometry and texture of the model also change continually. The result is difficult to describe but has elements of both a kaleidoscope and a mobile. The components of the model can pass smoothly through each other, providing a non-realistic counterpoint to the inherent plausibility of the objects themselves.

The figures in this article and the front cover of this issue of FLAGSHIP are stills from The Evolution of Form, which won William first prize in the Research category at the IMAGINA animation festival in Monte Carlo earlier this year. Both films are due to be shown in an exhibition at the Natural History Museum, London from June to September this year, reflecting the close relationship between the forms William sculpts and natural forms.

To have referred to his sculpture as an object was a convenient shorthand, but it does no justice to the complexity of the models which, as can be seen from the pictures, have many hundreds or thousands of components. These are defined using a system called ESME (Extended Solid Model Editor) which creates models suitable for WINSOM (the WINchester SOlid Modeller). Both of these systems have been developed by the IBM Scientific Centre to facilitate the manipulation and rendering of solid models.

WINSOM has its origins in the display of complicated molecular structures. It supports a wide range of primitive geometric objects - BLOCK, CONE, CYLINDER, ELLIPSOID, HELIX, TETRAHEDRON, TORUS - as well as a contour shell of a 3D field. These objects may be combined repeatedly using set operations and blending functions to produce ever more complex objects. The intensity, colour, position and focus of countless lights may be defined, as can the position and direction of the virtual camera.
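
The constructive solid geometry idea described above can be sketched very simply: primitives are represented as functions over 3D space and combined with set operations. The sketch below, in Python/NumPy, uses signed distance functions for this purpose; the function names and the distance-field representation are illustrative assumptions only and are not ESME or WINSOM code.

    # A toy constructive solid geometry model: primitives as signed
    # distance functions (negative inside the solid), combined with set
    # operations.  Purely illustrative; not ESME/WINSOM code.
    import numpy as np

    def sphere(centre, radius):
        c = np.asarray(centre, dtype=float)
        return lambda p: np.linalg.norm(np.asarray(p, float) - c, axis=-1) - radius

    def block(centre, half_size):
        c = np.asarray(centre, dtype=float)
        h = np.asarray(half_size, dtype=float)
        def dist(p):
            q = np.abs(np.asarray(p, float) - c) - h
            outside = np.linalg.norm(np.maximum(q, 0.0), axis=-1)
            inside = np.minimum(np.max(q, axis=-1), 0.0)
            return outside + inside
        return dist

    # Set operations on distance fields build more complex objects.
    def union(a, b):        return lambda p: np.minimum(a(p), b(p))
    def intersection(a, b): return lambda p: np.maximum(a(p), b(p))
    def difference(a, b):   return lambda p: np.maximum(a(p), -b(p))

    # A block with a spherical bite taken out of one corner.
    model = difference(block([0, 0, 0], [1, 1, 1]), sphere([1, 1, 1], 0.8))

    # Negative values lie inside the composite solid.
    points = np.array([[0.0, 0.0, 0.0], [0.9, 0.9, 0.9], [3.0, 0.0, 0.0]])
    print(model(points))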

William Latham uses fractal techniques to generate texture maps used to alter the appearance of objects in the model. WINSOM's texture mapping is unusual in that it is a 3D mapping, fully permeating the object; this ensures correct results when objects are sliced.
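
The effect of such a solid (3D) texture can be conveyed with a simple stand-in for the fractal techniques mentioned above: the texture value is computed from the 3D point itself, so a sliced object shows a consistent pattern throughout. The function below is an illustrative sketch only, not WINSOM's texture mapping.

    # A cheap "solid" texture: a sum of sinusoids at doubling frequencies
    # and halving amplitudes, evaluated directly at 3D points.  Because it
    # depends only on position, slicing an object exposes the same pattern
    # inside as on the surface.
    import numpy as np

    def solid_texture(points, octaves=4):
        p = np.asarray(points, dtype=float)
        value = np.zeros(p.shape[:-1])
        freq, amp = 1.0, 1.0
        for _ in range(octaves):
            value = value + amp * np.sin(freq * p).sum(axis=-1)
            freq *= 2.0
            amp *= 0.5
        return value

    print(solid_texture([[0.1, 0.2, 0.3], [1.0, 1.0, 1.0]]))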

William Latham: The Evolution of Form
© UKRI Science and Technology Facilities Council

Each frame in the final film is produced at least twice. The first time it is produced at low resolution (512 or 1024 pixels square) and recorded onto videotape at Atlas and/or Winchester. This provides a preview of the whole film and allows William to check that the model is developing in the way he wants. The final run will produce high resolution images (2048 or 4096 pixels square) which will be recorded onto 35 mm colour negative film. A Matrix QCR film recorder with Oxberry pin-registered camera has been loaned by IBM to the Atlas Centre for the making of this film. It will be controlled by the Topaz computer which, until recently, was part of the RAL Video System. (The video system now has a new Topaz processor.)

Every frame in the final film is rendered using ray-casting and then anti-aliased, requiring between 2 and 10 minutes of CPU time per frame. At the resolution required for the film recorder, each frame requires about 4 megabytes, the whole film (10,000 frames) around 40 gigabytes. Luckily this does not all need to be online at once - as each sequence is produced it goes onto film and the files can then be deleted. In fact, all the files will be preserved on Exabyte tapes (about twenty of them) since the sequence provides an invaluable source of material for evaluating future systems such as High Definition Television (HDTV).
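
A quick back-of-the-envelope check of these figures, assuming the standard 24 frames-per-second film rate (the article itself only quotes the totals):

    # 7 minutes at 24 frames/second, about 4 MB per frame (figures from
    # the article; the 24 fps film rate is an assumption).
    minutes = 7
    frames = minutes * 60 * 24            # about 10,000 frames
    mb_per_frame = 4
    total_gb = frames * mb_per_frame / 1000.0
    print(frames, "frames, roughly", round(total_gb), "GB in total")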

The IBM WINSOM software is not a formally released IBM program product but under the terms of RAL's agreement with IBM it may be used for research purposes on the 3090-600E at Atlas. We have developed a viewing facility for WINSOM images that runs on the Silicon Graphics IRIS 3130 and output may also be previewed from the Abekas A60 video disk (see article on the new Video Facility in the next FLAGSHIP). If you have an application for a Constructive Solid Geometry system like WINSOM, please contact me on 0235-44-6565.

I would like to thank William Latham for his assistance with the preparation of the text and figures of this article. If you are interested in more details of both the software and William's use of it, there are two interesting articles in the IBM Systems Journal, Vol 28, No.4, 1989.

Chris Osland, Head of Graphics Group

SUP'EUR in Geneva

This article is reprinted, with permission, from the January 1990 edition of Supercomputing

The third meeting of the SUP'EUR User Group was held from 19-22 September 1989 in the Holiday Inn, Geneva, hosted by CERN, the European organisation for Particle Physics. Following the custom of previous meetings, this too was held in an unfinished building but, apart from a couple of attempts by workmen to drill into lecture rooms, the facilities were excellent.

The purpose of SUP'EUR is to assist European IBM supercomputer users and support staff in academia and research to make the best use of their hardware and software by exchanging their experiences and problems and presenting their common needs to IBM. Previous SUP'EUR meetings were held as part of conferences or symposia; this meeting was a conference in its own right, allowing SUP'EUR to present itself to potential members. There were 122 participants who attended 29 sessions between Tuesday afternoon and Friday morning. On Tuesday afternoon tutorials were given on the topics of Vector Processing, Parallel Computing and the integration of workstations and mainframes. On Friday morning there was a review of the progress made, a business meeting and a presentation by Irving Wladawsky-Berger, a Vice President of IBM.

In between lay the bulk of the meeting. These sessions, with two parallel streams each afternoon, were of three types. The first type consisted of review talks on a variety of topics such as: Supercomputing in Europe by Ad Emmen of SARA; Supercomputing in the US by John Connolly of Kentucky State University; The 3090 architecture by Don Gibson of IBM; Academic Networking in Europe by Michael Hebgen of Heidelberg University; Computing at CERN by Chris Jones; Vector and Parallel Fortran by Randy Scarborough of IBM.

The second class of talks had more of the flavour of news. They covered topics like: The latest release of IBM's VS Fortran by Beverley Moncrieff of IBM; The status of Fortran 8X by Mike Metcalf of CERN; IBM's latest supercomputing products (systems extensions) by Leslie Toomey of IBM.

The third category comprised presentations on work accomplished on IBM 3090s. Some of these were pure applications, such as The computation of hypersonic flows around blunt bodies by Richard Schwane from the Aerodynamisches Institut of Aachen. Some were reports of more general work, such as The Vectorisation of the NAG FORTRAN Library; others described educational techniques used by member institutes, for example Training in Supercomputing for graduate students at the University of Bordeaux by P Fabrei. This third category of presentations - the ones reporting real work by real users - was the most encouraging for the future of SUP'EUR because it demonstrated a body of work being accomplished on IBM 3090s that justifies their description as supercomputers. To give a flavour of the meeting I have selected a talk from each category and will discuss each in greater detail.

Francesco Antonelli described the work he did at CERN in developing a Bit Vector Subroutine Library to manage randomly sparse arrays. Index Vector techniques are well known; this work extends the technique by storing the index in an array of bits (a Bit Vector) which is manipulated by the IBM 3090 VF's Vector Mask Register (VMR) manipulation instructions. These assembler instructions are used in Fortran-callable subroutines and allow the user to manipulate real and integer arrays which are indirectly addressed by Bit Vectors.

Several classes of routine are provided:

Conditional:
These create a Bit Vector by testing a Vector (real or integer) and setting bits dependent on some comparison with a scalar or another vector.
Scatter/Gather:
These copy elements of a vector under control of a bit vector rather than the more traditional index vector.
Bit Vector Logical and Manipulation Routines:
These perform logical comparisons of bit vectors.
Extended Linear Algebra:
These provide BLAS functionality for arrays indexed by bit vectors.

Significant speed-ups have been measured for common functions used in particle physics Monte Carlo codes but no results were given for a full implementation using these techniques.
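
The flavour of these routines can be conveyed with a rough sketch in Python/NumPy, where a boolean mask stands in for the Bit Vector. The interface and function names below are hypothetical; the real library is Fortran-callable assembler driving the 3090 VF's Vector Mask Register.

    # Boolean masks as "Bit Vectors": build one by a comparison, then
    # gather the selected elements and scatter results back.
    import numpy as np

    def make_bitvector(x, threshold):
        # Conditional routine: compare a vector with a scalar.
        return x > threshold

    def gather(x, bits):
        # Pack only the selected elements into a dense vector.
        return x[bits]

    def scatter(dense, bits, out):
        # Write a dense vector back into the selected positions.
        out[bits] = dense
        return out

    x = np.array([0.1, 2.5, -3.0, 4.2, 0.0])
    bits = make_bitvector(x, 0.5)                 # operate on elements > 0.5
    updated = scatter(gather(x, bits) * 10.0, bits, x.copy())
    print(bits, updated)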

Beverley Moncrieff, the manager of IBM's scientific languages from the Santa Teresa Laboratory, described Version 2 Release 5 of IBM's VS Fortran compiler. The most significant part of this announcement was the inclusion into VS Fortran of the Parallel Fortran product which has been available to some sites for a couple of years. She also described VAST-2, a Fortran preprocessor to be used in conjunction with IBM's VS Fortran to help tune a program for vector and parallel operation as well as scalar optimisation. VAST-2 supports a subset of the Fortran 8X array language to make vector constructs more obvious and will convert IF-loops to DO-loops to allow vectorisation. It will also, under user control, unroll, collapse or interchange DO-loops, push DO-loops into subroutines, expand subroutines in-line and change the order of statements within loops to break recurrences. In fact, it knows most of the tricks that experienced programmers use to increase the vector content of their code. The difference is that VAST-2, not being human, does not make typing mistakes. It allows less experienced programmers to speed up their code without worrying about the correctness of the changes they make. It also knows about IBM's Parallel Language Extensions and uses them when parallel execution is to be used. VAST-2 is produced by the Pacific Sierra Corporation, who also produce FORGE, a similar product that optimises Fortran for Cray Fortran compilers.
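
VAST-2 itself transforms Fortran source, but the kind of rewrite it automates can be sketched loosely in Python/NumPy: an element-by-element loop containing a conditional is replaced by a single whole-array expression. This is only an analogue of the idea, not an example of VAST-2 output.

    # Scalar loop versus its "vectorised" equivalent (a loose analogue of
    # what a vectorising preprocessor does to Fortran loops).
    import numpy as np

    a = np.random.rand(1000)
    b = np.random.rand(1000)

    # Element-at-a-time form, with a conditional inside the loop.
    c_loop = np.empty_like(a)
    for i in range(a.size):
        c_loop[i] = a[i] + b[i] if a[i] > 0.5 else a[i] - b[i]

    # Whole-array form: one masked expression over the arrays.
    c_vec = np.where(a > 0.5, a + b, a - b)
    assert np.allclose(c_loop, c_vec)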

John Connolly's talk described the state of supercomputing in the United States. In the traditional manner he described its past, present and future. Like most speakers at this meeting he gave his definition of a supercomputer - a general purpose machine which is comparable to the best machine available at the present time. This definition is time-dependent and is not based on peak performance but on throughput. After a brief survey of the history of supercomputing in the US in the last 100 years, John then described the recent history of academic supercomputing based on the National Science Foundation's five state-of-the-art National Supercomputer Centres from their inception in 1985 until 1989. From 1986 when all five Centres had machines in place, their capacity (measured in Cray-1 equivalents) has increased from 30 to 94. The NSF programme currently provides access to 3,000 projects in 120 Universities with approximately 10,000 users. Approximately 10% of scientific researchers now use supercomputers in their research. This compares to 50% who use computers of some kind. The supercomputer users are predominantly physicists, chemists and engineers, with a small but growing use by biologists, medics and economists but almost no computer scientists. The role of the Supercomputer Centres is: to provide leadership in the development of new techniques; to provide training for new and inexperienced users; to act as a test-bed for new technology; to be a good customer for more advanced supercomputers, thus motivating the supercomputing industry; to improve the human-machine interface.

The wide use of the five NSF sites would not be possible without good networking. NSFnet is a high speed (1.5 Megabaud) backbone funded by NSF which connects a number of regional networks partly supported by NSF. The future of academic supercomputing in the US is dominated by the recently announced Federal High Performance Computing Programme. This programme, funded by a number of Government Agencies (NSF, DOE, NASA, NIH and DARPA), has a budget of $2 billion over five years. The four parts of this programme are: High Performance Computing Systems - the development of teraflops capability with corresponding improvements in memory, mass storage and input/output systems; advanced software technology and algorithms - the development of needed computational techniques; the national research and education network - upgrading of NSFnet to a Gigabaud backbone and a T3 service to 1,000 research institutions; and basic research on human resources - the expansion of computing science education to provide more suitably educated workers.

The combination of these four should enable US academic researchers to meet the grand challenges; these are fundamental problems in science or engineering, with potentially broad economic, political and/or scientific impact, that could be advanced by applying high performance computing resources. The examples that John gave (in priority order) were: prediction of weather, climate and global change; semiconductor design; superconductivity; structural biology; drug design; human genome mapping; quantum chromodynamics; astronomy; transportation (flow dynamics of aircraft etc); vehicle signature (low detection military vehicles); turbulence (aerospace vehicles); vehicle dynamics; nuclear fusion; combustion systems; oil and gas recovery; ocean sciences; speech; vision; and undersea surveillance. It was interesting to speculate on the interests of the various funding bodies in this list.

This talk included some interesting comments on present and future supercomputing. For example, the average vectorisation level of codes run at the NSF centres is about 50% and almost none of them use parallelism. The general rule of thumb given was that on an n-processor computer, only those users who have access to more than 1/n of the machine will see an advantage in parallelisation. At current levels of n = 4, very few users will take the trouble, but as n is doubling every one or two years this situation will change rapidly.
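
The 1/n rule of thumb can be checked with a toy fair-share model (the scheduling assumptions here are mine, not the speaker's): a user holding share s of an n-processor machine gets s x n processors' worth of cycles, but a serial job can use at most one processor.

    # Turnaround time under a simple fair-share assumption.
    def turnaround(work, n, share, parallel):
        capacity = share * n                  # processors' worth of cycles
        if parallel:
            return work / capacity            # work spread across processors
        return work / min(1.0, capacity)      # serial job limited to one CPU

    n, work = 4, 100.0
    for share in (0.1, 0.25, 0.5):
        print(share, turnaround(work, n, share, False),
              turnaround(work, n, share, True))
    # Only a share above 1/n (here 0.25) makes the parallel run finish sooner.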

Taken overall, the SUP'EUR meeting was a successful one with a good balance of news, reviews and reports from users. As usual the most important part of these meetings was the opportunity to mix and share experiences with others who face the same day to day problems. This is especially useful in a niche group like SUP'EUR.

The next SUP'EUR meeting is planned for autumn 1990 in Aachen, followed by a joint meeting of SUP'EUR and SUPER! in Rome in 1991.

John Gordon, Applications & User Support Group

Time for Reflection

This article concludes our series of reminiscences of Atlas 1.

Much has been written about the Ferranti Atlas 1 machine, and some events have been recalled recently in this journal, but little has been said about the impact of Atlas on the computing scene in the 1960's. It now seems appropriate to comment on a number of features of both the machine and of its setting in the Atlas Computer Laboratory.

The Director, Jack Howlett, has given an account of the events leading up to the purchase of the machine and to the setting up of the Laboratory. It was fortunate that the Harwell site was chosen for this installation, because the expertise to handle a large machine, occupying much more space than anyone had previously experienced and demanding high levels of engineering, was on hand from both AERE and NIRNS/RAL. Air conditioning to handle the 150KW heat output was needed, but became a matter of contention with the Treasury and the higher levels of administration who tended to regard such extravagances as something the Americans rather liked. They had to be convinced that it was really necessary. In a similar vein, the case had to be made for some programmers and some scientific back-up, since it was thought that a few people pushing cards and tape into the computer, assisted by others attending to the paper output, were all that would be required.

However, strong support for a more appropriate staff structure ensured that the Laboratory had the resources to carry through a unique project for that time.

Whilst other organisations had to fit their computers into existing buildings, the Atlas Laboratory could start from a green field site, and could plan the operation of a computer service in an efficient way; the layout of the building, once it had been decided to separate the engineering area from that of operations, was then arranged to minimise the transport of tape, cards and paper, and, as importantly, to ensure that the loading and unloading of the 1" magnetic tapes were done efficiently. The layout was studied by other centres that were subsequently established in the Universities, and it had a strong influence on their own designs.

It was originally intended that the AERE should participate in the use of Atlas with others on an equal basis; 30% each to AERE, NIRNS and to the Universities, with other internal and external users, such as government departments, taking the remaining 10%. In the event, AERE did not take up its 30% share and this was made available to the Universities. The initial response from many research workers was of some disbelief that any time at all would be available to them; "it would all go to Atomic Energy and High Energy Physics" was a common assessment. However, it was made plain, through visits with Bob Churchhouse to practically all the Universities in the country, that facilities would be made available if the research demanded large scale resources. So started a quite elaborate postal batch service supported by a British Rail parcel service.

In the early days, in 1964/65, many problems emerged as the computer was installed and commissioned. Interruptions to the service were frequent, due either to hardware faults or to software bugs, but almost none to failure of the air conditioning plant or the basic electrical engineering supplies. It is perhaps not appreciated enough how the Engineering Division, under Percy Bowles, played a key role in maintaining a high level of serviceability of the plant and insisting on the highest levels of electrical and mechanical engineering. This was a most fortunate feature, because Atlas itself needed all the attention that could be given.

Many different views of the success or otherwise of Atlas have been expressed, nearly all based on quite inadequate facts about the way the machine performed and how the problems were overcome.

First of all, it should be stated that the advanced features of the design were shown to be well-conceived and well-implemented elements which did not need any fundamental reconsideration, and these features have, of course, now been absorbed into general practice.

Broadly speaking, the early difficulties were all associated with the 48K core memory. Without a reliable main memory, all the other features - speed, the operating system, etc. - could not display the undoubted potential of the machine. This was not a production machine, and it lacked the application of what has now become known as production engineering. Once the deficiencies of the main memory had been identified and corrected, a high performance level was reached and maintained until the shutdown in 1973. The hardware was dependent on germanium transistors which were temperature sensitive, but this was handled quite well by some effective cooling provisions.

The response of visitors from the US at this time, on seeing the facilities and power available, was one of surprise that the machine was not on a production line. Atlas was in a class all on its own and it could have had a bright future. So why did the interest flag? It is probable that this was due to a weak marketing policy, associated with US embargoes on foreign inflows, but also that the identification of its future role in government, industry and commerce was not pursued vigorously enough.

One of the features of the operating system was that of providing an instruction counter which allocated the flow of instructions to the source of the work. This valuable view of what was being done allowed a good deal of information about the machine's performance to be compiled and studied. The role of these statistics in bringing the machine to a high level of performance has not been much discussed, but on reflection, it is evident that a good deal of the understanding of the performance was due to this ability to monitor. In 1967, neither IBM nor any other manufacturer could compete with the "performance statistics" produced on a regular basis from Atlas. It is possible that the availability of these "statistics" pushed other manufacturers into producing comparable data at a later time.

The supervisor, or operating system, was a subject of great interest. Simple forms of operating systems were already in being, but this was an ambitious project which owed much to the personal dedication of several individuals, including the incomparable David Howarth. No one knew how to control a software project of this complexity and many mistaken judgements about time scales and additional requirements were made. Because in principle all manner of things were possible, there was a tendency for the managers of ICT (by this time) to oversell the facilities to be made available. Led by Bob Churchhouse, the Atlas Laboratory was able to bring a sense of priority to this development of software; a list of 43 proposed enhancements, which all demanded David's attention, was ruthlessly cut to 10 (later modified to 12 to accommodate London Atlas) which were really required, and David Howarth was able to work with a sense of achieving set goals. (The other 31 disappeared and were never heard of again - apparently they were not of such great interest.)

The development of software on a service machine raises many problems not only for the operations staff, but for the users. The undesirability of this guinea pig approach was identified at the time, but even now after 25 years there is still a tendency to make users accommodate and adjust to systems which are being set up or modified. The awareness of the user was one of the features of a batch service; both operators and reception staff could often feel part of a project even though they could not contribute to the detailed scientific or computational work. Now with remote access via networks the service staff rarely see the user.

Users from many disciplines visited the Laboratory to discuss their projects. Some were over enthusiastic about their ability to use vast amounts of time, whilst others thought that just a small amount would do. An analysis of the time actually used compared with that estimated to be required, for all Atlas Projects, showed that this ratio (R) had a U-shaped distribution; the mean, at R = 1, was the least likely! In a research environment this is perhaps not surprising: it is still not clear just how successful the requirement for researchers to estimate expected resources has been, and whether the pricing of the resources has helped to clarify the choice between the use of central services and the purchase of local workstations.

The initiatives taken by Jack Howlett in supporting a number of application areas can also be seen to have been correctly assessed. Not all Laboratory Directors were able, and were supported in being able, to direct resources into important new areas of interest. Crystallography, computational chemistry, database work for NERC - these subjects and others benefited greatly from this freedom to judge the options of where future activities would be generated. It has taken much longer than expected for the power of computation to be understood by those on the periphery of scientific research. Whilst in the late 60's the need for large data processing activities was understood for High Energy Physics and some Space work, the idea of a "computational model" was not immediately obvious. Maybe it is one aspect of computing in which professionals failed to give a sufficiently strong lead. All the areas of work for the Cray now installed at RAL formed the basis of a case for a large vector machine in 1972, but somehow the administrators of science and engineering support remained unconvinced of the direction in which computing would inevitably go. The skills required to handle parallel processing could have been nurtured at this time and would have been of great significance now that we have not only vector machines but transputers. Unfortunately we are now faced with a long training curve to exploit these new facilities. Of course, some workers did attempt to use vector oriented machines. Professor F Walkden was able to use the ILLIAC IV at San Jose California from his office in Salford in 1974, via the ARPANET link in UCL.

Looking back at the various additions to the Atlas and to the ancillary services in the Laboratory, it can be seen that practically all were important and relevant to today's computing scene. The Sigma II, the SD4020/FR80 microfilm recorders, the large disk, the early Moore-Reed acoustic couplers and terminal for home use, all have provided the bases of experience for what we now see as required for present day use.

Jim Hailstone, In charge of Operations, User Support and Resource Management at Atlas from 1961-73