
Engineering Computing Newsletter: Issue 30, January 1991

Editorial

New Year is traditionally a time to look forward with optimism to the year ahead. Unfortunately 1991 starts with the news of the cancellation of EASE 91 at Birmingham in March and the curtailment of some of the other EASE Education and Awareness programme events.

The good news is that this, your Newsletter, will continue to offer you, the Engineering Board's IT Community, a vehicle for news, comment and information as it always has done in the past.

Sheila Davidson

EASE and the SERC Financial Crisis

SERC is facing some difficult financial times over the next two years. In the current financial year, a £7m overspend is forecast, and in 1991/92 the situation will be worse. In consequence, there has been a ban on recruitment within SERC, and measures have been introduced to reduce expenditure.

As a result of these actions, the seminar on Engineering Design and Modelling scheduled for 27 February at the University of Sheffield has been cancelled. We have been forced to cancel EASE 91 at Birmingham and had to temporarily withdraw sponsorship of attendance at EASE events, including AIAI ones. There will also be a delay in introducing the EASE Enquiry Service. EMR contracts are subject to special review and early termination will be an option in some cases. We regret these actions but they are unavoidable given the constraints under which we are working.

Informatics Department was in the process of recruiting a number of staff when the ban was imposed, so we are likely to be seriously short of effort for the next 16 months. This must have an impact on a number of EASE activities such as the Education and Awareness Programme, the Newsletter frequency, the number of assessments that can be undertaken and so on. A freeze has been placed on the EASE Software Environment work pending further discussions. The seminar in January has therefore been cancelled.

When we know the full extent of the cuts agreed by Council, we will give details in the next issue of the Newsletter. We hope you have enjoyed the thirty issues that have appeared over the last two and a half years; we will do our best to keep the publication going.

F R A Hopgood, Informatics Department

Accuracy in Numerical Modelling in CFD

The Computational Fluid Dynamics (CFD) Community Club held a workshop on Accuracy in Numerical Modelling in CFD at The Cosener's House, Abingdon on 15-16 November in collaboration with the Institute for CFD. It was chaired by Prof K W Morton (Oxford) and was attended by over 80 people of whom more than 20% were from industry. The purpose of the meeting was to discuss accuracy in the various stages of numerical modelling in CFD: the choice of physical models, numerical formulation, problem solution and validation, and to identify future Club activities to remedy existing deficiencies. The meeting consisted of presentations on the above topics and parallel discussion sessions on numerical formulation, problem solution and validation.

Prof J J McGuirk (Loughborough) reviewed turbulence models of current interest in practical flow problems, in particular those aspects which affect numerical accuracy. He surveyed second order upwind methods which are currently used and he discussed and illustrated some of their shortcomings. He finished by showing some remedies which are currently under investigation.

Dr M K McVean (Met. Office) spoke about monotonicity preserving advection algorithms in an atmospheric boundary layer model. He compared Roe's scheme with Van Leer's, in one dimension. He went on to extend the work to two dimensions and showed how Van Leer's scheme accurately handles the advection of fields containing sharp gradients.
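The flavour of a monotonicity-preserving scheme of the kind Van Leer proposed can be sketched in a few lines. The following one-dimensional advection example is our own illustration (not Dr McVean's code) on a hypothetical 50-cell periodic grid: a flux-limited second-order scheme advects a sharp step without creating new extrema.

```python
# Illustrative sketch: 1-D linear advection of a step profile with a
# Van Leer-limited second-order upwind scheme, periodic boundaries.
# Grid size, CFL number and step count are arbitrary choices.

def van_leer_limiter(r):
    # Van Leer's smooth flux limiter: phi(r) = (r + |r|) / (1 + |r|)
    return (r + abs(r)) / (1.0 + abs(r))

def advect_step(u, c):
    """One time step of limited upwind advection, CFL number c in (0, 1]."""
    n = len(u)
    flux = [0.0] * n  # flux[i] approximates u at the i+1/2 cell interface
    for i in range(n):
        du_up = u[i] - u[i - 1]          # upwind slope
        du_dn = u[(i + 1) % n] - u[i]    # downwind slope
        r = du_up / du_dn if du_dn != 0.0 else 0.0
        phi = van_leer_limiter(r)
        # second-order correction, limited to preserve monotonicity
        flux[i] = u[i] + 0.5 * phi * (1.0 - c) * du_dn
    return [u[i] - c * (flux[i] - flux[i - 1]) for i in range(n)]

# advect a step profile; a monotone scheme creates no new extrema
u = [1.0] * 10 + [0.0] * 40
for _ in range(20):
    u = advect_step(u, 0.5)
print(max(u) <= 1.0 + 1e-9 and min(u) >= -1e-9)  # True: no overshoot
```

Because the limiter falls back to first-order upwinding near extrema, the solution stays within its initial bounds, which is the property that makes such schemes attractive for fields with sharp gradients.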

Mr M Rudgyard (Oxford) presented a paper on numerical and artificial viscosity in discretising the flux operator. He surveyed the different methods indicating their stability and accuracy limitations. He showed why artificial viscosity is introduced to capture shocks. He gave examples of schemes unsuitable for unsteady problems which are suitable for steady ones.

Dr A Craig (Durham) outlined a philosophy for adaption and discussed different methods of estimating the error in the solution to a steady state problem. He went on to describe a variety of adaptive procedures, and to give a number of examples from elasticity applications.

Dr E Suli (Oxford) showed the stability constraints for a range of efficient explicit and semi-implicit time discretisation methods. He went on to consider Taylor-Galerkin, Characteristic Galerkin and Lagrange-Galerkin methods for convection-diffusion problems. He finished by noting that the solution of problems with multiple time scales was still an open area.

Dr N Qin (Glasgow) presented a study of the accuracy in heat transfer prediction for hypersonic steady flows. He identified the need for the high resolution of both strong shocks and shear layers in hypersonic viscous flow simulation. He made a comparative study of several methods and compared his answers against experimental results.

Ms C P Skeels (Sheffield) discussed the one-dimensional river model and clearly identified all possible sources of inaccuracy. The stability limits of the methods used to discretise the problem and their respective accuracy were discussed and illustrated by examples.

Dr J K Reid (RAL) presented a stable variant of the method of conjugate gradients for non-symmetric linear equations, due to Van der Vorst. This method enjoys the efficiency of the conjugate gradient squared method without its associated stability problems. He finished by comparing some numerical results of the two methods.
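Van der Vorst's stable variant is now widely known as Bi-CGSTAB. A minimal plain-Python sketch of the iteration, applied to a small hypothetical non-symmetric system (illustrative only, not Dr Reid's implementation), is:

```python
# Bi-CGSTAB sketch: a stabilised conjugate-gradient-type iteration for
# non-symmetric A.  The 3x3 test system below is an arbitrary example.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matvec(A, x):
    return [dot(row, x) for row in A]

def bicgstab(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for non-symmetric A by the Bi-CGSTAB iteration."""
    n = len(b)
    x = [0.0] * n
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
    r_hat = r[:]                      # fixed "shadow" residual
    rho = alpha = omega = 1.0
    v = [0.0] * n
    p = [0.0] * n
    for _ in range(max_iter):
        rho_new = dot(r_hat, r)
        beta = (rho_new / rho) * (alpha / omega)
        rho = rho_new
        p = [ri + beta * (pi - omega * vi) for ri, pi, vi in zip(r, p, v)]
        v = matvec(A, p)
        alpha = rho / dot(r_hat, v)
        s = [ri - alpha * vi for ri, vi in zip(r, v)]
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        if dot(s, s) ** 0.5 < tol:    # early exit: s already negligible
            break
        t = matvec(A, s)
        omega = dot(t, s) / dot(t, t) # stabilising minimal-residual step
        x = [xi + omega * si for xi, si in zip(x, s)]
        r = [si - omega * ti for si, ti in zip(s, t)]
    return x

# a small non-symmetric test system
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 2.0, 5.0]]
b = [1.0, 2.0, 3.0]
x = bicgstab(A, b)
residual = [bi - axi for bi, axi in zip(b, matvec(A, x))]
print(dot(residual, residual) ** 0.5 < 1e-8)  # True: converged
```

The local minimal-residual step (the omega update) is what smooths the erratic convergence behaviour of conjugate gradient squared.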

Mr M P Carr (Aircraft Res Assoc) discussed validation in CFD, which he defined as the detailed surface and flow field comparisons with experimental data to verify the code's ability to accurately model the critical physics of the flow. He identified which parts of the computational model required experimental validation and which could be validated by other means. He finished by discussing the difference between validation and calibration.

The meeting divided into three parallel sessions on numerical formulation, problem solution and validation. After lunch the chairmen of the parallel sessions reported to the meeting about what had been discussed in their respective sessions.

The subsequent discussion focused on the need for a collection of test problems and their solutions, which could form a valuable part of the validation process. It was noted that the Institution of Mechanical Engineers (I Mech E) was currently specifying test problems, some of which covered CFD. Most of those present recommended that the Club devote effort to the definition and maintenance of a model problem set. About half the delegates indicated their willingness to contribute to this work. It was also suggested that the Club liaise with the I Mech E and other relevant bodies in this work.

The question of verification and quality assurance of software was discussed. It was agreed that vendors of CFD software and language compilers put too much emphasis on performance and not enough on code safety and correctness. It was agreed that it would be useful to hold a seminar on good software engineering practice, with an emphasis on its application to the development of CFD codes.

The Workshop made the following recommendations to the Steering Group of the EASE Community Club in CFD to develop its programme of activities.

The chairman then brought a most informative and enjoyable meeting to a close, and announced that the next seminar would be held on 7 February 1991 at RAL on Mesh Generation Applied to CFD.

A more detailed account of the Workshop and a copy of the presentations can be obtained by contacting me.

Conor Fitzsimons, RAL

EASE Education and Awareness Events

Novel Architecture Computing

Related SERC and Other Initiatives

A growing number of agencies are funding, or proposing to fund, parallel and novel architecture computing initiatives. This article is an attempt to draw together some of the relevant background information for the Novel Architecture Computing Committee (NACC), which oversees central Science and Engineering Research Council (SERC) foundation support to a number of centres. The role of this support is to provide the infrastructure to encourage the widespread application of parallel and novel architecture computers to problems in science and engineering of interest to the SERC. The main emphasis in this review is therefore on programmes which support numerically intensive research applications. Most of the projects have lifetimes of about four years and there is considerable movement in the area from year to year. What we present here is a snapshot of novel architecture computing initiatives as of the Summer of 1990. It should be stressed that some of these programmes may be restructured with the transfer of supercomputing activities from the Computer Board to the Advisory Board of the Research Councils. Contact addresses are provided for those interested in pursuing the details, or future progress, of any of the initiatives.

The SERC has Council-wide and cross-Board activities listed as items 1 and 2 below. The SERC is involved with other agencies in jointly funding the parallel/ novel architecture computing projects listed as items 3 and 4 below. The Boards of SERC have their own activities listed as items 5 to 8 below. The Computer Board's initiatives and some American activity are also discussed.

The main sections in the review are:

  1. Novel Architecture Computing Initiative
  2. Grand Challenge Machine at Edinburgh
  3. Central Computing Unit, RAL
  4. SERC/DTI Joint Framework for Information Technology (JFIT) Programme:
    • Information Engineering Advanced Technology Programme
    • Programme on the Application of Parallel Systems
    • The ESPRIT Programme
  5. Engineering Board/DTI
    • Engineering Applications of Transputers
  6. Science Board's Computational Science Initiative
  7. Science Board's Advanced Research Computing
  8. Engineering Board's CFD Initiative
  9. Computer Board's Initiatives
R J Blake and F M Guest, SERC Daresbury

Letters to the Editor

Superworkstation Assessment

I would like to comment on Julian Gallop's article on Superworkstation Assessment in Issue 27. Recently I have been involved with benchmarking three of the machines mentioned in the article, namely the Apollo DN10000, Silicon Graphics, and Stardent 3000.

My experience with the vector performance of the DN10000 and the Silicon Graphics, based on the Perfect Club benchmark suite and SPEC, differs significantly from that stated by Julian. The Perfect Benchmark consists of 13 programs covering fluid flow, chemical, physical, engineering design and signal processing applications. The SPEC benchmark consists of 10 programs, but unlike Perfect these are not all scientific and/or engineering applications. Some of the programs are known to be vectorizable and I can confirm that the DN10000 is certainly no worse, and possibly better, than the Silicon Graphics. I can also add (as this is not available in Julian's article) that as far as vector performance is concerned, the Stardent 3000 is definitely my first choice among the three (this assessment does not represent a ranking!!).

Francis Yeung, SERC Daresbury Laboratory

As many people are aware, much variation is possible in the relative performance of different systems when different performance tests are run. This is true of workstations, and it is even more true of superworkstations, which possess a vector capability and multiple processors. Reports from others who have performed their own comparisons help to establish the extent of this variation, so Francis Yeung's experiences are welcome.

Although the details are in the full report, it is worth mentioning our experiences with the HP DN10000 and the SG 4D/240. We were supplied with a 3-processor DN10000, while the 4D/240 contained 4 processors. At RAL, tests from several suites were run. In running the tests, the policy was to avoid making changes to the code, but to use as many of the optimization levels as possible and, in the case of the SG 4D/240, an additional preprocessor which took further advantage of the architecture. The choice which gave the best performance was used in the published results.

Often a test would have a portion that was vectorizable and a portion that was not. An example was an in-house implementation of the Navier-Stokes algorithm. On the matrix assembly phase, the HP DN10000 was faster than the SG 4D/240; on the solution phase, where we expected vectorization to be possible, the SG 4D/240 was faster than the HP DN10000.

A similar situation occurred in the suite of matrix performance tests. The HP DN10000 was faster in the diagonal and sparse matrix tests; the SG 4D/240 was faster in the full matrix tests. The SG 4D/240 also proved the faster of the two in the MHD performance tests, which are highly vectorizable. The one exception to this pattern in the tests that we ran was a set of tests originating from the SPEC benchmark! We felt justified in our comments about our experience of the systems' relative abilities to perform well on vectorizable code.

In our experience the optimization levels produce significant effects on the performance, often a greater effect than changing the machine. The policy that we used may differ from that used by Francis Yeung.

There are two additional provisos we should mention (which do not as it happens affect the comparison with Francis Yeung's results, but should be noted by others wishing to compare the systems). Our tests were run with one user at a time: we were treating these systems as workstations, not multi-access systems. Also, when we ran our tests, the Fortran compiler supplied with the HP DN10000 was unable to exploit more than one processor at a time. To quote from our report, a future software release that will allow several processors to be used by a single Fortran program could alter this scenario. In addition, multiprocessing the tests would also alter relative performance of the systems.

Inevitably an assessment such as this hits systems at different phases in their development, and in our case the Stardent ST3000 was too new at the time to be fully available for performance testing. I welcome the further information from Francis Yeung on this.

I emphasise that what Francis Yeung and I have discussed here is only one aspect of several. As is evident from the full report, there is no single Best Buy.

Julian Gallop, RAL

FINEL

I wish to clear up the mystery about FINEL. There is a single finite element package called FINEL that is used (in a number of different versions) by a number of institutions, though it may not account for all the sightings recorded by Deborah Pollard (see Issue 25, August 1990).

FINEL was initially conceived and written by Dennis Hitchens of Imperial College. It was developed further and used commercially at Babcock Energy Limited under the leadership of Chris Chatterton who formerly worked there. In many ways FINEL was ahead of its time, being genuinely portable, having free format input, bandwidth minimisation as part of the normal solution cycle and simple mesh generation.

FINEL was much used during the development of the NAFEMS Benchmark suite. At Sheffield it is still used for teaching and for the solution of linear and temperature related problems, with FEMGEN/FEMVIEW for modelling and display. The lack of commercial support and development is of course a problem. However it is precisely the existence of good quality free software like FINEL that makes one question the charges for commercial packages.

Chris Cartledge, University of Sheffield

Application of Novel Architecture Computers to Problems in CFD

The Theory and Computational Science Division at Daresbury Laboratory has been heavily involved in a number of Engineering Board programmes which aim to encourage the application of novel architecture and parallel computers to problems in engineering with particular emphasis on Computational Fluid Dynamics (CFD). The Advanced Research Computing Group at Daresbury undertook to supervise the acquisition of £1.5m of parallel computing equipment to be placed in academic departments for use in numerical modelling in engineering. Proposals were invited from some 14 vendors covering distributed and shared memory and single and multiple instruction multi-processors. The main conclusion that was drawn from the exercise was that it was too soon to pick a single best parallel machine for engineering (or any other) applications - the currently available architectures and corresponding software environments were too various, and the field was moving too rapidly. Instead, it was recommended that it would be more useful to consider acquiring a set of different machines so that the initiative, viewed as a whole, represented a balanced programme across the different hardware platforms. After consultation with the community, the purchase of the following five systems was recommended:

These recommendations were accepted and Engineering Board subsequently issued an invitation to heads of departments of engineering and related disciplines for applications from appropriate groups for these systems. A review panel recommended the following allocations to groups actively involved in applying parallel computing methods in a range of different engineering areas:

To support and coordinate the Parallel Hardware Initiative, and to encourage the development of appropriate software and algorithms, the Advanced Research Computing Group at Daresbury were asked by Engineering Board Secretariat at Swindon to prepare a case for a Collaborative Computational Project on the Application of Novel Architecture Computers to CFD (CCP12). In the original proposal it was suggested that the CCP should focus on three key areas:

This project would involve the development of global grid partitioning and synthesis techniques, and static and dynamic load balancing. The problems of portability across shared and distributed memory machines would also need to be addressed, as would the development of real-time integrated CFD environments in which the adaptive grid generation, solution and graphics are tightly coupled and run in parallel. Engineering Board Committees considered the proposal and allocated pump-priming funds of 1 my/annum, with a sum for travel and subsistence and for organising meetings and workshops. A bid for 3 my/annum to support CCP12 has been included in Engineering Board's Forward Look.

A Steering Group, with representatives from some 22 groups drawn in the main from the proposals rated highly in the parallel hardware exercise, was invited to attend the first meeting of CCP12, held in September 1990. The meeting concluded that in its first year the CCP should focus on a major awareness exercise surveying parallel hardware, languages, operating systems, compilers, program development tools, programming environments and applications libraries within the context of engineering computations. In terms of research, the CCP was to proceed by collecting and reviewing commercially non-contentious codes which are representative of key applications and begin to implement the codes on a range of different parallel computer architectures. The primary aim is to explore the mapping between different numerical schemes and different architectures with realistic engineering problems and subsequently develop algorithms to realise the potential of powerful computation. With the resolution of property rights issues in the future, the scientific focus will shift towards developing and implementing flagship engineering codes on parallel and novel architecture systems. The CCP plans to hold its first workshop on parallel computing in May 1991.

R J Blake, SERC, Daresbury Laboratory

Poplog Version 14 now Links to the X-Window System

Although not part of EASE, Poplog provides many of the features listed in Issue 28 of this newsletter, except that its primary aim is to support interactive, incrementally compiled, highly expressive languages for rapid prototyping and exploratory development and testing of algorithms. It includes compilers for Common Lisp, Prolog, Pop-11 (similar to Lisp, but with a readable Pascal-like syntax), Flavours (an object-oriented extension to Pop-11) and Standard ML V2.0 (a polymorphic typed functional language). The compiler tools allow users to add new languages which then automatically run on all the machines supporting Poplog, ie UNIX workstations and servers or VAX with Ultrix or VMS. Compared with many AI environments it is very compact, eg Common Lisp takes under 2 Mbytes, the other languages much less.

Poplog also allows programs in languages such as Fortran and C, eg the NAG library, to be dynamically linked and unlinked. On this rests the major recent development: an interface to the X-Window System (X11R4), supporting window-based interaction either on a stand-alone workstation or on a remote X terminal linked to a powerful central machine running Poplog.

This was meant to be ready early in 1990, but X has several different layers, and at higher levels it allows for a variety of standards, eg Motif, Open Look. Instead of requiring users to develop all interface software from the bottom up, we wanted to make it easy to link in already available packages dynamically, eg the Athena, Motif and Open Look widget sets. Providing such flexibility proved very difficult, and delayed V14 by nearly a whole year. It required several deep changes in Poplog, including making it easier to share data between Poplog and external programs, support for 'callbacks' from external programs to Poplog programs, and extensions to the event-handling facilities in Poplog. The internal representation of Poplog data was changed so that, for example, a Poplog array of strings looked like an array of strings (though without null termination) to a C program. It was also necessary to allow users to create temporary Poplog data-structures that were guaranteed not to be re-located by a garbage collection, so that pointers to them could safely be handed to external procedures. A collection of 'ready made' widgets was also developed, for 2-D graphics, text windows, menus and the like, along with tutorial documentation and illustrative library programs and utilities.
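The callback mechanism described above (external C code invoking procedures written in the host language) has a close modern analogue that can be sketched briefly. The example below uses Python's ctypes purely as an illustration of the same idea, with C's qsort calling a comparison routine written in the host language; it is not Poplog code.

```python
# Illustration of a host-language callback invoked by external C code:
# the C library's qsort sorts an array using a comparison function
# defined here.  (Uses the C runtime already loaded in the process,
# so this sketch assumes a Unix-like system.)

import ctypes

libc = ctypes.CDLL(None)
CMP = ctypes.CFUNCTYPE(ctypes.c_int,
                       ctypes.POINTER(ctypes.c_int),
                       ctypes.POINTER(ctypes.c_int))

def compare(a, b):
    # host-language code invoked from C: a 'callback'
    return a[0] - b[0]

data = (ctypes.c_int * 5)(3, 1, 4, 1, 5)
libc.qsort(data, 5, ctypes.sizeof(ctypes.c_int), CMP(compare))
print(list(data))  # [1, 1, 3, 4, 5]
```

As in Poplog's case, the delicate points are marshalling data into a layout the external code expects and keeping the callback object alive (and unmoved) while the external routine may still call it.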

After much effort, and apologies to waiting users, Unix and Ultrix versions are now ready, and VMS will follow shortly. Applications developed on one machine should then be able to run on a variety of different host/server combinations supporting Poplog and X. An overview by Ian Rogers is in the proceedings of the November 1990 European X-Window Conference. More sophisticated POPLOG+X based tools for developing interfaces will emerge from the IED-funded collaborative UIDE project led by BMT Ltd.

Once part of the Alvey infrastructure, Poplog is now supplied direct to UK academics by Sussex University (which recently drastically cut prices!). However, it remains primarily a commercial product and commercial support is available. Between 1983 and 1989 it was distributed by SD-Scicon, but in May 1989 a management buy-out involving six of the people at SD concerned with Poplog led to the formation of a new independent company, Integral Solutions Ltd (ISL).

UK educational prices for Poplog start from £600 + VAT.

Aaron Sloman, University of Sussex

Mesh Generation Applied to CFD

The EASE Community Club in Computational Fluid Dynamics (CFD) will hold a Seminar at RAL on 7 February 1991. The meeting will be chaired by Dr N P Weatherill (Swansea). The purpose of this seminar is:

The first session, titled Established Methods, will contain presentations on widely used methods of mesh generation such as Advancing Front, Delaunay, Multiblock and Transfinite Interpolation.
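For readers unfamiliar with the Delaunay method listed above, the criterion it enforces (no mesh point may fall inside the circumcircle of any triangle) rests on a simple geometric predicate. The sketch below is illustrative only and is not drawn from any of the seminar presentations.

```python
# The in-circle test at the heart of Delaunay triangulation: a sign
# test on a 3x3 determinant of coordinates translated so that the
# query point p sits at the origin.

def in_circumcircle(a, b, c, p):
    """True if point p lies strictly inside the circumcircle of the
    counter-clockwise triangle (a, b, c)."""
    ax, ay = a[0] - p[0], a[1] - p[1]
    bx, by = b[0] - p[0], b[1] - p[1]
    cx, cy = c[0] - p[0], c[1] - p[1]
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
         - (bx * bx + by * by) * (ax * cy - cx * ay)
         + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det > 0.0  # positive for CCW triangles means p is inside

# unit right triangle; its circumcircle is centred at (0.5, 0.5)
tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
print(in_circumcircle(*tri, (0.5, 0.5)))   # True: inside
print(in_circumcircle(*tri, (2.0, 2.0)))   # False: outside
```

A Delaunay mesh generator repeatedly applies this test, flipping or retriangulating wherever it fails, which is what gives the method its well-shaped triangles.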

The second session, titled New Approaches, will contain presentations on novel approaches to mesh generation in CFD such as multigrid for unstructured meshes, unstructured quadrilateral meshes and feature-aligned meshes.

The final session will contain three presentations, which will introduce a discussion on Mesh Quality in CFD. The first presentation will focus on ways of measuring geometric features in the mesh, the second will discuss Error Estimation and how this impacts on mesh refinement in CFD. The third will present work done on this topic outside CFD. These presentations will introduce a discussion on the topic of Mesh Quality.

Conor Fitzsimons, RAL

NISS and CHEST

Readers who are new to these services may find this introductory information useful. The National Information on Software and Services (NISS), and Combined Higher Education Software Team (CHEST) projects were created in 1988 with initial funding from the Computer Board for Universities and Research Councils. Both are split-site projects, with NISS working from the Universities of Bath and Southampton, and CHEST from Bath University and Leicester Polytechnic.

The primary aim of CHEST is to arrange and administer deals with suppliers of good quality commercially used software and datasets, on terms which reflect the needs of the academic community, and are also of benefit to the supplier.

NISS, on the other hand, provides services which help disseminate information throughout the Higher Education Community. NISS utilises the JANET computer network to enable users to communicate and share information about a range of academic and computer-related topics.

Typing call niss at the PAD prompt on a computer or terminal connected to JANET should access the NISS Gateway (at JANET address 000062200000). The Gateway menu lists other NISS online services currently available, and Section D3 of one of these services - the NISS Bulletin Board - holds information and news about recent CHEST deals.

Annette Lafford, University of Bath

Not in the Know - Acronyms

I must get this off my CHEST. Do you know, my MUM does not give two PHIGS about my EMU-TEK! She KBS my UNIX and gets very UNIRAS whenever I try to AGOCG or even EASE the X-Window. A simple OOP produces an enormous STEP and I often finish up with a good CFTAG. She claims I am a CAD, but it's just my ESPRIT. I simply cannot REDUCE the SPARC, the NAG I still feel for IRENA. Ah well... and so to business.

The Committee for Rapid Acronym Proliferation has asked me to present their version of the next issue of ECN:

ISO %*(NERC WISS/COMBINE)**£

At this stage of the project we had hoped to be able to use a hypertext set of nested recursively-callable high-level acronyms to enable single character representation of a complete copy of ECN. As you can see, although excellent progress has been made, we still have some way to go. Work on an AID (Acronym Inverting Device) for the acronymically disabled will be given high priority in the next grant application round.

Seriously folks, if we are concerned about communicating shouldn't we at the very least give the readers a chance to keep their heads above the acronymic flood by presenting titles in full whenever they are introduced? I for one would appreciate the extra effort involved.

George Wilson, Polytechnic of the South West

Editor's Note: Although acronyms are spelt out as far as is practicable, an explanation on page 1 does nothing for a similar acronym on page 5, for example, so this is a continuing problem. I would welcome readers' suggestions before deciding on a solution.

AI for Engineers

Tool Evaluation

An evaluation of MUSE has been undertaken, which we intend to release in February. MUSE is an AI toolkit designed for real-time applications. It includes rule-based and object-oriented programming, a general AI programming language (PopTalk) and agenda-based control. The system runs on Sun workstations.

Prototype applications in MUSE exist for data fusion, command and control, on-board fault diagnosis on helicopters, monitoring and control of a paint plant, and flight monitoring. Prototypes currently under development include computer network management, control of manufacturing cells and applications in electronics manufacture.

Technology Tutorial in Qualitative Reasoning

Due to popular demand we will be repeating this one-day Technology Tutorial in Qualitative Reasoning on 7 March 1991 at AIAI in Edinburgh. Qualitative Reasoning primarily involves predicting the behaviour of a physical system using only non-numeric values, but preserving all important behavioural distinctions, starting from a structural description of the system. Dr Brian Drabble, a member of the Knowledge Based Planning group at AIAI, will be presenting the tutorial.
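As a toy illustration of that idea (ours, not taken from the tutorial material), a sign algebra can predict the direction of change of the water level in a tank from the signs of its inflow and outflow, with no numeric values at all:

```python
# Qualitative reasoning sketch: values are only signs, yet the model
# still makes behavioural predictions, flagging the cases where
# magnitude information would genuinely be needed.

POS, ZERO, NEG = "+", "0", "-"

def q_add(a, b):
    """Qualitative addition over signs; opposite signs are ambiguous."""
    if a == ZERO:
        return b
    if b == ZERO or a == b:
        return a
    return "?"  # ambiguous: magnitudes would be needed to decide

def q_negate(a):
    return {POS: NEG, NEG: POS, ZERO: ZERO}[a]

def level_trend(inflow, outflow):
    """Sign of d(level)/dt = inflow - outflow, computed qualitatively."""
    return q_add(inflow, q_negate(outflow))

print(level_trend(POS, ZERO))   # '+'  filling
print(level_trend(ZERO, POS))   # '-'  draining
print(level_trend(POS, POS))    # '?'  cannot tell without magnitudes
```

The ambiguous case is the characteristic feature of qualitative simulation: the formalism reports every behaviour consistent with the structural description rather than committing to one.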

Training

Terri Lydiard, AIAI

Forthcoming Events

EASE Technical Reports

Training Courses

PEVE Training Unit, MANCHESTER