Engineering Computing Newsletter: Issue 44
- Parallel Processing in Engineering Community Club being formed
- FTK - A Fortran Toolkit
- Modelling Fluid Flow using Vortex Methods
- 3D Visualization in Engineering Research Seminar
- Improving the Quality of Fortran Programs
- World Transputer Congress 1993
- Introduction to Human Factors Aspects of User Interface Design
- Forthcoming Events
Parallel Processing in Engineering Community Club being formed
Despite its relative lack of maturity, parallel processing is steadily gaining acceptance as a potentially cost effective solution to a range of computationally intensive engineering problems. Within the last year, a number of major established computer suppliers have announced new products incorporating parallel processing. Many of these are based on the new generation of powerful RISC processors linked together in a variety of ways. There are also increasing signs of convergence in the programming styles being supported.
An increasing number of engineering research grant-holders are proposing to use parallel processing techniques. With the rapid pace of development in parallel processing, and the wide and often confusing range of software techniques available, there is a growing need for a forum in which engineers from all disciplines can meet to seek guidance, compare notes and profit from each other's experiences.
To this end, a new Community Club on the topic of Parallel Processing in Engineering is being formed based on the successful model in other domains including Computational Fluid Dynamics and Visualization. The aims of this new Community Club will be:
- to bring together researchers seeking to exploit the potential of parallel processing in their own areas of research
- to provide a forum for the exchange of practical experience about applications of parallel processing
- to increase awareness of parallel processing and its potential through workshops, seminars, tutorials and courses
- to identify requirements which can be met through the provisions of the EASE Programme
- to encourage the emergence and use of relevant standards which will assist the development of software for parallel systems
The activities of the Community Club will be co-ordinated by a small Steering Group of representatives of the different engineering disciplines within the research community with an active interest in parallel processing.
The Parallel Processing Group at RAL will provide organisational and technical support for the activities of the Club. The Parallel Evaluation Centre established and operated by the Group will be available to members of the Club to try out a range of current affordable parallel hardware and software systems.
As a first step towards the formation of the Club, we are inviting engineering researchers with an interest in parallel processing to contact us, preferably by email, to register their interest. We will shortly be sending out a brief questionnaire to gather further information about current parallel processing activities within the engineering research community. If you would like to join the new Parallel Processing in Engineering Community Club please send a message including your address, phone and email or contact me directly.
C P Wadsworth, Head, Parallel Processing Group, Informatics Department
FTK - A Fortran Toolkit
The FORTRAN Toolkit (FTK) is a software engineering tool designed to operate at the level of source code. It may be used for a number of purposes, and its capabilities fall into four areas:
- source level error detection and correction
- configurable source code formatting
- static source code manipulations
- data usage checking and rationalisation
The intention of this article is to summarise the capabilities of FTK, so readers can decide whether they would find the functionality provided useful. A summary of the core capabilities of FTK is provided below. The possibilities for formatting source are endless in their variety and FTK provides a rich set of directives. This area is not addressed further here.
FTK has been developed by Simulation and Computer Consultants. It is available on a variety of platforms including VAX, Sun SPARC and high-end PC. You can obtain FTK by applying to Parsys.
FTK works in either batch mode (messages inserted as comments in the output code) or in interactive mode (messages appear on the user's terminal). In batch mode FTK works like a conventional compiler, except the output is processed FORTRAN source rather than an object file. In batch mode FTK can be used to screen large amounts of FORTRAN source, whereas in interactive mode FTK can be used to dynamically troubleshoot user-selected regions of code. Context sensitive help is available at any time in interactive mode.
Input takes the form of directives (commands), source files and specification files. Specification files, in turn, consist of lists of directives, source files and other specification files. Input details may be supplied:
- on the command line
- as interactive commands
Directives may additionally be supplied in the FORTRAN source itself in a comment notation. For example, to unwind a loop with static bounds:
C% UNWIND
      DO I = 1, 12
        UDOTT3(I) = UDOTT2(I)
      ENDDO
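To illustrate what the UNWIND directive produces, here is a toy sketch (in Python, not FTK's own implementation) of source-level unwinding of a DO loop with static integer bounds:

```python
import re

def unwind_loop(source_lines):
    """Unwind a Fortran DO loop with static integer bounds by
    replicating its body once per iteration, substituting the loop
    index.  A toy sketch, not FTK's actual implementation."""
    header = re.match(r"\s*DO\s+(\w+)\s*=\s*(\d+)\s*,\s*(\d+)",
                      source_lines[0], re.IGNORECASE)
    var, lo, hi = header.group(1), int(header.group(2)), int(header.group(3))
    body = source_lines[1:-1]            # everything between DO and ENDDO
    out = []
    for i in range(lo, hi + 1):
        for line in body:
            # replace whole-word occurrences of the index with its value
            out.append(re.sub(r"\b%s\b" % var, str(i), line))
    return out

loop = ["      DO I = 1, 12",
        "      UDOTT3(I) = UDOTT2(I)",
        "      ENDDO"]
for line in unwind_loop(loop):
    print(line)
```

The loop above expands to twelve assignments, UDOTT3(1) = UDOTT2(1) through UDOTT3(12) = UDOTT2(12), exactly the kind of transformation a compiler might do internally but which FTK makes visible at source level.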
Directives in general take the form of simple sentences constructed from keywords that may be abbreviated e.g. CHK ARG for CHECK ARGUMENTS.
Specification files are useful to encapsulate:
- long lists of FORTRAN files for large programs
- long lists of directives, for instance specifying all aspects of a particular in-house formatting style
The basic premise behind the error detection capabilities of FTK, is that the very nature of FORTRAN itself will cause the programmer to make unforced errors that most compilers will accept as valid code. For example:
      IF (FLAG2) THEN COLUMN = 0
      DO 10 I = 1.3
has precisely the same meaning as:
      IF (FLAG2) THENCOLUMN = 0
      DO10I = 1.3
and this is probably not what the programmer intended! More specifically, FTK checks for:
- the apparent use of FORTRAN keywords, auxiliary keywords and intrinsic functions as variable names
- spaces embedded in names or numbers
- statements which extend beyond column 72
Many of these situations do not necessarily indicate an error - indeed there are so many auxiliary keywords (such as STATUS and TO) in FORTRAN that an accidental use as a variable name is fairly common. Accordingly, directives can be used to switch on or off detection of any particular type of potential anomaly picked up by FTK.
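The root cause of several of these checks is that FORTRAN 77's fixed source form attaches no significance to blanks, so a mistyped loop header silently becomes an assignment. A hypothetical Python sketch (not FTK itself) of this kind of anomaly detection:

```python
import re

def classify(stmt):
    """Classify a fixed-form Fortran statement after blank removal,
    the way a Fortran 77 lexer effectively sees it.  A toy sketch of
    the kind of anomaly FTK reports, not FTK's own algorithm."""
    squeezed = stmt.replace(" ", "").upper()
    # a genuine DO statement needs a comma in its loop-control part
    if re.match(r"DO\d*[A-Z][A-Z0-9]*=[^,]+,", squeezed):
        return "do-loop"
    if re.match(r"[A-Z][A-Z0-9]*=", squeezed):
        return "assignment"
    return "other"

print(classify("DO 10 I = 1, 3"))   # the loop the programmer intended
print(classify("DO 10 I = 1.3"))    # an assignment to variable DO10I
```

With the comma, the statement is a loop; with a mistyped full stop, the whole of "DO 10 I" collapses into the single variable name DO10I and the statement compiles cleanly as an assignment.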
The consistency of calling arguments with formal arguments has traditionally not been checked by FORTRAN compilers. FTK will detect arguments of inconsistent types.
The directive CORRECT ARGUMENTS will correct calling arguments (in data type and size) to match formal arguments. Corrections to numeric, Hollerith and string arguments can be switched on and off independently. For safety, all changes are flagged to the screen or marked as comments in the code.
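A minimal sketch of the kind of comparison CHECK ARGUMENTS performs, assuming a hypothetical representation of argument types as strings (this is not FTK's internal design):

```python
def check_arguments(call_args, formal_args):
    """Compare the types of actual arguments at a call site against the
    formal arguments of the called subprogram.  A hypothetical sketch of
    the consistency check performed by FTK's CHECK ARGUMENTS directive."""
    problems = []
    if len(call_args) != len(formal_args):
        problems.append("argument count mismatch: %d vs %d"
                        % (len(call_args), len(formal_args)))
    for n, (actual, formal) in enumerate(zip(call_args, formal_args), 1):
        if actual != formal:
            # flag a type disagreement between caller and callee
            problems.append("argument %d: %s passed where %s expected"
                            % (n, actual, formal))
    return problems

# CALL SUB(X, N) with REAL X, INTEGER N, against SUBROUTINE SUB(A, B)
# which declares REAL A, REAL B
print(check_arguments(["REAL", "INTEGER"], ["REAL", "REAL"]))
```

A real tool must of course resolve the types from declarations and defaults first; the point is that the comparison itself is cheap once the symbol tables of caller and callee are available.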
The use of the same name for different objects may indicate an error. This can be detected by the CHECK NAMES directive. Again, checking for some of the less dangerous situations can be switched on or off.
The principal manipulation and listing directives are:
- EXPAND INLINE: expand all subprograms in line, working from the level of deepest nesting upwards
- SYMBOL LISTING: list the symbol table
- COMMON LISTING: list common addresses
- CALL STRUCTURE: list the call structure
The default sizes in bytes of the basic FORTRAN data types are alterable to make the porting of FTK simple between architectures. It is also possible to change the types of variables in an application.
The DATA DICTIONARY directive will produce output in four files with standard extensions as below:
- <xxx>_PAR.FTI: an include file containing the type and value of all PARAMETERs and the declarations of all structures
- <xxx>_CMD.FTI: an include file containing type and COMMON block declarations for all variables
- <xxx>_STF.FTI: an include file containing declarations of all statement functions
- <xxx>_BDA.FTK: a module file containing a single BLOCK DATA module holding all DATA statements
This facility is a valuable aid in documentation and maintenance, bringing a complete picture of data in the application under the one roof. The files could act as new rationalised include files for the application as a whole. In conjunction with COMMON LISTING, a definitive placing of all program data is provided.
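The bucketing behind such a data dictionary can be sketched as follows (a hypothetical Python illustration, ignoring statement functions and the finer points of fixed-form parsing; not how FTK itself works):

```python
def data_dictionary(lines):
    """Scan fixed-form Fortran source and bucket data-related statements
    in the spirit of FTK's DATA DICTIONARY output files: PARAMETERs,
    COMMON declarations and DATA statements.  A toy sketch only."""
    buckets = {"PAR": [], "CMD": [], "BDA": []}
    for line in lines:
        stmt = line.strip().upper()
        if stmt.startswith("PARAMETER"):
            buckets["PAR"].append(line)      # constants -> _PAR include file
        elif stmt.startswith("COMMON"):
            buckets["CMD"].append(line)      # shared storage -> _CMD file
        elif stmt.startswith("DATA"):
            buckets["BDA"].append(line)      # initialisation -> BLOCK DATA
    return buckets

src = ["      PARAMETER (NMAX = 100)",
       "      COMMON /GRID/ X(NMAX), Y(NMAX)",
       "      DATA ICOUNT /0/"]
d = data_dictionary(src)
print(len(d["PAR"]), len(d["CMD"]), len(d["BDA"]))
```

Even this crude sort already yields the "one roof" view of an application's data that makes the real facility so useful for maintenance.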
The facilities of FTK are presented in a welcome, orthogonal manner. Virtually all the directives can be supplied on the command line, in the FORTRAN source, interactively, or in specification files; in other words, the same set of commands works in all circumstances. In short, FTK is flexible and easy to use.
The interface to FTK is currently textual. To compete with other software engineering tools, a graphical interface could usefully be provided. FTK is driven via a regular textual language, and ideally a UIMS (User Interface Management System) should be capable (with minimal change to the FORTRAN source that FTK is written in) of mapping these commands to a graphical interface.
My reservations regarding FTK are more to do with its placement as a product. Much of what FTK offers should arguably be provided by a good FORTRAN compiler, though some features, such as interactive use, are unlike anything a compiler provides. FTK has a well defined window of capabilities, one that may feel too narrow for a product. On the other hand, FTK is still under development and additional features are continually being added; data dependency analysis or FORTRAN 90 support are obvious candidates.
The author informs us that typically after each demonstration of FTK a number of requests are made to include additional features in later releases of the product. The author is pleased to respond to such requests by adding new features to meet specific user requirements. One recent example was the handling of the POINTER statement, a Cray FORTRAN extension, in FTK.
Loop unwinding is done by some compilers but not others, so the fact that FTK transforms FORTRAN at the source level means that some FORTRAN developers can now be presented with new facilities and functionality. However, incorporating (and paying for!) an additional software tool in your working set is a hassle.
One level of FTK operation is slightly above the lexical level. One cannot just dismiss FTK as low level, as this form of manipulation is precisely what is missing from most FORTRAN tools.
The other significant level of FTK operation is the name/argument checking and the production of a data dictionary. When I initially pick up a FORTRAN code, generally to parallelise it, the first thing I look at in order to gain understanding is the global data in the application. Here, I would find FTK exceptionally useful.
The facilities of FTK are largely for occasional use, when you are picking up an old code for maintenance or formatting a new code nicely towards the end of the development cycle. FTK helps sort out problems that arise through the deficiencies of FORTRAN. One could debate for hours the compatibility of FORTRAN with good software engineering practice, but in the real world FORTRAN is in use and bugs have to be fixed. It may be difficult to justify purchasing tools for problem solving alone. On the other hand, one could argue that FTK looks specifically in the areas where problems are likely to occur and consequently has a high rate of bug detection. Preventative maintenance, after all, is always cited as being the most cost effective in the long term. Our own experience is that ALL the FORTRAN codes we have tried with FTK have thrown up a large number of dubious practices. Most have been safe, but several have indicated serious potential problems which may or may not be affecting the execution of the application adversely. The code may be working on the current platform, but the inconsistencies of the type FTK detects are those which tend to stop FORTRAN executing properly when ported to another platform.
More expensive FORTRAN source manipulators with a more extensive range of capabilities exist, such as SPAG (around 1000 pounds). As an example of what FTK does not do: it does not pick up the uninitialised variables that long-established FORTRAN codes tend to feature! The cost of FTK will probably be recouped when you consider the cost of manually finding just a few inconsistencies in FORTRAN code. The use of a tool like FTK should be seriously considered if you spend a considerable amount of time on the development or maintenance of large FORTRAN codes.
The Parallel Evaluation Centre at the Rutherford Appleton Laboratory exists to support the academic community by providing information and impartial advice on Parallel Processing and related matters. Please contact the staff of the Centre through: Virginia Jones.
For more on FTK contact: Chic McGregor at Parsys.
Modelling Fluid Flow using Vortex Methods
The SERC CFD (Computational Fluid Dynamics) Community Club held a one day workshop on Modelling Fluid Flow using Vortex Methods on 17 March 1993 under the chairmanship of Prof Peter Stansby (Manchester). It was attended by 33 academics.
Dr C Greenough (RAL) welcomed the delegates to a specialised subject area which possibly does not get enough discussion.
Prof Stansby opened the meeting and expressed the wish that it would be one of several held at Manchester. He proceeded to give a brief historical perspective of the vortex method, from its first use in the 1930s with hand calculation to the present-day use of supercomputing. He briefly described the Random Vortex Method, a fractional-step solution of the vorticity equation first suggested by Chorin (1973) for 2-D flows.
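For readers unfamiliar with it, the fractional-step idea behind Chorin's Random Vortex Method can be sketched in a few lines: convect point vortices in the velocity field they induce on one another, then model viscous diffusion by a Gaussian random walk of variance 2νΔt per coordinate. The following Python sketch is illustrative only (a direct O(N²) Biot-Savart sum, no boundaries, hypothetical parameter values):

```python
import math
import random

def rvm_step(vortices, nu, dt):
    """One fractional step of a 2-D random vortex method in the spirit
    of Chorin (1973): convection by the induced (Biot-Savart) velocity,
    then diffusion by a random walk.  vortices is a list of
    (x, y, gamma) point vortices; a minimal sketch, not production CFD."""
    sigma = math.sqrt(2.0 * nu * dt)       # random-walk step for diffusion
    new = []
    for i, (xi, yi, gi) in enumerate(vortices):
        u = v = 0.0
        for j, (xj, yj, gj) in enumerate(vortices):
            if i == j:
                continue
            dx, dy = xi - xj, yi - yj
            r2 = dx * dx + dy * dy
            # velocity induced at vortex i by point vortex j
            u += -gj * dy / (2.0 * math.pi * r2)
            v += gj * dx / (2.0 * math.pi * r2)
        new.append((xi + u * dt + random.gauss(0.0, sigma),
                    yi + v * dt + random.gauss(0.0, sigma),
                    gi))
    return new

# two co-rotating vortices in a slightly viscous fluid (illustrative values)
state = [(-0.5, 0.0, 1.0), (0.5, 0.0, 1.0)]
for _ in range(10):
    state = rvm_step(state, nu=1e-4, dt=0.01)
print(state)
```

Circulation is carried unchanged by each vortex; only the positions evolve, which is what makes the method attractive for grid-free simulation of vorticity-dominated flows.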
Dr Peter Bellamy-Knights (Manchester) presented a computational vortex method for low Reynolds number bluff body flows. He described a method for the numerical solution of the Navier-Stokes equations, with stream function and vorticity as the dependent variables, for the impulsively started, unsteady, two-dimensional flow about a circular cylinder in a uniform stream of viscous, incompressible fluid. The convection process is treated by the well known Cloud-in-Cell (CIC) method. The approach adopted for the diffusion process, however, starts with all vorticity being represented by point vortices. These diffuse as Oseen vortices while they are convecting as point vortices. At the end of the timestep, the vorticity of each vortex is re-distributed onto the surrounding grid points and represented by new independently moving point vortices. This approach gives an economical numerical algorithm for solving the Navier-Stokes equations over a range of Reynolds numbers (20-500), covering both symmetric steady wakes and periodic oscillating wakes.
Prof Mike Graham (Imperial College) described a method for solving the Navier-Stokes equations for unsteady incompressible viscous flow using a vortex based method. In this method the diffusion of vorticity is evaluated by finite differences on the mesh whereas convection is represented by a Lagrangian moving vortex particle method. The method may be regarded either as a viscous extension of the Cloud-in-Cell methods or as a finite difference Navier-Stokes solver which uses Lagrangian representation of convection. The final part of the presentation dealt with extensions of the method to three-dimensional flows using a vorticity-velocity formulation together with moving vortex particles known as vortons.
Prof Lewis (Newcastle upon Tyne) made a presentation describing the boundary integral vortex element model for potential flow, including its extension to the turbomachine cascade and meridional flow problems. The extension of this to viscous flows by vortex dynamics modelling was also outlined, including some recent studies of ring vortex diffusion and a scheme being developed for dealing with axisymmetric body, duct and annulus flows.
Dr Martin Downie (Newcastle upon Tyne) described two examples of Discrete Vortex calculations in engineering applications. The first problem considered was the viscous damping of bodies floating in waves with six degrees of freedom. The viscous forces acting on the body can be evaluated by matching an interior flow solution, computed by the Discrete Vortex method, to an external boundary integral solution of the global potential flow. The method can be shown to give good results for bodies with sharp edges, rounded edges and bilge keels. He showed how the two-dimensional flow about a circular cylinder with a protuberance is simulated using the Discrete Vortex method. The results are used, in conjunction with the limited experimental data available, to estimate increases in the base shear of a jacket structure in a current and in waves. Finally, Dr Downie described the use of parallel algorithms implemented on transputers to speed up the calculation times.
Dr David Summers (Napier) presented a method for generating vorticity at a solid boundary which consists of closed-loop filaments imposing the condition of no-slip at the boundary. The method is based upon constructing a Hertz potential Τ(r) orientated normal (n) to the surface, such that the vorticity ξ is given by ξ = (∇ × Τ(r)).n. Some indication of how the method works, and of the numerical problems which must be overcome to make it work, was given by looking at some test cases. Finally, Dr Summers drew attention to recent developments in reformulating the three-dimensional vortex method in variables which constitute a Hamiltonian system, and to the useful implications this has for the present work.
The final session of the day was a panel discussion where the lecturers stimulated a lively debate about the relative merits of vortex methods versus other conventional methods and about the need for 3D visualization for vortex methods.
A copy of the presentations at the workshop can be obtained from the names below.
Debbie Thomas, Manjit Boparai, RAL
3D Visualization in Engineering Research Seminar
Rutherford Appleton Laboratory, 24 March 1993
This seminar, chaired by Mr J R Gallop and Mr W T Hewitt, was attended by over 50 delegates, with the entire seminar being recorded on video.
The aim was to demonstrate 3D input techniques and the production of 3D output with today's technology.
The day started with a presentation of novel hardware for true 3D input: 3D scanners (Mr S Crampton, 3D Scanners Ltd). The scanners capture the 3D surface of an actual object by measuring the spatial coordinates of a large number of points on it, and are widely used in a range of applications: product design, visualization/animation, multimedia, inspection/analysis, etc.
From 3D input methods and devices the seminar continued to cover 3D output. A large part of the seminar was devoted to displays, ranging from what additional information is needed to give a 3D impression from a 2D image, to methods that produce a 3D image on a flat screen.
Mr Hewitt (University of Manchester) tried to convince us that 3D can go into a 2D image.
Techniques suggested included:
- context of the image
- perspective projection
- texture of an object
- motion of an object
- illumination of the scene
- depth cueing
- multiple views
- shadows and stereo vision
Use of the stereo effect to produce 3D output on a 2D screen was covered by two talks: Mr I Sexton (De Montfort University) presented and compared all currently known techniques for producing a stereo image, while Dr N Dodgson (University of Cambridge) demonstrated the Cambridge Autostereo Display (3D TV).
Prof N J Phillips (Loughborough Univ of Technology) described hardcopy output in the forms of holograms.
From 3D display technologies the seminar went on to the making of physical copies of 3D computer-generated objects, the method known as stereolithography (Mr A Graves, 3D Systems Inc Ltd). With this the circle was closed: from real 3D objects that are scanned and the coordinates of points on their surfaces entered into a computer, via ways of representing 3D scenes on 2D displays using extra information, stereo and holography, we arrived at producing real 3D objects from 3D computer representations.
Attendees were able to see demonstrations of almost all the 3D examples mentioned during the presentations: on display were various holograms, a Cambridge Autostereo Display rendering pictures and animations, and a selection of objects generated by stereolithography.
The seminar concluded with a discussion in which all speakers were invited to the panel and answered questions from the audience.
Rajka Popovic, Informatics
Improving the Quality of Fortran Programs
CFD Community Club Workshop, 27-28 July 1993
This two day event is aimed at those who wish to learn about the development of quality engineering Fortran software. It is primarily aimed at the CFD community but should also be of interest to researchers in other fields.
The event will be in two parts:
Part I is designed so that those who would like to get an overview of the subject can attend for just one day. The day will consist of presentations given by practitioners in the field. The topics to be covered will include maintainability, portability, static and dynamic analysis, and metrics. There will also be short presentations by the vendors of the software which is being made available in Part II.
Part II is intended to be attended in conjunction with Part I by those who would like to get a more detailed knowledge of the subject. Part II will consist mostly of hands-on sessions where delegates will be encouraged to bring along their own code to run through the packages which will be available.
Both parts of the event will be held at RAL with Part II taking place in the newly constructed training room, which houses 9 Sun Sparcstation IPXs networked to a server.
There will be three QA products available for use. These are:
- QA Fortran, Programming Research Limited, Esher, Surrey
- PlusFORT, Polyhedron Software Ltd, Witney, Oxon
- Testbed, Program Analysers, Newbury, Berks
Participants are invited to register for either Part I only or for both Parts I and II by sending back the insert as soon as possible. Both Parts will be limited in numbers, so you are advised to fax in your form to reserve your place.
Mrs Debbie Thomas, Informatics
World Transputer Congress 1993
The WORLD TRANSPUTER CONGRESS 1993 (WTC'93), sponsored by The Transputer Consortium (TTC), will be held simultaneously with the German national transputer conference, the TRANSPUTER-ANWENDERTREFFEN 1993 (TAT93).
The WORLD TRANSPUTER CONGRESS incorporates the Transputer Applications series of conferences organised by the UK SERC/DTI Transputer Initiative and the Transputing series of conferences organised by the worldwide Occam and Transputer User Groups. This combined event is the major international transputer conference of 1993 and is likely to attract over 500 delegates. It is fully supported by Industry, Commerce, Academe and the CEC (through the Human Capital and Mobility Euroconferences Programme).
The conference themes include: education and training issues, formal methods and security, performance and scalability, porting existing systems, parallelisation paradigms, tools, programming languages, support environments, standards and applications.
Applications include: embedded real-time control systems, workstations, super-computing, consumer products, artificial intelligence, databases, modelling, design, data gathering and the testing of scientific or mathematical theories.
There will be two main technical streams held at the Eurogress Conference Centre in Aachen, each preserving the distinct identities of WTC'93 and TAT93 and enabling coherent strands to be followed through both. WTC'93 papers will be presented in English, whilst TAT93 papers will be in German or English. There will be seven parallel sessions in both the WTC'93 and TAT93 streams. A single proceedings will be published, with the WTC'93 papers in English whilst the TAT93 papers will be published in the language of presentation. Selected papers may be published in both languages. The full programme will be available from mid June from the WTC Registration Office.
In addition to a Supplier Exhibition there will also be an Academic Demonstration Exhibition and a Book Exhibition with many of the major publishing houses participating. These will take place alongside the Congress in the Eurogress.
A range of tutorials will be presented on the two days immediately after the main WTC'93 conference, 23rd and 24th September 1993. Some of the tutorials are designed to introduce delegates to the transputer architecture and to highlight successful programming methodologies for transputer-based parallel systems. Other, more advanced, tutorials will provide specific discussions of application areas which are appropriate to parallelism and transputers.
- C Programming for Transputers
- This tutorial will include a survey of the various C compilers which target the transputer, basic message passing concepts, concurrency on single processors, communication between concurrent processes and multiprocessor concepts.
- Transputer Architecture and T9000 Overview
- This tutorial introduces the transputer architecture and then explores the new features provided by the T9000.
- Message-Passing Parallel Design Concepts
- This tutorial will present ways of creating excess parallelism, methods of deadlock elimination, how to reason about real-time throughput, strategies for buffering and routing, and how to incorporate the new T9000 virtual channel technology into older software designs.
- Software Engineering for Parallel Systems
- The aim of the workshop is to provide a forum for the exchange of information on current approaches to software engineering for parallel systems.
- Image Processing
- The tutorial will introduce many aspects of image processing and rendering on transputer-based architectures. It will review details of the T9000 transputer design which make it suitable for this type of task.
- Formal Methods in Transputing
- This tutorial discusses which techniques are most suited to the transputer environment, and provides examples of their use.
- Program Design Methodologies for Robotics and Control
- The first section of this tutorial provides a hands-on opportunity for delegates to design and implement a small control system. The second section will show how control software may be built in a modular fashion using a graphical process configuration tool, Software Through Pictures.
- Transputers in Communications Systems
- This tutorial highlights the opportunities which the transputer provides, and shows how the improved communications facilities and increased processing power of the new T9000 transputer facilitate further applications.
- FORTRAN for Message Passing Microprocessors
- A new generation of high performance parallel architecture, based on microprocessors with high-speed interconnect, has the potential of providing an affordable desktop resource for compute intensive applications. This course will guide attendees in the process of moving their FORTRAN codes onto such systems: presenting material on the general principles, particular techniques and on the product choices available. The tutors will provide information on the standardisation efforts in progress and the best systems to support the portability and future-proofing of source code.
Susan Hilton, Informatics
Introduction to Human Factors Aspects of User Interface Design
Rutherford Appleton Laboratory, 18 June 1993
This course is an extension of the one developed under the EEC COMETT Programme. It is designed to give an introduction to human factors issues in the design of the user interface. The course is being presented by SERC staff members at RAL.
Who should attend:
The course is aimed primarily at Computer Scientists and Engineers who wish to learn more about human factors in the design of user interfaces to computer systems. A degree of familiarity with computers is therefore necessary.
Overall Targets of Course:
- to outline the purpose of, and the techniques for, the introduction of Human Factors information and experimental methods into the design process (in the context of high-power technical workstations such as the Sun); this will, of necessity, involve some discussion of the various procedures of user interface design
- to use human visual perception as the main driving example, outlining the physiological and psychological processes associated with visual perception, from low-level mechanisms (such as retinal receptors and the lens) through to the cognitive processes involved in illusions
- to treat colour models and colour perception, drawing these units together by a consideration of the implications of this work for user interface design
- to build upon the previous targets to outline the phases of user interface design in the context of the modern user-interface methodology known as direct manipulation
- to cover User Interface Design Environments and how these tools can be used to design consistent user interfaces
- to conclude by discussing real-world examples of good and bad design, indicating how a combination of principles and information derived from human factors, together with practical experience and rules of thumb, produces truly user-natural interfaces
Ben Shneiderman (1987) Designing the User Interface, Addison-Wesley, ISBN 0-201-16505-8
Ronald M Baecker & William A S Buxton (1987) Readings in Human-Computer Interaction, Morgan Kaufmann, ISBN 0-934613-24-9
Forthcoming Events
- EUSIPCO-94: VII European Signal Processing Conference, 13-16 September 1994, Edinburgh
- ITTI Products Display: University of Warwick, 22 June 1993
- BMVC93: 4th British Machine Vision Conference, University of Surrey, 20-23 September 1993
- IEE Colloquium, The Teaching of Digital Image Processing in Universities, Savoy Place, 21 October 1993