Engineering Computing Newsletter: Issue 27
- The EASE Software Environment
- The CTI Centre for Engineering at QMW
- Computational Fluid Dynamics Community Club
- Superworkstation Assessment
- Letter to the Editor
- Software Quality Assurance in an Engineering Research Environment
- AI Support for Engineers
- Post Experience Vocational Education in IT Unit - Courses
- Forthcoming Events
This edition includes another selection of articles, offering a wide choice of subjects. Of particular note are the Superworkstation Assessment and the information on Software Quality Assurance, topics that are usually popular because they are so useful. Please keep me informed of your favourite topics.
The programme for your EASE 91 Conference is nearing completion and will be available next month. In the meantime you may like to know that the keynote speakers include Professor M Wozny (Rensselaer Polytechnic Institute, NY), Dr C B Besant (Imperial College) and Professor J Douce (Chairman of the Computing Facilities Committee of SERC).
The EASE Software Environment
Nearly a thousand of you will by now have been interviewed during Engineering Applications Software Environment (EASE) visits to your establishments.
You may recall being asked questions about what you would like to see in an EASE software environment. Those answers, together with input from many of the EASE workshops held over the past few years, have been used as the basis for a proposal which has been seen by the Computing Facilities Technical Advisory Group and will be discussed by the Computing Facilities Committee in late September. If these proposals are accepted it will mean that something like a third of the EASE manpower at Rutherford Appleton Laboratory will be devoted to this project over the next few years. To a large extent this will not involve new activities, so much as bringing together much of the work already in hand.
It is expected that the next issue of this newsletter will contain an overview of our proposals and subsequent editions will explore some aspects in more detail. We will also be holding a seminar in January 1991 where potential users are encouraged to come along to discuss what we are doing, and offer guidance about the further evolution of the project.
The EASE Software Environment will, of course, be extensively discussed at EASE 91, in Birmingham on 25-27 March 1991.
Ken Hartley Rutherford Appleton Laboratory
The CTI Centre for Engineering at QMW
Factors Influencing the Use of Computers in Undergraduate Engineering Education
In the summer of 1989, the CTI Centre for Engineering sent a questionnaire to departments of engineering within UK universities. The Centre has ascertained the extent of computer use and the types of hardware and software being used by academic engineers in their teaching, and has elicited the feelings and opinions of these engineers towards computer based learning in engineering education. Several reports on these findings have been published. Because it is an aim of the CTI Centre for Engineering to promote the use of computers in undergraduate engineering education, respondents were given the opportunity to write in any items of software that they wished existed or wished that they could have in their teaching, items of software or hardware that they now reckon to have been a poor choice, and factors which keep them from making greater use of computers in their teaching. Phrased in different ways, the replies from 76 questionnaires and our own experiences at QMW may be grouped under five headings: hardware, software, time, people and miscellaneous.
Eleven respondents wrote that they need more facilities in order to (more) effectively use computers in teaching. A further six remarked specifically that they need more hardware. In line with this was the mention of difficulty when there is not enough hardware for an entire class to use all at once. This was a contributing factor to problems with timetabling computerized sessions. A related desire was expressed for a monitor which can be viewed by an entire class. These exist, but are too expensive as yet to be in common use. Outdated hardware causes problems; presumably it is unable to run contemporary software. Hardware limitations were also cited by five respondents and a problem with compatibility also appeared.
A lack of suitable software was noted on ten replies. Software is often not pitched towards teaching novices or does not reveal the engineering behind the elegant computer solutions. Software can be time-consuming to learn to use. Often programs do not fit into the curriculum or are too trivial in nature to bother with. The poor quality of available software, the lack of cheap realistic programs, the poor design of some software for teaching (by which is meant, probably, poor user interface and difficulty of use) and a lack of complete descriptions and evaluations, all conspire to put lecturers off using computers in teaching. Two respondents admitted to wanting more expensive software; another wants more sophisticated software and a third simply said "more software" is required if more use is to be made of computers. Only three engineers replied that a lack of awareness of the availability or existence of software is a hindrance. Fifty questionnaires bore lists of software wanted for use in teaching. Many record specific packages, others mention functionality within existing packages, or desired connectivity between existing packages. Still others consist of vague lists such as Tutorial packages for teaching CAD. Counting plural entries like Packages for... as just 2, software desired comes to 157 packages or functions within packages. Even with the overlap in the unspecific lists, it must be concluded that a lack of particular software items is a major factor in preventing greater use of computers in engineering education.
Factors related to time were frequently reported. Timetabling is a major area of difficulty, with fifteen of the questionnaires citing it. If computing facilities are limited, the timetabling constraints are compounded. Viewing engineering computing as a separate component of the engineering curriculum, and perhaps a less important one at that, pushes it out of the timetable at some institutions. There is not enough time to develop suitable software or to learn how to use software from other sources. There is not enough time to organize or find out about software. "Lack of time" without qualification was put on seven questionnaires.
Thirteen questionnaires carried pleas for more staff to support computing activities, and for these staff to be departmentally based and full time. Stated differently, one respondent wrote that existing staff feel that the effort required to use computers in teaching is too high. Lack of interest by existing staff in a department was mentioned as a hindering factor by seven, and lack of staff familiarity with computing as a problem by five. Staff conservatism is also a factor. One questionnaire stated that senior staff were especially reluctant to engage computers in their teaching. Staff in other responding departments were said to be unconvinced of the pedagogic value; a few thought that using computer simulations might be detrimental to teaching, and one wrote that there is a feeling that undergraduates need to experience real engineering systems in preference to computer models or simulations. In one department, staff believe that they are already making the best use of computers and do not wish to increase it. Lack of awareness of packages and a lack of confidence in using packages were also cited. Apathy among students was mentioned by two departments.
A selection of problems and suggestions fit into this category. Among the problems are poor quality documentation, a lack of regard for teaching as compared to research, the fact that software cannot be purchased from special equipment grants, difficulties in assessing computerized work done by students, burdensome annual maintenance fees and ongoing costs of running a computer unit, and a lack of cooperation from the computer services department. One respondent stated that computer use for teaching civil engineering is not well established. There is a call for departments to collaborate in buying expensive software, and for more national software purchasing schemes like ECAD in the UK, which has made electrical engineering software available at special prices to the academic community. And finally, leaving the obvious till last: money. Direct reference to a lack of this wonderful stuff prevents at least fourteen departments from engaging in more computing. Lack of this resource also underlies many of the obstacles outlined above: facilities, software, hardware and additional staff posts all cost money.
The CTI Centre for Engineering is meant to be an information resource to stimulate the use of computers in teaching, and would therefore like your views on how this might be achieved, or on any of the matters raised above.
Deborah Pollard, Queen Mary and Westfield College
Computational Fluid Dynamics Community Club
The first workshop of the CFD Community Club was held at Rutherford Appleton Laboratory on Thursday 19 July 1990. It was chaired by Dr A D Bryden (RAL) and was attended by more than 70 people from industry and higher education. The purpose of the meeting was twofold: first, to review the current state of and future trends in visualisation, and secondly, to give examples of the effective use of visualisation in CFD. The meeting consisted of a variety of presentations on visualisation and some short video examples of visualisation in CFD.
Mr J R Gallop (RAL) gave a survey of the available software for visualisation in CFD and of the methods of data interpretation and presentation. He reviewed some graphics libraries, e.g. GKS, PHIGS and PHIGS+, and gave examples of many visualisation systems in use today. Mr Gallop concluded by investigating the Application Visualisation System (AVS) and SunVision in more detail.
Dr D M Causon (Manchester Polytechnic) presented results on visualisation in high speed external aerospace problems. He illustrated the use of a Silicon Graphics Iris workstation in the visualisation of results from a range of three-dimensional Euler and Navier-Stokes solutions to problems such as a forward facing step, a muzzle blast, the HERMES shuttle orbiter and a cruciform missile configuration.
Mr B McNamara (Leabrook Computing) considered the design of CFD graphics for a teraflop computer. He outlined the evolution of such a computer and addressed the implications this would have for the interpretation of results and storage of the large amounts of data generated by such a computer. Estimates of total computation per time step were given and he discussed the use of disk and video graphics farms based around existing technology. This presentation provoked some lively discussion which continued over lunch.
Dr C Hill (Imperial College) discussed a strategy for the design of visualisation tools. In this he focused on how a visualisation system supports the needs of CFD users, using the SPEED code from Imperial as a case study. Dr Hill identified the three main components of a design strategy as a visualisation toolkit, the computing environment and data management. He finished by reporting the progress made so far and the work planned for the next six months.
A two part presentation from the Oxford University Computing Laboratory followed. In the first part Ms A Ryan showed how it was possible to achieve visualisation on a Sun 3 workstation using the PHIGS library. She identified some of the problems inherent in the visualisation of results and indicated how these could be overcome without recourse to expensive graphics systems. Dr D Handscomb discussed a method for recovering the incompressible flow pattern, given a finite element fluid-flow computation that results in the values of the total flux across each element boundary. The method was presented in detail for the two-dimensional problem, and difficulties associated with its generalisation to three dimensions were discussed.
After the tea break, Mr C F Connolly (Rolls-Royce) gave a short talk on the visualisation work at Rolls-Royce and introduced a video presentation on their Graffiti system. This was followed by a short video from Mr McNamara which had been referred to in his presentation in the morning session. The presentations concluded with some excerpts from the NASA Ames video about their PLOT3D visualisation software.
The chairman then brought an informative and enjoyable meeting to a close, and announced that the next workshop would be held on 15-16 November at Cosener's House, Abingdon on Accuracy in Numerical Modelling in CFD. A copy of the presentations at the workshop can be obtained by contacting me.
Conor Fitzsimons, RAL
Superworkstation Assessment
Workstations which deliver exceptional computational and graphics performance are becoming available from several suppliers and offer the potential to be an indispensable tool for solving demanding engineering problems. As well as having fast 3D graphics, these workstations (one term is 'superworkstations') can run engineering calculations of significant size, offering over 5 LINPACK double precision MFLOPS and 100K polygons per second.
Typically systems in this class have vector processing capability and a limited degree of parallel processing (up to 8 processors). However systems which appeared to offer comparable performance without these techniques were not excluded. Several suppliers are now able to offer superworkstations and this technology is likely to be marketed in physically smaller and cheaper packages. CFTAG requested RAL to assess superworkstations and report to CFTAG and to the EASE community.
After a preliminary planning phase, active assessment took place between February and June, reporting to CFTAG in June.
The assessment needed to study the systems' computational, local networking and graphical capabilities. To be considered for assessment a superworkstation should offer Unix, Fortran 77, C, Ethernet, 3D graphics software and the X11 Window System. The supplier's range should offer a system that has at least 32 MBytes of main memory, 500 MBytes of disc and 24 planes available for true/direct colour display. To focus the assessment, an upper limit of £150K list price was set for a system with the facilities just discussed.
The suppliers who took part in the assessment are shown in table 1 below. The basic details of their hardware and the systems benchmarked are also shown in table 1.
Model benchmarked                  ESV 30   DN 10000 VS   -     4D/240   ST 2000   ST 3000
Maximum number of processors       1        4             1     8        4         4
Number of processors benchmarked   1        3             -     4        4         2
Maximum size of frame buffer       88       80            99    268      32*       70
* The Stardent ST 2000's graphics architecture requires fewer bit planes as described later.
Megatek recommended that the Sigma 70 model 4300 be benchmarked, but as it was a new product it was not possible in the timescale of the assessment.
Stardent currently offer two ranges: the Stardent ST2000 (ex-Stellar) and the Stardent ST3000. RAL already possesses an ST2000. The Stardent ST3000 only became available during the timescale of the assessment, so only limited benchmarking was possible.
Characteristics and Performance
All suppliers stated that they offer Fortran 77, conforming to the ANSI standard. Hewlett-Packard, Silicon Graphics and Megatek stated that their Fortran 77 offerings are also validated. All suppliers offer C.
Several benchmarks were run to establish performance in practice. Both artificial and application software suites were run. The application software included mesh generators, equation solvers and morphological image processing software. The assessment programme was designed to find out the scalar performance of the systems, and also the performance of the more advanced architectures, using vector and parallel processing, on real problems without changes to the application software. Some of the application software was already suitable for a vector processing architecture.
On each superworkstation tested, there are several optimisation levels. In addition to the conventional optimisation options, other options allow the compilation of code to take advantage of vector pipelines and parallel processors where available. Each complete program was compiled with each possible option, the results analysed and the best results used. The results confirmed that the highest option could not be relied upon to produce the best performance.
In general, the Silicon Graphics 4D/240 gave a consistent performance; it usually came second in these tests. The Stardent ST 2000 gives excellent performance where the application being ported is well matched to the architecture. This could be achieved without changing the application source code.
Aspects of graphics hardware such as the number of bit planes and hardware features need to be treated with care. The Stardent ST2000's graphics processor performs many of its functions in main memory, compared with other systems which use the frame buffer. The main memory and frame buffer space requirements are therefore balanced differently on the ST2000 compared with the other systems.
When considering graphics software, a frequent reason for purchasing superworkstations is likely to be their computing and visualising performance on 3D and higher dimensional problems. For many situations, the base graphics requirements fall within the scope of existing graphics standards (GKS or PHIGS) particularly when application software is migrating from other systems. For other situations, the graphical requirements include the manipulation and display of complex models and representations of data such as may be found within the developing graphics standard PHIGS PLUS (earlier versions were referred to as PHIGS+). Beyond that, volume visualisation will require techniques and programming interfaces still being developed. GKS is available on most systems. PHIGS or PHIGS+ is offered in some form on all systems, either directly or as third party. For the 3D graphics benchmark, PHIGS+ or an alternative was requested. PHIGS+ was offered in only two cases (on the Stardent ST2000 at RAL and on the Evans and Sutherland ESV30) and there was missing functionality in both cases. The impression is that PHIGS and PHIGS+ products on superworkstations are not yet mature. It was encouraging to observe that the 3D graphics performance of PHIGS+ on the Evans and Sutherland ESV30 was superior to the systems which offered proprietary graphics software.
3D graphics performance            *****    ***    -     ****    ***      -
Application performance: scalar    ****     ***    -     ****    **       -
Application performance: vector    ***      *      -     ****    *****    -
All systems offer the X11 Window System, with Evans and Sutherland offering PEX in addition. Table 2 is an attempt to summarise the general characteristics and performance of each machine, on a scale * (poor) to ***** (excellent). A - indicates an apparent lack of the feature or where benchmarking was not performed for reasons already described. The assessments are non-competitive and do not represent a ranking.
CFTAG considered the report at the June meeting. It decided to seek advice from AGOCG on the criteria for graphics software before it could decide on which systems could be recommended at its next meeting.
When the full report is published shortly, it can be obtained on request.
Julian Gallop Rutherford Appleton Laboratory
Letter to the Editor
In Issue 25, Deborah Pollard refers to a non-commercial finite element package called PINEL and suggests that it might be an in-house name adopted by several different university engineering departments for completely separate programs. The IBM PC User Group has a program called PINEL available in its software library, written by Mike Bailey of MFB Software Ltd, and a review may be found in Connectivity, Vol 7, No 5, p33 (May 1990). It is just possible that some engineering departments are actually using this program, seeing as it is available as shareware. On balance, however, Deborah's explanation seems more likely!
R I Woods, University of Surrey
Software Quality Assurance in an Engineering Research Environment
This article is a summary of a report being prepared on the tools available, and their application within the engineering research community, in support of Software Quality.
Hundreds of tools, many of them methodology-dependent, are now available to support different phases of software development, and all can be considered to contribute, to some degree, to software quality. There is also a class of tools which directly supports the Quality Assurance function and which is applicable, in most cases, irrespective of the development methods. These are the tools for the validation, verification and testing of software systems. It is this class of tools that we are interested in.
There are different ways of classifying these tools. One way is by whether the software system needs to be executed, i.e. into dynamic and static analysis tools. Dynamic testing tools have the longest history. These tools essentially instrument the software under test with software probes, which has the effect of making the program self-measuring. Significant events during testing of the software can be selectively recorded for analysis. The analysis of the results can take different forms: for example, to show that testing conforms to a certain standard of test coverage, or to enable the designer to carry out time optimisation of the program. In regression testing, the results can be compared with the results from previous tests. Related to these are the capture/playback tools, in which manual input is recorded for automatic test execution later.
Another useful dynamic tool is the assertion processor. Assertions are placed in the programs at critical points. Normally, these are treated as comments. When used with the processor, these assertions are tested during execution for violation. Appropriate actions can then be taken when a violation occurs.
Static analysis tools do not rely on an execution of the system, but carry out a scan of the source code. Included in this classification is the program prover: a program which compares the specification (written in discrete mathematics) with its implementation. On the more practical side, static analysis tools carry out analysis to, for example, detect data anomalies, unreachable code and excessive control or data complexity, or to detect non-conformance to standards.
Our initial finding is that the engineering research community, in general, makes very little use of these tools. Perhaps the misconception within the community is that these tools are difficult to use, time-consuming and costly. We have found that, with the exception of static tools associated with formal methods (e.g. the program prover), most of the tools are relatively easy to use: learning time for most of them is of the order of one or two days. Perhaps the only obstacle to more widespread use is the cost.
The advantages of using a tool are a substantial increase in confidence in the product as well as a saving in testing time. Dynamic testing tools automate the testing process and carry it out with greater completeness. The saving in time is most appreciated in regression testing using a capture/playback tool.
We have information on the following tools:
- Unix Workstation
- IBM PC-Compatible
- SOFTPROBE II
- DEC VAX
- VAX DEC TEST MANAGER
- VAX PERFORMANCE AND COVERAGE ANALYSER
We are also interested in hearing from anyone who has experience in using the above or similar tools.
An EASE Seminar, Verification, Validation and Testing Tools, will be held on Wednesday 14 November at Napier Polytechnic (Merchiston site), Edinburgh.
Teo Goh and Chris Sinclair, Napier Polytechnic
AI Support for Engineers
An evaluation of KAPPA PC Version 1.0 has been added to the series of technical reports on AI tools prepared by the Artificial Intelligence Applications Institute, University of Edinburgh, as part of the AI Support for Engineers project.
KAPPA PC is a PC-based tool for the development of object-oriented and knowledge based systems. It is written in C, runs under MS-Windows and requires only 640K. Although primarily an object-oriented programming system, it also allows functional programming and rule-based reasoning. KAPPA PC possesses a good development environment incorporating a number of graphical display tools and structured editors. Interfacing to external software or data is provided and the facilities of KAPPA PC can be extended using the C language.
This report is now available.
Strategic Research Issues in AI in Engineering
8 October 1990
A last minute reminder about this workshop which is the first of two one day events to be organised by the Community Club for AI in Engineering, in conjunction with the IEE. It stems from an initiative to promote the development and application of AI tools and techniques within Engineering applications. It is hosted by Professional groups C4 (Artificial Intelligence) and C8 (Control and Systems Theory) and will be held at IEE, Savoy Place, London on Monday 8 October 1990.
The 1990/91 training programme is about to commence at AIAI. The course schedule for 1990 is given below. Over the summer months the courses have been rationalised and the practical training facility has been re-equipped. Thanks to support from SERC via their EASE Programme, some training places are available free of charge to academic engineers. However, booking is essential to ensure your place.
- Knowledge Based Systems Skill: 1-5 Oct
- Knowledge Elicitation (fully booked) : 8-9 Oct
- Knowledge Engineering : 10-12 Oct
- Planning & Scheduling : 15-18 Oct
- Common Lisp : 2-26 Oct
- Introduction to Prolog : 29 Oct- 1 Nov
- Parlog : 12-14 Nov
- Advanced Prolog: 26-29 Nov
Terri Lydiard, AIAI
Post Experience Vocational Education in IT Unit - Courses
October to December 1990
- UNIX Introduction : 15-16 Oct
- C++ Programming : 22-24 Oct
- Communications: An Awareness Course : 30 Oct
- Introduction to Object-Oriented Programming Concepts : 5 Nov
- Introduction to Object-Oriented Programming using Smalltalk-80: 6-8 Nov
- TCP/IP Protocols: A Practical Introduction : 13-14 Nov
- UNIX System Administration : 15-16 Nov
- Common LISP : 19-21 Nov
- An Overview of Structured Methods: 23 Nov
- The C Programming Language: 3-5 Dec
- An Introduction to Formal Specification Using Z : 3-5 Dec
- UNIX for Users: 11-12 Dec
All the above courses will be held in the Department of Computer Science at the University of Manchester.
All practical work for the courses is carried out on Sun SPARC stations in the new PEVE Training Suite.
Martyn Spinks, University of Manchester
Forthcoming Events
- Object Oriented Techniques for Engineers 16 October 1990, City University
- Engineering Data Integration - Problems and Solutions 7 November 1990, Loughborough University
- Computational Fluid Dynamics Community Club 15-16 November 1990, Abingdon, in conjunction with the Institute for CFD.
- Verification, Validation and Testing Tools 14 November 1990, Napier College, Edinburgh.
- Basic Software Development Tools for the Fortran Oriented Engineer 6 December 1990, Rutherford Appleton Laboratory.
This seminar is designed for Fortran users who currently use only an editor and a compiler as their software development tools. Its aim is to inform such users about the other elementary tools available to assist them in the production and maintenance of their Fortran codes.
- Basic Software Management Tools (SCCS and Make)
- Kent Software Tools
- An Overview of the LDRA Testbed System
- QA Fortran: Developing and Maintaining Reliable Code
- Using a Software Metric Tool to Drive Quality Improvement in Third Party Software. A Case Study
- Toolpack 1: An Introduction for Engineers
- Spag/Forcheck: Tools for Software Maintenance
- EASE 91 25-27 March 1991, University of Birmingham
- Designs for a Global Plant Species Information System 12-16 October 1990, Greece. An international symposium for the exposition of a range of designs for a global species diversity information system for plants. Such a design should enable scientists in all countries to access information on the names, classification and geographical distribution of all the world's plants. The designs would involve technical aspects both of biological computer information systems and of decision-making amongst taxonomists. Assessments of the type of demand from conservation, agroforestry, natural products research and other research applications will be set in scenarios for implementing such a system.
- Software Engineering for Real Time Systems 16-18 September 1991, Cirencester
ERCIM: Large Scale Scientific Parallel Computing
November 20-23, 1990 Amsterdam, The Netherlands
Introduction: Large scale scientific computing and parallelism
Modern super (vector-) computers like the CRAY Y-MP, the NEC SX-models and the Fujitsu VPs play an important role in scientific research, and these machines are of vital interest to scientists and engineers working in the application areas (like physics, chemistry, meteorology, oil exploration and computational fluid dynamics). As it turns out, many real-life problems are more suited to numerical simulation on a supercomputer than to experimental analysis in the laboratory, because physical experiments are prohibitively costly, dangerous, or impractical, or because the phenomenon being studied is too small, large, slow, or fast to be easily observed at reasonable cost. In order to meet the ever growing needs of the scientific and engineering computing community, CRAY and NEC have recently started to market multiple processor versions of their machines, with up to eight CPUs. Consequently, coarse-grained parallelism of algorithms running on these machines is becoming increasingly important, in addition to the fine-grained parallelism faced in the vectorization of numerical algorithms on one processor of a supercomputer. Parallelism is also encountered on mini-supers like those of Convex, Alliant, Sequent and Encore.
At present, massively parallel machines like the Connection Machine, with thousands of processors, and transputer systems with hundreds of processors, are being studied intensively as a possible alternative to parallel vector computers. However, programming massively parallel machines is still much more time-consuming than programming modern supercomputers. Time will tell which of the two approaches will ultimately be most successful for broad classes of scientific and engineering problems.
The purpose of this course is to teach (prospective) researchers and engineers how to exploit modern parallel (vector-) computers in order to produce efficient codes on these machines. Experience has shown that gaining know-how in the field of vector and parallel computing can be a time-consuming activity. Participants can benefit from the insight, knowledge and experience of the course teachers, and thereby gain an easier entry to the field.
After some basic topics (survey of commercially available architectures, programming techniques for vector and parallel computers, performance analysis and benchmarking, basic software, portability, data organization, numerical libraries), important numerical techniques for vector and parallel systems will be discussed, like the direct solution of linear systems of equations, iterative sparse matrix techniques, and efficient block algorithms. Finally, applications in the fields of ordinary differential equations, shallow water equations, Navier-Stokes equations, and the factorization of large numbers will be treated.
On each day, after the last lecture, there is opportunity for participants to discuss their own case studies with the lecturers. CWI has access to various parallel (vector-) computer systems (Alliant FX/4, IBM 3090 with 6 VFs, NEC SX-2, Cyber 205, Cray X-MP), on which these cases can be implemented and tested.
- Program Day 1 (Tuesday, 20 November)
- Introduction and survey of commercially available architectures (H.J.J. te Riele, CWI)
- Portability / efficiency: a compromise (W.M. Lioen, CWI)
- Performance analysis and benchmarking (K. Dekker, TU Delft)
- Programming techniques for vector and parallel computers (H.A. van der Vorst, University of Utrecht and CWI)
- Day 2 (Wednesday, 21 November)
- Exploiting the Transputer (C.P. Wadsworth, Rutherford Appleton Laboratory, Chilton, UK)
- Parallel solution of linear systems of equations (W. Hoffmann, Univ. of Amsterdam)
- I/O and memory management (D.T. Winter, CWI)
- Day 3 (Thursday, 22 November)
- Efficient block algorithms on parallel computers (W. Jalby, INRIA, Paris)
- A parallel scientific library (M. Louter-Nool, CWI)
- Iterative matrix techniques (H.A. van der Vorst)
- Parallel solution of the shallow water equations (E.D. de Goede, CWI)
- Day 4 (Friday, 23 November)
- Parallel solution of the Navier-Stokes equations (G. Lonsdale, GMD, Bonn)
- Parallel solution of ordinary differential equations (B.P. Sommeijer, CWI)
- Parallel factorization of large numbers (H.J.J. te Riele)