
Engineering Computing Newsletter: Issue 54

January 1995

Engineering Decision Support

IT Awareness in Engineering Workshop

Informing Technologies to Support Engineering Decision Making

The latest event in the highly successful Information Technology Awareness in Engineering workshop series was held at the Institution of Civil Engineers in London on 21-22 November 1994. A pleasant venue and effective organisation contributed to the success of this well-attended event. As well as providing an opportunity for an exchange of ideas between engineers and IT specialists, it also offered a broader socio-cultural perspective: the importance of the human dimension for successful decision support systems.

Cover and Demonstration Sessions (photographs © UKRI Science and Technology Facilities Council)

The workshop was coupled with an informative exhibition of working systems and support software, with both academic and commercial systems represented. A large part of the second day of the workshop was devoted to an update on a new initiative - the Innovative Manufacturing Initiative (IMI) - followed by parallel discussion groups centred upon the three launch themes of the IMI. The workshop was supplemented by a well-produced set of proceedings edited by Professor James Powell of the University of Salford (these proceedings are still available at £25 from Ruth Tubb, Information Dissemination Group, R1, RAL). The proceedings provide a good overview of leading-edge UK research in this area of engineering.

The first session concentrated upon the origins and background of decision support systems (most people taking Keen & Scott-Morton's (1978) work as the starting point). A social science perspective was introduced via studies of managerial decision-making and the use of psychological profiling in group decision-making, and via the introduction of multi-disciplinary perspectives in design, especially as this relates to new product design. A presentation of the RED (Rigorously Engineered Decisions) project indicated that over-reliance upon classical decision theory may be far too limiting for many real-life engineering decisions. Reports of excursions by engineers into studies of human behaviour and the usefulness of Computer Supported Co-operative Working (CSCW) (from the perspective of the DUCK project) to support collaborative engineering design concluded the first session. My overall impression was that a strong case had been made for more work in this area, that this work would of necessity have to be multi-disciplinary, and that we need detailed studies of micro-decision-making and of decision-making on real-life engineering projects.

The focus then shifted for the afternoon session to a consideration of various approaches to decision-making and an evaluation of their effectiveness in a number of engineering domains. The approaches included case-based reasoning, adaptive search and constraint-based modelling. The description of the POINTER (People Oriented Information Tracking for Engineering) system stressed the importance of integrating various types of (engineering) information and the utility of a distributed system - in spite of the additional problems this can cause! It seems that we have a range of useful techniques available: the challenges lie in scaling-up, providing the support for flexible use of a variety of techniques (matching the technique to the problem), handling the information overload - or converting it to knowledge - and supporting the people who have to deal with the engineering problems, the designs and the decision-making.

The first day concluded with the keynote address by Professor F B Prinz (CMU, moving to Stanford University), presenting a vision of an agile network of (mostly academic) specialist centres in the USA collaborating in the area of distributed design. The specific example, illustrating the move in industry away from highly vertical systems to a distributed organisation with sub-contracted components, was wearable computers: VuMan, VuMan II and Navigator. The discussion following the address drew out the problems of information overload and the slippery nature of design and decision-making, both of which make it difficult to provide appropriate support tools.

Vince Osgood, Head of the Innovative Manufacturing Initiative

The remainder of the workshop was devoted to an update of the Innovative Manufacturing Initiative (IMI) and associated parallel sessions for discussion and briefing opportunities. Three Research Councils - the EPSRC, the ESRC and the BBSRC - together with the Department of Trade & Industry and the Department of the Environment are involved in the Initiative which is to support high quality strategic and applied multidisciplinary research (and related postgraduate training), conducted within a business process framework in response to the need for more innovative manufacturing within the United Kingdom. IMI will support collaborative research geared to the needs of industry. There is a current call for outline proposals in the three launch sectors of the Initiative: Integrated Aerospace Manufacture, Construction as a Manufacturing Process and Responsive Processing. The closing date for receipt of these outlines is 13 January 1995.

There are two more workshops in the series: Virtual Reality and Rapid Prototyping on 26-27 January, and Object Technology in March.

Tony Conway, Computing & Information Systems Dept, Rutherford Appleton Laboratory

RAL Computing and Information Systems Department

On 14 November the Central Computing and Informatics Departments at the Rutherford Appleton Laboratory were merged to form a new Computing and Information Systems Department (CISD).

Creating the new Department has brought together a wide range of IT and computational skills in R&D and service-related disciplines into a single, comprehensive organisation that will increase the breadth and scope of work with customers and partners as the role and status of the Laboratory begin to change during the next year.

The Mission Statement for CISD is: "To provide high quality computational facilities, specialist services and high value IT solutions for customers inside and outside DRAL".

ECN will continue to be published according to the regular schedule, six times per year, targeting its normal readership. The Editorial team also remains unchanged. The new, larger Department will offer more local scope for increasing the breadth of articles and editorial comment. However, we still encourage you, our readers, to provide us with articles on relevant topics and work. Whenever possible, articles produced by our readers will take publication priority over locally produced articles.

Brian Davies, Department Head, Computing & Information Systems Dept

NQS - making Use of Idle Workstations

In October, 1994, the University of Sheffield Academic Computing Services began providing help and support for installations of the Network Queueing System (NQS), as part of the Joint Information Systems Committee (JISC) New Technologies Initiative. This help and support, provided by dedicated staff, is available to all UK Higher Education (HE) sites - you can use the Mailbase mailing lists (see below), or contact me direct.

The Network Queueing System

Users place their work into a queue, as they would with a printout. NQS takes the work out of the queue, and runs it on a suitable machine:

Limits can be placed on the resources that any submitted job can use, such as the amount of CPU time, or memory usage. You can also restrict how many jobs can run at a time on any given machine, allowing you to maintain a usable interactive service.

If Sheffield's experience is typical, most sites will find that initial demand for a system such as NQS is very low, but that once it is installed, utilisation proves surprisingly high.

History Of NQS

In the past, many UK HE sites received a copy of NQS (called 4D/NQS) included with their purchases of IRIX from Silicon Graphics Inc (SGI). However, SGI withdrew support for 4D/NQS, leaving sites such as Sheffield largely dependent on future versions of IRIX being backwards compatible enough to run the existing 4D/NQS binaries.

Faced with this, after performing a survey of practices and opinions at other UK sites, we have taken a leading freely-available version of NQS, and are hoping to enhance it further to meet the future needs of the UK academic community.

Our resulting work will be freely downloadable via anonymous file transfer (ftp) from our site; details are given below.

Installing NQS

As part of our work to support other UK HE sites, we are providing a remote installation service, whereby arrangements are made for one of our staff to install, and configure, NQS at your site via SuperJANET. Please email me at the address below for more details.

Mailing Lists

There are three electronic mailing lists for NQS available via Mailbase:

World Wide Web

Our WWW server includes information and reports about NQS. The URL is:

http://www.shef.ac.uk/uni/projects/nqs/NQS.home.html

Availability

The source code to NQS is freely available, via ftp:

ftp://ftp.shef.ac.uk/pub/uni/projects/nqs

The source code currently works on HP-UX, IRIX, Linux, OSF/1, SunOS 4, Solaris 2 and ULTRIX.

We are interested in hearing of any other platforms on which you would like to see NQS available.

Further Development

We aim to continually improve NQS. Announcements of new releases are posted to the NQS-Announce mailing list. We welcome all suggestions for new features for NQS.

Stuart Herbert, Academic Computing Services, University of Sheffield

Object Oriented Technology

Object Oriented (OO) is a rapidly maturing technology. As a programming paradigm, it is already being extensively used in many engineering fields and real products are emerging. As the benefits (and weaknesses) of the technology become clearer, the OO emphasis is broadening to the complete engineering software development cycle, from the requirements/specification phase through to software maintenance and reuse. This is mainly achieved by exploiting and adapting software development methodologies developed in the software engineering domain.

So far, the main impact of OO has been on the IT dimension of engineering software. Recently, attention has been directed towards the application of OO to the engineering domain itself. Unlike the IT applications of OO, it is still too early to even identify trends, let alone predict the results. One example of this sort of work is the interest in the use of object models of engineering entities and the work integrating OO databases with the STEP product modelling standard. This leads towards the use of such object models to directly support performance prediction and simulation. Another interesting, but longer term, direction is the addition of intelligence to the objects. This promises not only to ensure more robust designs but, in conjunction with Virtual Reality, to revolutionise the design process.

There will be an IT Awareness in Engineering Workshop in March on this topic.

Damian MacRandal, Computing & Information Systems Dept, Rutherford Appleton Laboratory

CORDIS

CORDIS (European Community Research and Development Service) is a database service launched by the European Commission in December 1990, which provides information about its research and technological development (RTD) activities. It is thus possible to gather information on all aspects of this activity from a single database service. Whereas in the past information was available about the various research programmes, it was scattered throughout many different databases and limited in content. CORDIS brought these existing databases together into a comprehensive system.

The CORDIS database originates from the second framework programme, within which the VALUE (Valorisation and Utilisation in Europe) Programme was launched. The overall objective of this Programme was to improve the dissemination and utilisation of research results. As part of the programme the EC began working on CORDIS with the following objective: to disseminate public information on and about all Community RTD activities, for the purpose of enhancing awareness of these activities, assisting interaction and cooperation among individual programmes and their participants, and helping promote co-ordination with similar RTD activities in member states. There are nine separate databases making up the service. The first three became available at the launch in December 1990: RTD-PROGRAMMES, RTD-PROJECTS, RTD-PUBLICATIONS.

The RTD-PROGRAMMES database describes entire research programmes (BRITE, ESPRIT, RACE, TEDIS etc) as well as the work of the Joint Research Centre at Ispra. The coverage period is from 1986 onwards. The records include information on, and can be searched by, the programme name or acronym, subject codes, Commission Directorate-General, etc.

The RTD-PROJECTS database contains descriptions of specific research projects within EC research programmes. This information includes project title, acronym, objectives, general description, duration, contractors involved, contact people, partners etc.

The RTD-PUBLICATIONS database has been substantially available under the name EABS for a number of years. It is concerned with providing information about the publications which arise out of RTD projects. The records include the basic bibliographic details, an abstract, programmes and projects involved, ordering details, etc.

In April 1991 the second group of three databases was launched: RTD-RESULTS, RTD-ACRONYMS, RTD-COMDOCS.

The RTD-RESULTS database contains information on the results of research projects in the fields of science, technology and medicine. There are three types of record: standard records are for results arising from EC and other European research activity; VALUE records are for results arising from the EC's VALUE Programme; and COST records are for results arising from the COST (Co-operation in Science and Technology) Programme. Information contained includes an abstract, commercial applications, contact details, sources of funding support, contributing organisations, etc.

The RTD-ACRONYMS database, as the name suggests, provides information on the various acronyms and abbreviations, such as those mentioned above, used in the area of EC RTD. This database can be used in a variety of ways. The most common search would be to discover more information about a known acronym. Alternatively the user may browse through an alphabetical list, through a subject category listing, or through a listing organised by EC department (Directorates-General).

Comdocs are documents which are sent from the Commission of the EC to the Council of Ministers. They are mainly used to communicate proposals for legislation but are also used to convey more general information to the Council, such as proposals for new RTD programmes or reports of activities in various areas. CORDIS contains records describing all RTD related Comdocs. Also included are relevant SEC documents which are similar to Comdocs but are for internal use only and not available to the public. The records provide the full title of the documents, dates, document numbers, subject codes, etc.

The third group of databases comprises RTD-NEWS, RTD-PARTNERS and RTD-CONTACTS.

RTD-NEWS was launched in December 1991 and carries general news items concerning RTD activities and specific announcements and calls for proposals. This obviates the need to use other sources - either electronic or printed - to monitor opportunities. It also makes the task of providing up-to-date information services much easier than it was previously. It is also possible for individual researchers to acquire the information directly themselves.

The RTD-PARTNERS database was launched in January 1992. As potentially interested researchers are required to work in conjunction with a partner or partners in other member states or industrial sectors, it is vitally important to be able to discover suitable candidates for partnership. This database is intended specifically for that purpose. Researchers are able to place their requirements on the database for potential partners to respond to.

The long awaited ninth database RTD-CONTACTS was launched in May 1994. This provides contacts throughout Europe for the provision of information on EC RTD. Contact details are provided on individuals from the Commission and other European institutions responsible for developing EC policies and the day-to-day management of research programmes. The names of national representatives on programme steering committees, as well as the nominated contacts for EC RTD programmes at national level, will also be included. Other contact points covered will include:

The new database is designed to help users locate contacts in their own area of expertise as well as those who can provide help with administrative matters. It provides all the necessary contact information and will be a valuable aid for participating in EC RTD projects.

CORDIS is available through ECHO, the European Community Host Organisation. ECHO is a very unusual database host in that most of its databases are free of charge to the user. One of its main functions in fact is to provide a testing ground for new databases to ascertain their marketability. The CORDIS service is currently free of charge to all users although this policy is under review. Users have to pay the telecommunications costs involved in each search but there are currently no charges made for the actual use of the CORDIS databases.

The RTD-PARTNERS entry form is now available electronically and can be obtained by sending an e-mail request to CORDIS. The form can be completed electronically and then returned to CORDIS by e-mail for rapid inclusion in the RTD-PARTNERS database. There has been a great deal of interest in using the RTD-PARTNERS database coinciding with the start of FP IV. Approximately 50 new entries have been received each week in the last few months. This means there are now almost 13,000 requests for partnership on the database. By completing the form, your organisation's details, potential programme and research interests will be available to more than 8,000 registered CORDIS users who currently access the database for 200 hours a month.

The process is as follows:

As the new electronic form is currently on trial, CORDIS would welcome any comments or suggestions on ways in which it could be improved.

Sarah Matters, Liaison Officer, UKRHEEO, Brussels

World Transputer Congress '95

Harrogate International Centre North Yorkshire, England 4-6 September, 1995

The Transputer Consortium (TTC) is pleased to announce that the WORLD TRANSPUTER CONGRESS 1995 (WTC '95) will be held on 4-6 September 1995 at the Harrogate International Centre, North Yorkshire, England. WTC '95 is the leading international transputer conference and exhibition and is the third in a series sponsored by and run under the overall management of TTC. WTC '95 is also sponsored by the Commission of the European Union and SGS-THOMSON Microelectronics. The local partner for WTC '95 is the World occam and Transputer User Group (WoTUG).

Susan Hilton, TTC Secretariat, Rutherford Appleton Laboratory

C for Parallelism

C Background

C is becoming an increasingly prevalent language for commercial, scientific and engineering applications, where once COBOL and FORTRAN held a near monopoly. The reasons are various:

(1) Conceptual Integrity, Portability and Efficiency

C was for a long time just a de-facto standard based on the Kernighan and Ritchie book. It possesses the elegance of minimalism, with the result that C is one of the most portable languages about. The complex data structures (lists, trees etc) required by many applications can be created at run time, albeit in a low-level way, using a few simple mechanisms.

And yes, although you can commit very many bad practices in C, the mechanisms by which you do this are still few and elegant!

As a criticism of its low-level nature, C has been called an assembly language. The truth of the matter is that although C can be used at this low level, it is perfectly possible to code at a relatively high level providing the programmer is prepared to consider, with a little bit more care, the things that he/she has taken for granted in the past and which have been the source of many hours of debugging.

The distinction with C, when coding at this high level, is that not all the low-level implications of the programmer's coding are hidden from him/her. The most noticeable of these is parameter passing and the consequences of C's minimalist approach of always passing parameters by value (give it the data object i and C always passes i's value, ie the contents of wherever i is kept in memory). This is territory not normally considered by Fortran programmers, whose language allows the system to generate code that passes data objects' l-values (let us call these addresses for now, but understand that this is an approximation) about without any control or awareness by the programmer. When this happens unexpectedly (say the person who coded the invoked routine makes a mistake) a data value in the invoker's data space may change unexpectedly, and this unnoticed error is then free to infect the rest of the invoker's data space. Over the last 30 years software engineers have sought to localise such errors and, as an aside, have come to frown on concepts such as common blocks, etc.

The programming language C will not pass a data object's address about without being instructed to do this BY THE INVOKER. The invoker must make a deliberate coding action for his/her data object's address to be passed to an invoked routine (ie the use of the & operator). This creates a C pointer object (NOT to be confused with Fortran 90's pointer or Pascal's pointer which are VERY different beasts) whose value (contents) is the address of the programmer's data object. By passing by value that pointer, the data object's address gets passed.

Given that the system has this & operator to obtain a data object's address and can create pointer objects, it seems reasonable to let the programmer define his/her own objects called pointers and to store (using &) and maintain the addresses of other objects. Then all we need is an operator that will, given a pointer, allow the manipulation of the object pointed to. This is C's unary * operator. Given this we can begin to construct lists and trees etc, the basic essentials of modern-day dynamic, complex run-time data objects.
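A minimal sketch of how & and the unary * operator combine to build such a dynamic structure at run time (a singly linked list; all names are invented purely for this example):

#include <stdio.h>
#include <stdlib.h>

/* A self-referential structure: each node holds a pointer whose   */
/* value is the address of the next node (or NULL at the end).     */
struct node {
    int          value;
    struct node *next;
};

int main(void)
{
    struct node *head = NULL;
    struct node *p;
    int v;

    /* Build a three-element list at run time, prepending each node. */
    for (v = 3; v >= 1; v--) {
        struct node *n = malloc(sizeof *n);

        if (n == NULL)
            return 1;
        n->value = v;
        n->next  = head;     /* store the address of the old head   */
        head     = n;
    }

    /* Walk the list by repeatedly dereferencing the next pointers.  */
    for (p = head; p != NULL; p = p->next)
        printf("%d\n", p->value);

    /* Release the nodes. */
    while (head != NULL) {
        p = head->next;
        free(head);
        head = p;
    }
    return 0;
}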

Now those Fortran programmers reading this will consider all this detail low level and assembler-like, and may be put off by it. However, once this basic idea is mastered (that a pointer is an object, which may itself have an address, whose value is the address of some other object) then, because C is minimalist, most of the assembler-like character of C is mastered and, in one stroke, most of the dynamic data structuring facilities required (lists, trees, etc) fall into place. The result is that modern coding practice is available in one uniform, simple development language, and not as a succession of novel, unrelated augmentations to an existing language that was not designed in the first place to support such new concepts.

Finally, as an illustration, how many FORTRAN programmers out there can tell me the value of i after:

      integer i
      ....
      i = 5
      call ff(i, 3)
      i = i + 1

without wanting to know more about ff, more about the ..., etc? A C programmer could confidently tell you the answer.
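For comparison, here is a hedged C sketch of the same situation (the body of ff is invented purely for illustration). Because C passes i by value, ff receives only a copy and cannot alter the caller's i unless the caller deliberately writes ff(&i, 3), so the C programmer can answer "6" with confidence:

#include <stdio.h>

/* ff sees only copies of its arguments, so it cannot alter the    */
/* caller's i; to permit that, the caller would have to pass &i    */
/* and ff would have to accept an int *.                           */
void ff(int a, int b)
{
    a = a + b;               /* changes only the local copy */
}

int main(void)
{
    int i;

    i = 5;
    ff(i, 3);
    i = i + 1;
    printf("%d\n", i);       /* prints 6, whatever ff does */
    return 0;
}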

(2) Availability

C is widely available, historically being the primary language available on UNIX systems, and seemingly now the language of choice for PC code development.

(3) S/W Engineering Capability

C is capable of supporting a certain level of good software engineering practice - certainly more than FORTRAN. Although it may be possible to write C significantly worse than the corresponding FORTRAN, it is also possible and easier (given reasonable exposure to both languages) to write C significantly better.

C provides user-defined data structures, though data can only be local to a function, local to a source file or GLOBALLY available; code can only be local to a source file or globally available.
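As a minimal sketch of these visibility levels (the file and identifier names are invented for this example):

/* counters.c - one source file illustrating C's levels of visibility */

#include <stdio.h>

int global_count = 0;        /* GLOBALLY available: another source file    */
                             /* can name it with "extern int global_count;" */

static int file_count = 0;   /* data local to this source file only        */

static void bump(void)       /* code local to this source file             */
{
    int call_count = 1;      /* data local to this function                */

    file_count += call_count;
    global_count += call_count;
}

int main(void)
{
    bump();
    bump();
    printf("file_count=%d global_count=%d\n", file_count, global_count);
    return 0;
}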

To overcome these crude levels of data and code visibility, and indeed to make available more support for good software engineering practice, C++ was devised. This is an object-oriented extension of C, with C essentially embedded inside. One key feature of object-orientation is data-hiding, making it ABSOLUTELY impossible (in the case of C++, merely more difficult) to affect data which is not relevant to the task at hand.

The result is that C++ is an amalgam of two useful things: the C language and object-orientation. As long as you steer clear of where the two join (the specifications of C++ in this area are a nightmare of over-abundant, complex and near incomprehensible rules), there will be benefit. The danger is that compiler writers interpret these rules differently and indeed, depending on your compiler, these rules will be different, as there is no C++ standard yet (although such a standard is in preparation and nears completion)!

There are incompatible C++ compilers about which support different levels of functionality. Beware of non-portability - in particular some compilers do not support so-called multiple-inheritance.

There is now a standard C called ANSI C which differs a little, but significantly, from the original C of Kernighan and Ritchie (K&R). The availability of a standard has given a great impetus to the language for industry. Where ANSI C differs from K&R C, it generally gives a little more programmer-friendliness and a little less conceptual elegance!
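One visible example of the difference: ANSI C function prototypes carry parameter types, whereas K&R C declared them separately and gave the caller no compile-time checking. The sketch below uses an invented function purely for illustration (a C89 compiler accepts both forms):

#include <stdio.h>

/* K&R (pre-ANSI) style: parameter types are declared after the     */
/* parameter list, and calls are not checked against them.          */
double scale_old(x, factor)
double x;
int    factor;
{
    return x * factor;
}

/* ANSI C style: the prototype carries the types, so a call such    */
/* as scale_new("oops", 2) is diagnosed at compile time.            */
double scale_new(double x, int factor)
{
    return x * factor;
}

int main(void)
{
    printf("%f %f\n", scale_old(1.5, 2), scale_new(1.5, 2));
    return 0;
}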

C and Parallelism

Parallelism is not really an issue in C, as it is simply not addressed at all. C is an imperative language, designed to be executed serially (a statement at a time) with some rules about order of execution both inter-statement (ie flow of control) and intra-statement (eg expression evaluation). How then can C and parallelism be merged?

(1) Parallelising Compiler

There is no implicit parallelism within C, compared to a functional language, say, which does not specify order of execution. However, using data flow and other compiler-related techniques it is possible to keep the functionality of C code the same but to execute certain (sub-)statements in parallel. This is a form of automatic parallelisation. Overall there has been only limited success in parallelising restricted types of C code, as the general case (with pointer use) is too complex, or indeed impossible, to analyse statically. A representative product in this area is ASPAR [1].

(2) Hi-Jack the Class Mechanism

C++ is all about instantiating objects (variables) of different classes (user or system defined datatypes). Operations on objects are performed by special code in the body of a class definition and NOT by the programmer explicitly. In essence code to manipulate objects is hidden, so, for example, the programmer can make use of a matrix multiplication routine defined within a matrix class as follows:

matrix a, b, c;
// initialise b and c
a = b * c;

How the multiplication is performed is NOT up to the application developer but up to the system programmer, who would presumably write the matrix multiplication in parallel were the class matrix to be supported on a parallel machine. Again this does NOT explain how the matrix code is written IN PARALLEL, merely how it may be used. The matrix code is probably written using a proprietary parallel system, and possibly in another language.

Of course not all parallelism available within the application may be captured by object level parallelism. It may still be that subroutines in the main, non-class based code could usefully be executed in parallel.

(3) Library Use

Not only may C libraries themselves be implemented in parallel, but C libraries can also be supplied which provide the building-block operations of parallel programming (eg spawning processes, sharing variables between parallel processes and passing messages between parallel processes).

In the first case the parallelism is transparent to the user. In the second case it most certainly is not; the programmer has to explicitly write a parallel application in C using typical parallel programming techniques and indeed this accounts for the majority of parallel programming in C.
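As a hedged sketch of this second, explicit style, the fragment below uses MPI (one of the de-facto message-passing standards discussed later); the ranks, tag and message content are chosen purely for illustration:

#include <stdio.h>
#include <mpi.h>

/* Minimal explicit message passing: rank 0 sends an integer to    */
/* rank 1, which receives and prints it. Run with at least two     */
/* processes, eg: mpirun -np 2 ./ping                              */
int main(int argc, char *argv[])
{
    int rank, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}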

(4) Changing C

There have been very few attempts to extend the C language to encompass parallelism. Not that the idea is flawed necessarily, but what results is NOT standard C code so it can't run unaltered under a conventional C compiler.

One product of note is ParC [2], which adds occam-like constructs to C in an elegant manner.

Most of the attempts to change C have been made using the so-called language extension features of C++, which, though powerful, are limited syntactically (ie they must use existing C syntax); however, existing operators applied to objects can be given new meanings.

Conclusion

The most fruitful area for development, apart from standardising on good portable libraries for parallelism, would appear to be the development of good C++ class libraries - both at the application level with transparent parallelism, and at the development level with efficient and well designed high-level classes that encompass parallel processing across a range of machine architectures. There is much talk of unification of the various parallel programming models, so that application developers will not have to make a choice that ultimately limits them to one type of parallel architecture.

As it stands there are good de-facto standards for parallel programming using message passing such as PVM [3] and MPI [4]. The application programmer interfaces for parallel programming using shared memory tend to be more proprietary. Until automatic parallelising systems for C become as mature or effective as those for FORTRAN, these libraries represent the state of the art, as low level as it may be, in programming C in parallel.

The good news is that the parallelism in such C codes (particularly with message passing) is so explicit, that there should be no difficulty in running these on any future parallel machine. A large amount of software has been written under PVM, for instance, and this will be supported for many years into the future, no matter how weird and wonderful or prosaic the parallel architectures in the years ahead may be.

References:

[1] K Ikudome, G C Fox, A Kolawa, J Flower - An Automatic Symbolic Parallelisation System for Distributed Memory Parallel Computers

[2] Par.C System - User's Manual and Library Reference 4th Edition 1989, Parsec Developments, The Netherlands

[3] Geist, Beguelin, Dongarra, Jiang, Manchek, Sunderam - PVM 3 user's guide and reference manual (1994)

[4] Message Passing Interface Forum - MPI: A Message Passing Interface Standard; Esprit project P6643 (PPPE) (1994)

Community Clubs

Parallel Processing in Engineering Community Club (PPECC)

WORKSHOP: "Distributed vs Parallel: convergence or divergence?": Call for Papers

The Parallel Processing in Engineering Community Club (PPECC) is organising a Workshop on "Distributed vs Parallel: convergence or divergence?" to be held at The Cosener's House, Abingdon, on 14-15 March 1995. The Workshop will be chaired by Professor Peter Dew (University of Leeds).

Scope

There is a widespread acceptance that high performance computing environments are needed to solve a wide range of problems arising in science and engineering. Typical applications can be classified as:

In recent years there have been significant advances in the technology to handle these applications. Parallel computers are now much more widely available and with the emergence of high speed networks (eg FDDI and ATM) these computers can be connected into high performance distributed computing environments. This brings about a convergence of the issues being addressed by the Distributed and Parallel Computing communities. The widespread use of message passing (eg PVM and MPI) provides a common software platform where applications can be developed on a network of workstations and PCs (clusters) and ported to massively parallel processors (MPP) for production runs. The prospect of a "seamless" integration offering transparent and scalable migration of applications from clusters through to MPPs is becoming very attractive to users.

Differences do remain, however, at least in use. Distributed computing has traditionally focused on distributed access, local or geographically remote, typically with a single locus of control at any one time (eg remote procedure call) rather than concurrent activity on multiple nodes. Fault tolerance and the management of systems with multiple administrative domains have been a particular concern of distributed systems. Parallel computing has focused on concurrent processing and emphasised the goal of scalability to high levels of performance.

Perceived difficulties in developing applications, and in the limited range and level of available software tools, have also limited exploitation of these technologies to date.

How does the user respond to the changes in parallel and distributed computing? How much does the application programmer need to be aware of the differences? What differences remain? Which will persist? Are they differences of kind? Or of degree? Or of concern? What software tools would further assist the exploitation of either/both?

The objectives of the Workshop are:

Topics of interest include (but are not limited to):

Workshop Format

The programme will consist of selected presentations from submitted position papers, a small number of invited presentations from keynote speakers, discussion sessions in subgroups, and a closing plenary session. The number of places is limited (about 45) to foster active participation by all. A booklet of position papers will be distributed three weeks before the Workshop. A report will be produced containing the conclusions of the discussion sessions.

The Workshop will start mid-morning on 14 March and end mid-afternoon on 15 March. The fee for attending the Workshop will be £95, including accommodation at Cosener's House for the night of 14 March and all meals. Accommodation (bed and breakfast) for the night of 13 March preceding the Workshop will be available at a cost of £30.

Position Papers

Participation in the Workshop is by position paper. Papers of about 2-6 pages addressing one or more of the topics above, or related issues, are invited.

Brian Henderson, Computing & Information Systems Dept, Rutherford Appleton Laboratory

SEMINAR: Embedded Parallel Processing

23rd February 1995, Rutherford Appleton Laboratory

The PPECC is organising a one-day seminar on Embedded Parallel Processing. This event will be chaired by Prof G W Irwin, Queens University, Belfast.

As embedded systems become more and more advanced - with increasing processor power and reducing processor costs - so do their processing demands, often against real-time or other constraints (eg cost, size, power consumption).

Parallel processing is opening new opportunities, both in enhancing the capabilities of existing systems (eg functionality or performance), and in developing new applications not otherwise feasible at acceptable cost.

The seminar will report on some of the latest work at the forefront of the exploitation of parallel processing in embedded systems research. The programme includes talks covering three prominent areas - control, signal processing and image processing - and a talk on the issues in porting real applications to embedded systems.

There will be opportunities for discussion following each talk and extended discussion sessions at the end. The seminar will be of particular interest to all engineers who are using, or considering using, parallel processing in their embedded systems research.

Brian Henderson, Secretary, PPECC

Multi-Processor SPARC-station added to PEC

The Parallel Evaluation Centre (PEC) has purchased a 2-processor Sun SPARCstation Model 10 to add to its range of multi-processing/multiprocessor systems.

The operating system is Solaris 1.1, a pre-emptive operating system supporting multi-processing and multi-processor machines. A programming library is provided to make thread creation available to programmers, allowing them to write multi-threaded applications.
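As a hedged illustration of multi-threaded programming on such a machine (sketched here with POSIX threads; the thread library actually supplied with the system may use a different but analogous API), creating two threads that can run in parallel on the two processors looks roughly like this:

#include <stdio.h>
#include <pthread.h>

/* Each thread runs this function; on a 2-processor machine the    */
/* two threads can execute truly concurrently.                     */
static void *worker(void *arg)
{
    long id = (long)arg;

    printf("thread %ld running\n", id);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);

    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}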

The 128 Mbytes of main memory of the system can be accessed by each processor and we can therefore use this machine as a small shared memory system.

Versions of parallel support libraries PVM and P4 are available which seek to optimise use of the shared memory mechanisms available for applications.

The machine specification is as follows: 2-processor system, superscalar SPARC processors, 50 MHz clock speed, 36 KB of on-chip cache (20 KB instruction, 16 KB data), with 128 Mbytes of memory.

Contact the PPECC Secretary for more details on this and other machines in the PEC.

Questions, Questions, Questions

In addition to the kit available for evaluation at the Parallel Evaluation Centre (PEC) at RAL, an Advisory Service is also offered for the Community Club. This service will seek to provide answers for queries ranging from detailed points about particular systems through to wider advice on the choice of parallel hardware and software (eg for a grant application) and on strategic considerations (eg portability).

We shall be pleased to address your questions either by telephone or e-mail (preferred for detailed points) or by visit appointment to the Centre. Four specialists are available with particular expertise in the following areas:

Chris Wadsworth:
Parallel computing strategy, Programming methodologies, Portable parallel software, Parallelising serial software, Cost-effective parallel computing.
Simon Dobson:
Parallel sharing, Novel languages, Small-scale parallelism, Windows NT.
David Johnston:
Transputer systems, Transputer assembly language, Embedded Communication systems, Parallel Fortran.
Brian Henderson:
Transputer systems, i860 and C40 Parastations, ANSI 'C', Parallel Fortran, Performance evaluation.

News from the CFD Community Club (CFDCC)

CFD in Ship and Yacht Design - A Review

On 16 November the CFDCC held a seminar on the use of CFD in Ship and Yacht Design. The venue was the Department of Marine Technology at the University of Newcastle, and Peter Bettess of the Department chaired the meeting. Talks were given on a variety of problems which arise in the design of marine vehicles and the ways in which techniques of CFD can be used to provide insight into these problems.

The day opened with a talk from Paul Gallagher of W S Atkins on Developments of CFD for Seakeeping Calculations. After a brief historical section on W S Atkins' use of commercial software, he discussed the in-house TSUNAMI code, a mesh adaptive application of 3-D CFD to wave resistance, seakeeping and fluid loading, showing good agreement with other published work. Next Stephen Turnock from Southampton University spoke on Ship Rudder-Propeller-Hull Interaction, a topic of importance in understanding manoeuvring, coursekeeping, resistance and propulsion of ships. He showed results from the Southampton experimental programme and from his Surface Panel method which suggested that the method was well suited to analysing the interaction and producing a design tool.

The last talk before lunch was from Steve Fiddes of Bristol University who gave a lavishly illustrated presentation on an Aero-Structural Model for Yacht Sails, including some work on designing sails for America's Cup yachts. Here vortex-lattice methods were the basic underlying technique and Steve mentioned work which had been done in coupling the fluid solution to structural analysis.

After lunch Steve Watson (DRA Haslar) spoke on the state-of-the-art in CFD for Ship Design, in particular the use made by DRA of various codes. Mike Graham (Imperial College) followed this with a discussion of the use of Vortex Methods to model Separated Flows on Ship Hulls. Vortex shedding is both a problem as a source of resistance to forward motion of a ship and an advantage in damping unwanted motions such as roll. Thus many ship designs include features to encourage the formation of vortices during roll and CFD can now analyse these.

Ted Glover, formerly of the Department of Marine Technology at Newcastle, gave an account of the development of the theory of propeller flow, from Prandtl theory through lifting surface design using vortex-lattice methods to the current practice of surface panel solutions. Now RANS solvers are being applied to propeller boundary flows to predict the onset of cavitation. To finish the day Grant Hearn (Newcastle) lectured on CFD in Conceptual and Detailed Design of Floating Structures, illustrating the choices and compromises to be made between complexity of analysis and speed of results. He concluded that the best approach was to use a database of simple flow cases to produce analytic parameterised models and to decompose a design into a superposition of these basic flows.

The overall message of the day was that Ship and Yacht design was an area where the simpler approaches to CFD such as panel methods and vortex lattices still have an important role to play, although the problems being presented by designers are now approaching the point where more complex solutions are needed.

John Ashby

NAFEMS - A new CFD Initiative

Concern is being voiced that CFD is being misused or mis-applied. The concern is that some CFD practitioners may lack the training to recognise the limitations of CFD code.

NAFEMS has for many years defined training standards for new users of finite element (FE) structural analysis codes. Their approach involves attendance at formal training courses, followed by work under an experienced user's supervision starting with benchmarks and moving on to problems of increasing complexity.

FE structural analysis is now a mature technology. CFD involves the solution of non-linear equations which are inherently more complex, so the need for benchmarks and training for use of CFD codes should be at least as great as for structural analysis.

Over the years, NAFEMS has funded a number of standard benchmarks for FE codes and a range of acclaimed text books. Through its quarterly magazine, BENCHmark, NAFEMS keeps the Engineering Analysis community informed of the latest developments, trends and issues in the marketplace.

NAFEMS believes that it is ideally placed to play an important role in delivering the same benefits to the CFD community. NAFEMS is thus actively seeking to identify individuals and organisations to form a NAFEMS CFD Working Group. Interested parties should contact the author.

Tom Kenny, NAFEMS, NEL Technology Park, East Kilbride

News from the Visualization Community Club (VCC)

Visual Systems Group

The Visualization Group at DRAL has been responsible until now for organising the EASE Visualization Community Club, in addition to its other commitments.

As part of the reorganisation of the Central Computing and Informatics Departments at DRAL, the Graphics Group and the Visualization Group have combined to form the Visual Systems Group, which will take responsibility for the Visualization Community Club.

The new combined group will have a wider range of expertise, potentially available to the Community Club:

Julian Gallop, Head, Visual Systems Group

Intelligent Techniques for Data Mining

The third Scottish Neural Systems Network (NSYN) event this year was held in Edinburgh on 30 November 1994, and was attended by sixty people, predominantly from industry and commerce. The half-day event covered data mining, and was designed to demonstrate how useful and valuable patterns and relationships can be discovered in data sets using advanced techniques such as Neural Networks, Case Based Reasoning and Genetic Algorithms.

The event was unique in covering a wide variety of techniques and examples of where these techniques have been successfully applied. The implementation approaches discussed ranged from using off-the-shelf Neural Network and Case Based Reasoning packages to software tools developed specifically for data mining. Importantly, an end-user's perspective was also presented, which drew on experiences gathered from implementing a targeted direct mail campaign using a neural network approach.

Other application areas discussed included modelling customer behaviour, database enhancement, customer segmentation, retail modelling, sales analysis and data visualization. The message from the event was that Neural Networks, Case Based Reasoning and Genetic Algorithms all have an important role to play in data mining. Furthermore, there is a steadily increasing number of successful applications that offer significant advantages over conventional methods for extracting high value information from data sets. This information can be used to increase business efficiency, identify cost savings and increase profitability. However, speakers emphasised the crucial role of data quality and data pre-processing in the successful development of these applications. Valuable techniques mentioned here included simple eyeballing and the use of visualization tools for large data sets.

NSYN (pronounced ensign) has been established with the support of Scottish Enterprise to empower organisations in Scotland to successfully exploit neural computing and related intelligent computing technologies. NSYN provides a Scottish forum for managers, IT professionals, engineers and researchers with an interest in the practical application of neural networks and intelligent computing. The group holds meetings at quarterly intervals, with additional specialist workshops, seminars and demonstrations being held in direct response to members' needs.

NSYN caters not only for those who just wish to learn more about the capabilities of the technology, but also for those actively involved in practical implementations and advanced research. In addition, the group encourages the development of successful applications, fosters collaborative ventures, and keeps members informed of new developments.

Membership is open to all organisations and individuals based in Scotland with a professional interest in neural networks and related technologies.

Jos Trehern, NSYN Technology Transfer Centre, Edinburgh University

Conferences and Meetings Notices
