
Engineering Computing Newsletter: Issue 43

March 1993

Future Applications of IT in Construction and Transport

An International Conference to be held at Brunel University, 13-15 September 1993

This conference is the culmination of a four-year initiative by the Construction and Environment and Civil Engineering Committees of SERC's Engineering Board. The primary objective of this initiative was to stimulate research into useful applications of information technology in the areas of transport and construction. This led SERC to fund some twenty researchers to explore the area against four major themes deemed to be industrially important:

The studies of these researchers have now been successfully completed and many of the deliverables are being applied in industrial contexts. The conference at Brunel will be the first opportunity for fellow researchers and industrialists to hear and see the fruits of this endeavour. The core of the conference programme will be presentations of these ITA research programmes.

Each paper will focus on two issues:

To complement these core research reports, the conference organisers have invited keynote speakers from Europe, America and Australia to point the way towards future research in this area. This is particularly relevant and timely since SERC is planning a new thrust of IT activity in the general area of Design and Information Systems. This new activity will be under the aegis of the Engineering Board's initiative known as IT in Engineering. SERC's IT Directorate and the Engineering Research Commission are each prepared to make up to £5 million per annum available for research in this area.

The conference will therefore be an opportunity for researchers in IT and other engineering disciplines to get together to discuss possible new programmes of activity likely to attract SERC funding. The hope is that interdisciplinary teams will form with a view to submitting collaborative proposals to SERC, and opportunity will be provided in the programme for such groups to meet formally and informally. These groups will also be able to test out their ideas for future research on the ITA Co-ordinators (Professor James Powell and Dr Roger Day) and the many SERC Committee members who will be attending the conference. Finally, there will be an opportunity for others, not receiving SERC ITA funds, to present their ideas either formally in the main programme or at poster sessions. The cost of the conference has been kept to a minimum with the help of SERC support.

Professor James Powell, Dr Roger Day, ITA Co-ordinators

FORTRAN for the transputer, i860 and C40

I attended the inaugural run of the new course FORTRAN for the Transputer, i860 and C40, held on 27-28 January this year at the Rutherford Appleton Laboratory. The aim of the course was to provide participants with state-of-the-art information about programming in FORTRAN on parallel machines. The morning seminars were supplemented by afternoon practicals to reinforce the material. The exercises were carried out using a software system called the Parallel Virtual Machine (PVM), which allows a network of computers to be viewed as a single parallel machine and has recently emerged as a de-facto standard for this form of distributed supercomputing.

On a general level I found the mixture of lectures and practicals to be a good balance between theory and hands-on experience, although I felt the course as a whole could have done with an extra day's teaching and practicals. This is a very personal observation: not only am I new to the idea of parallel programming and transputers but also to FORTRAN itself. Having said this, I found the atmosphere produced by fellow attendees and the course organisers to be friendly and helpful, so I was not left to sink or swim despite my limited computing experience. The documentation presented was good, and the worked solutions to the exercises given at the practical sessions were extremely helpful, complete and concise.

I came away feeling that I had actually learned a lot about the topic and its sub-topics, and from talking to the other participants I know that they felt the same. I would certainly be interested in any further courses organised by the same team!

Nick Polydorou, Birkbeck College

For further information on PVM, and how to obtain the software free of charge, see the technical article in this edition of the ECN. The course will be run again on 29-30 June.

PVM - The Poor Man's Super Computer

Introduction

In view of the ever-increasing demand for processing power, it is an irony that most of the world's computing resource is wasted. It is hard to put a figure on how much processor profligacy occurs, but for a start most computers are not used outside working hours. Suppose further that each computer is used for half of the working day and that, when in use, it is 10% loaded. Assuming an eight-hour working day, that amounts to 8/24 × 50% × 10%, or under 2% of each machine's capacity actually used: a conservative lower bound of around 98% for the squandered resource. Contrast this with the 30% loss of a water supply system.

While the rewards are high, the challenge of using this untapped potential across a computer network is considerable. Four principal objectives emerge, each addressed in turn below.

Insignificant Impact

The software that accesses the spare CPU cycles must run at low priority, i.e. it must relinquish control whenever the mainstream software makes high compute demands.

It would be verging on the immoral to soak the CPU unbeknown to someone running a consequently sluggish interactive application on the same machine.

Safeguards must exist to protect security and to minimise the performance impact of mopping up CPU cycles, and the UNIX operating system provides these mechanisms. Our experience has been that it is quite possible to make remote use of the CPU power of most machines on our departmental network of Ethernetted SPARCstations. These intrusions have not always been regarded favourably! It is remarkable how quickly some people can identify and kill alien applications running on their machine, so it is worthwhile warning users and asking their permission. Our solution was to set off our investigatory software at midnight, in lieu of established administrative mechanisms.
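
By way of illustration, a cycle-stealing process on UNIX can demote itself before starting work. The fragment below is a minimal C sketch using the standard setpriority() mechanism; it is illustrative only and is not part of PVM:

  #include <stdio.h>
  #include <sys/time.h>
  #include <sys/resource.h>    /* setpriority() and PRIO_PROCESS */

  int main(void)
  {
      /* Request the lowest scheduling priority (nice value 19) so that
         interactive users of the machine barely notice the stolen cycles. */
      if (setpriority(PRIO_PROCESS, 0, 19) == -1) {
          perror("setpriority");
          return 1;
      }

      /* ... long-running compute kernel goes here ... */
      return 0;
  }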

Communication

Herein lies the greatest problem. If communications were infinitely quick, then the challenges of parallel processing would be essentially solved. In the real world a dilemma exists: does the cost of communication outweigh the benefit of spreading the computational load between remote processors? The answer depends on the trade-off between the granularity of the algorithm and the ratio of computing power to communications bandwidth of the hardware. Essentially, the poorer the communications medium, the more work has to be done on each item of data before it is worth communicating. Ethernet is one of the faster network technologies, yet it is now twenty years old. In contrast, processor technology has moved so fast that several generations of improvement have occurred; computing power has simply outpaced communications, and the world awaits an upgrade. FDDI (Fibre Distributed Data Interface), based on fibre optics, is currently expensive yet only one generation ahead of Ethernet in performance.
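
To put rough numbers on the trade-off (the figures are illustrative assumptions, not measurements):

  effective Ethernet throughput     ~ 1 Mbyte/s   (10 Mbit/s less protocol overhead)
  time to ship a 1 Kbyte work item  ~ 1 ms
  conclusion: distribution pays only when each item carries well over
  1 ms of computation, i.e. the algorithm must be coarse-grained
  relative to the network.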

Wide area networks are slower still, and their use severely specialises the nature of any application that exploits such resources.

Transparent Access

The idea of tapping unused computing power is by no means new; it has long been done on an ad-hoc basis by those knowledgeable enough to work out the systems programming for themselves. I recall an eminent professor telling the story of being hauled up by Systems Administration for accumulating two CPU centuries, despite running his jobs non-invasively at the lowest possible priority!

The Parallel Virtual Machine (PVM) from the Oak Ridge National Laboratory in the United States is an attempt to bring a de-facto standard to this area, and to present a simplified network transparent programming model to the user.

Robustness

A remote machine may break or be shut down, and a particular communications link may fail; with a large distributed network the odds of this happening are considerable. Our experience was that applications written under PVM would hang at around 5 pm: staff would finish work for the day and then switch their machines off. Obviously, injecting robustness into such a system will impact on communications efficiency, which raises the question of whether the application, rather than the underlying system, should cope with potential failure.

PVM

PVM is a software package that allows a collection of (possibly) heterogeneous computers, distributed across a collection of (possibly) heterogeneous networks, to be used as a single resource for parallel processing. It consists of a daemon process that runs on every machine employed, together with C and FORTRAN libraries that allow users to access PVM facilities from their application code.

Programming Model

A PVM application consists of one or more instances of one or more components. A component is a conventional serial program. Each instance of each component is a process, which can be regarded as a sub-task of the application.

Processes can be initiated from within PVM, but clearly at least one process must be initiated by the user to set the chain of events in motion.

PVM supports a message passing model. Communication between processes is asynchronous, with a blocking receive. If necessary, PVM will transparently use the machine-independent XDR data representation standard to move data between computers with different data formats. Each process has an implicit send buffer and an implicit receive buffer, so communication is a multi-stage process: initialising the send buffer; packing the data into it; initiating the send; receiving the data; and unpacking it out of the receive buffer. Any necessary data format conversion is done in the packing and unpacking operations.
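
In terms of the abstract syntax used for the API summary later in this article, a single exchange therefore looks something like the sketch below. The component name, message type and the get() unpack step are our own illustrative placeholders - the real C and FORTRAN bindings use typed pack and unpack routines - so the fragment should be read as a sketch, not as any particular binding:

  initsend()                        -- sender: clear the implicit send buffer
  put(RESULTS_ARRAY)                -- pack the array (XDR conversion if required)
  snd("collector", 0, RESULT_MSG)   -- typed message to instance 0 of "collector"

  rcv(RESULT_MSG)                   -- receiver: block until such a message arrives
  get(RESULTS_ARRAY)                -- unpack from the receive buffer (placeholder name)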

All processes in PVM have equal status - though of course it is up to the application how to organise these - and PVM is said to support unstructured crowd computation.

Each process must identify itself to PVM, either explicitly or by being initiated through PVM. Each process is uniquely identifiable by its component name (a string) and its instance number (an integer). Processes may cooperate by communication, synchronisation or by initiating/terminating each other but NOT by sharing data. Many of the capabilities will be recognised by those who have experience with parallel machines, such as networks of transputers or hypercubes.

Any good Application Programmer Interface (or API) should be describable in terms of a programming model. A bad programming model is revealed when the only way to document the system is by describing each subroutine or function call in turn.

PVM sits somewhere between the two extremes. There is an identifiable model, but the programming interface does not meet the ideal of orthogonality, i.e. that it be the minimal set providing full functionality. Some of the PVM primitives are expressible in terms of the others, and it is debatable whether certain other primitives, which may promote bad programming practice, are necessary at all.

With this in mind, the facilities of PVM are discussed briefly below by going through the sets of related functions in the API. These are presented using an ABSTRACT SYNTAX with no binding to a particular language. Not all functions are documented - there are around 40 in total - but they are all closely related to the core set below.

Component Control

INSTANCE_NO = initiate(OBJECT_FILE_NAME, ARCHITECTURE=, MACHINE=)
  • Starts a process (optionally specifying machine and/or architecture).
  • The instance number is returned.
  • By default, the process is placed wherever the system judges best.
INSTANCE_NO = enroll(COMPONENT_NAME)
  • Identifies the calling process to PVM as COMPONENT_NAME.
  • The instance number is returned.
leave()
  • Removes the process from PVM's jurisdiction.
terminate(COMPONENT_NAME, INSTANCE_NO)
  • Terminates the given instance of the component.

Synchronization

barrier(BARRIER_NAME, N)
  • Waits until N calls of barrier with the same name have been received.
ready(SIGNAL_NAME)
  • Sends named signal to wake up processes waiting on it.
waituntil(SIGNAL_NAME)
  • Suspends until signal received.

Communication

initsend()
  • Initialises (clears) the send buffer.
put(ARRAY)
  • Appends an array to the send buffer.
snd(COMPONENT_NAME, INSTANCE_NO, MSG_TYPE)
  • Sends a typed message to a component instance.
  • If INSTANCE_NO is -1 the message is broadcast to all instances.
rcv(MSG_TYPE)
  • Blocks until a message of type MSG_TYPE is received.
  • If MSG_TYPE is -1 any message will be received.
BOOLEAN = probe(MSG_TYPE)
  • Is a message of the appropriate type waiting?
  • A MSG_TYPE of -1 indicates any type.
  • Note: probe is like a non-blocking receive or poll and its use is not recommended!
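
Putting these primitives together, a simple crowd application might be organised as follows. This is a sketch in the same abstract syntax; the component names, message types, loop notation and the get() unpack placeholder are invented for illustration and do not correspond to any particular PVM binding:

  -- master component (started by the user)
  enroll("master")
  for i = 1 to N: initiate("worker")           -- spawn N workers, placement left to PVM
  for i = 1 to N:
      initsend(); put(SUBPROBLEM[i]); snd("worker", i, WORK)
  for i = 1 to N:
      rcv(RESULT); get(PARTIAL)                -- collect results in arrival order
  leave()

  -- worker component
  ME = enroll("worker")
  rcv(WORK); get(SUBPROBLEM)
  ... compute PARTIAL from SUBPROBLEM ...
  initsend(); put(PARTIAL); snd("master", 0, RESULT)
  leave()

A barrier("start", N+1) call in both components would ensure that every worker is up before work is distributed, and terminate() gives the master a way to reclaim workers that are no longer needed.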

Conclusion

PVM is an emerging de-facto standard and a no-risk entry into the world of parallel programming: no specialised hardware need be purchased. It is public domain software, obtainable by ftp from a number of sites including src.doc.ic.ac.uk, and will run on most UNIX machines.

It is hoped that effective PVM ports will be made to dedicated parallel machines with superior inter-processor bandwidths and latencies. Once an application has been parallelised successfully on a network of SUN workstations, say, it can then be ported with no software effort to a platform really capable of supplying scalable power. For the general application, the limited bandwidth of today's networks inhibits the use of PVM. We were able to scale the performance of a ray-tracing program linearly across 8 SPARCstations; by the time we had run the code on 16 SPARCs, however, the speedup had dropped to around 12 (a parallel efficiency of about 75%).

Contact

The Parallel Evaluation Centre at the Rutherford Appleton Laboratory exists to support the academic community by providing information and impartial advice on Parallel Processing. For further information on PVM or any aspect of Parallel Processing, please contact the staff of the Centre through Virginia Jones.

CHEST Update

CHEST Education Offers

Recent mailshots from CHEST have included information about the following products:

Your local CHEST Site Contact has copies of all CHEST mailings, and the mailings are also held on the NISS Bulletin Board in Section D3.

Agreements which are about to be finalised, and details of which will be sent to CHEST Site Contacts in the near future, include:

Negotiations are also under way concerning a possible future agreement for WordStar 7 (DOS) and WordStar for Windows - further details are held in Section D3C4C on NISSBB.

CHEST and NISS have arranged, in collaboration with IUHC, to provide information from suppliers about education prices for hardware. Users can access these details in Section 5 on NISSBB.

CHEST Directory

On-line versions of the CHEST Directory can be obtained from three sources: NISSPAC (which supports Boolean searching and sorting of records by price as well as alphabetically), HENSA/micros (for downloading and printing), and now NISSWAIS (which allows free-text searching across the Directory). NISSPAC, HENSA and NISSWAIS can all be accessed via the NISS Gateway.

Annette Lafford

Report on Introductory School in CFD

On a cold and damp Monday morning, very shortly after Christmas, the eager participants arrived from all over the UK for the CFD Community Club's second summer school, which was held in the pleasant riverside location of the Cosener's House.

The course got off to a good start with Mr S P Fiddes (Bristol) introducing the Navier-Stokes equations and outlining some consequences for their solution procedures. This was followed by Dr B A Younis (City), who explained that the Navier-Stokes equations can often be solved in a simplified form by recognising a special feature of the flow, such as when it is steady, incompressible or inviscid.

Dr D G Rhodes (RMCS) gave an introduction to the nature of turbulence including a live demonstration. This was followed by Dr Younis presenting the different models used to simulate turbulent flows.

Prof D M Causon (Manchester Metropolitan) gave the first lecture on numerical methods for solving the equations of fluid flow, concentrating on explicit methods. This was followed by a lecture on grid generation. Later in the course, in a lecture on shock capturing methods, Prof Causon outlined the disadvantages of the classical difference schemes (such as MacCormack's) and the need for modern shock capturing schemes such as TVD (Total Variation Diminishing) and ENO (Essentially Non-Oscillatory).

The course continued with Mr Fiddes describing implicit time marching methods using the approximate factorisation technique for the linear advection equation in multi-dimensions. Mr Fiddes also gave a lecture on the impact of developments in computer hardware and explained why we could not assume that increasing computer speeds would solve all our problems in the future.

Moving on to incompressible flows, there were two lectures given on pressure-correction schemes by Prof J J McGuirk (Loughborough). The SIMPLE and SIMPLER approaches to pressure-correction were outlined and methods for non-orthogonal grids and extensions for compressible flows were discussed.

A new complementary lecture introduced this year, given by Dr D Bray (RMCS), was about some of the experimental techniques which are used to extract data for comparison with the computational results.

The remaining lectures were given on some current research topics of the speakers. These included the application of time-accurate Euler solvers to simulate an oil-platform blast problem, second-moment closure predictions of jet-on-jet impingement flow, a moving mesh system for unsteady flows and CFD for environmental flows.

Fluid dynamics was also the topic for study after lectures in the evening. In this case the flow of beer from a pint glass was keenly debated in several of the hostelries in Abingdon. As the week progressed, the attendees became progressively more involved in the hands-on practical assignments which form an integral part of the course.

On the final day the now-weary students mustered together enough energy to stand up and give presentations on the results they had achieved during the week and the research they were intending to perform. It was very useful to see some of the problems encountered and discuss possible solutions. This feedback session resulted in some interesting debates and was a lively finish to a successful week.

Debbie Thomas, Informatics

Introductory School in CFD - A Student's View

The week-long Introductory School in CFD, held from 4 to 8 January 1993, was well attended and useful to participants in different ways. The lecture sessions covered a wide spectrum of topics, from fundamental principles to specialist and research applications of CFD. The subject matter was presented lucidly, backed up with lecture notes, and kept the interest of the participants (most of the time!). Personally, I benefited from getting an overall picture of where CFD stands in its application to engineering problems. The practical sessions during the course were very helpful, though a few more computer terminals would have been useful for the rather enthusiastic participants. On the whole the School was a great success, and the excellent lodging and boarding arrangements at Cosener's House were also very much appreciated.

I wish to end this note with a thought that occurred to me during the School. I think we should try not to become subservient to giant, fast computing power and Direct Simulations of the NS Equations. I hope exploration may lead to better theoretical models of the turbulent behaviour of fluids, which is a macroscopic manifestation of inherently chaotic microscopic behaviour at the molecular/atomic level. Can we link this molecular-level behaviour more directly, rather than introducing randomness into NS Equations that are based on classical concepts?

Prof H. V Rao, School of Engineering, University of Huddersfield

NISS Update

NISS Gateway

The new Gateway replaced the old version at the beginning of January 1993. Calls to the Gateway are still made via the "call niss" command; users may note, however, that the new Gateway has a rearranged menu structure.

The number of services accessible via the new Gateway is in the region of one hundred, and includes services in the UK, Europe and the USA. The increase in the number and range of services has led to them being grouped by category type on the Gateway's top-level main menu. The categories include: Library Catalogues (nearly 60 of the UK OPACs); Campus Information Systems (a dozen of the most popular); Bibliographic Services (such as Melvyl, Carl, and several commercial services); Directory Services (including the popular Electronic Yellow Pages); Archive Services (eg HENSA/micros); and General Services (such as the NSFNET gateway, EuroKom and ASK).

The new Gateway runs over a high speed link on Sun servers. Implementation of this new version means that the Gateway can now accommodate a large number of simultaneous users and substantial volumes of traffic. The Gateway has recently been supporting over 30,000 user sessions per month.

Access to the NISS Gateway can currently be made via its NRS name (UK.AC.NISS), at an X.25 address of 000062200000. The Gateway has an IXI address (204334506201), and Internet access is also expected to be available early in 1993.

The NISS Bulletin Board (NISSBB)

NISSBB retains its position as an important information source for the community and was accessed by an average of over 280 users per day during 1992. There is also growing interest in NISSBB from outside the UK, and the service is accessed by users in Hong Kong, Australia, Europe and America.

The range of information on NISSBB continues to expand - recent additions have included details of online discussions on Distance Education.

NISSBB can be accessed via Option A on the NISS Gateway.

NISS Public Access Collections (NISSPAC)

The service now holds over 13,500 records and usage continues to increase - the peak for 1992 was in November, with an average of almost 140 accesses per day.

NISSPAC's move to a dedicated Sun system (from its current, shared, IBM 3090) has reached its final stages; when complete, further improvements to the service will be investigated.

NISSPAC can be accessed via Option B on the NISS Gateway.

NISS Wide Area Information Server (NISSWAIS)

NISSWAIS can be accessed via the NISS Gateway (Option C) and offers an alternative way of obtaining a range of information. Eight separate textbases are currently available in NISSWAIS:

A further textbase is shortly to be added to the service - The Good Software Guide from Absolute Research, which includes software reviews.

Not only does NISSWAIS provide keyword searching of textbases where this was not previously possible (such as NISSBB), but it also allows searching across a number of sources simultaneously. Users can now, for example, select NISSBB and BUBL to be searched simultaneously for a particular keyword, or search the CHEST Directory and HENSA/micros together for a particular name or type of software. In all cases the records located by a search can easily be emailed back to the user, and in the case of searches on HENSA/micros the software itself can be emailed with only one extra keystroke.

Although available as a trial version only during November and December 1992, NISSWAIS attracted an average of over 900 accesses per month.

NISSWAIS is a new service and we would welcome users' comments.

Annette Lafford

Forthcoming Events
