Informatics Department: RAL Annual Report Excerpts


Engineering Board: Annual Report Excerpt 1986

RAL's Engineering Board programme broadly follows the pattern established in recent years, and is dominated by the work supporting the Information Technology (IT) Directorate. RAL has wide responsibilities in the Alvey programme for supporting and managing its computing network infrastructure and for organising and coordinating its research programmes in software engineering, intelligent knowledge-based systems and the man-machine interface.

The scale of support for the Board's computing facilities is similar to last year, but there are important structural and management changes. The interactive computing facility, single user systems and applications software programmes are now merged into the Engineering Computing Facility under a small management team.

RAL continues to play a significant role providing services to academic designers of microelectronic devices through its computer aided design, electron beam lithography and silicon brokerage facilities.

The Laboratory has continued to support UK radio communications through the Departmental Users Programme.

IT DEVELOPMENTS

Transputers and Occam

DCS, the Engineering Board's Distributed Computing Systems specially promoted programme, ran from 1977 to 1984. It created a strong UK research community interested in the theory of parallel computing and in the development of notations and techniques for specifying and verifying such systems. An important theme of DCS was loosely coupled distributed systems, where many processors could be harnessed to tackle a specific problem in a flexible way. Given the move to large-scale integration, machines could be made up of many self-contained processors - each with its own memory - and could become more powerful, more reliable and cheaper than conventional computers.

To use such a machine effectively, it must be possible to program it so that processors can share out the work by communicating and synchronising with each other. The system must also be able to cope with the malfunctioning of individual processors. If all the processors are identical, it should be possible to reconfigure the system to provide greater fault tolerance.

In 1978, Hoare of Oxford University proposed a model of computation called CSP - Communicating Sequential Processes, which concentrated on input, output and concurrency as the basic primitives. Messages passed between processes were the basic method of communication and these were synchronised so that the sender waits until the message is received.

From this, Inmos Ltd developed the Transputer, a microcomputer designed for building high-performance computer systems, and Occam, a simple and effective programming language which encapsulates Hoare's CSP theory. The Transputer is designed so that it can carry out a set of concurrent processes: special instructions share the processor time between the concurrent processes and perform interprocess communication. Also, its external behaviour corresponds to a process, so that Transputers can be linked by interprocess communication in a way similar to communication inside an individual Transputer.

Fig. 2.1. The Transputer from Inmos is designed for concurrent operation.


Occam defines the computation to be performed as a collection of concurrent processes communicating with each other through channels. An Occam program can be executed by a single Transputer, a small network of Transputers or a much larger network. As a collection of processes is itself a process, an application can be defined hierarchically with a manageable set of processes being described at each level.
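The rendezvous discipline described above - a sender blocks until its message has been taken - can be illustrated in a modern language. The sketch below, in Python rather than Occam, builds a hypothetical Channel class approximating CSP-style synchronous communication between two concurrent processes; it is an illustration of the idea, not the Transputer's hardware mechanism.

```python
import threading

class Channel:
    """A CSP-style synchronous channel: send blocks until a receiver
    has taken the message, mirroring Occam's rendezvous semantics."""
    def __init__(self):
        self._lock = threading.Lock()      # one sender at a time
        self._item = None
        self._full = threading.Semaphore(0)   # signalled when a value is waiting
        self._taken = threading.Semaphore(0)  # signalled when the receiver has it

    def send(self, value):
        with self._lock:
            self._item = value
            self._full.release()   # hand the value over
            self._taken.acquire()  # block until the receiver takes it

    def receive(self):
        self._full.acquire()       # wait for a sender
        value = self._item
        self._taken.release()      # release the blocked sender
        return value

def producer(chan, n):
    for i in range(n):
        chan.send(i * i)

def consumer(chan, n, out):
    for _ in range(n):
        out.append(chan.receive())

chan = Channel()
results = []
t1 = threading.Thread(target=producer, args=(chan, 4))
t2 = threading.Thread(target=consumer, args=(chan, 4, results))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [0, 1, 4, 9]
```

Because a collection of such processes is itself a process, larger networks can be composed hierarchically in just the way the Occam model describes.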

Transformation and Verification of Occam

RAL is providing support for an Alvey-funded research project, The Transformation and Verification of Occam Programs, undertaken jointly by Oxford University and Inmos Ltd. As Occam is derived from Hoare's CSP, it is possible to define a set of algebraic laws relating Occam programs which can be used as the basis of an automated transformation system. Transformations of the original program can be defined which guarantee that the new program has the same meaning as the original. This can be used, firstly, to improve the efficiency of a program; secondly, to show that two programs are equivalent; and, finally, to transform a program into a restricted syntax for VLSI implementation.

A prototype system has been written in the language Edinburgh Standard ML, which is able to transform Occam programs to a normal form.
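The flavour of such meaning-preserving transformation can be sketched with a toy term rewriter. The fragment below (Python, with a hypothetical tuple representation of processes, not the project's ML system) applies two laws that do hold for Occam-like sequencing - SKIP is a unit of SEQ, and SEQ is associative - to bring a term into a flat form.

```python
# Processes are modelled as nested tuples: ('SKIP',), ('assign', var, expr),
# or ('SEQ', p1, p2, ...). Two illustrative laws:
#   SEQ(P, SKIP) = P                  -- SKIP is a unit of SEQ
#   SEQ(P, SEQ(Q, R)) = SEQ(P, Q, R)  -- SEQ is associative

SKIP = ('SKIP',)

def simplify(proc):
    """Apply the two laws bottom-up, returning a flattened term."""
    if proc[0] != 'SEQ':
        return proc
    parts = []
    for p in proc[1:]:
        p = simplify(p)
        if p == SKIP:
            continue                 # unit law: drop SKIP
        if p[0] == 'SEQ':
            parts.extend(p[1:])      # associativity law: flatten
        else:
            parts.append(p)
    if not parts:
        return SKIP
    if len(parts) == 1:
        return parts[0]
    return ('SEQ', *parts)

prog = ('SEQ', ('assign', 'x', 1),
               ('SEQ', SKIP, ('assign', 'y', 2)))
print(simplify(prog))  # ('SEQ', ('assign', 'x', 1), ('assign', 'y', 2))
```

Since each rewrite is justified by a law, the output is guaranteed to have the same meaning as the input - the essential property the transformation system relies on.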

Transputer Questionnaire

The Computing Science sub-committee of the Engineering Board has started to receive requests for Transputer systems on grant applications. It advised the Computing Facilities Committee - CFC - that dealing with these requests in an uncoordinated way could lead to ineffective use of resources and possible wasteful duplication. As a result, CFC set up a working party to look at the problem, with RAL coordinating the work.

To gauge the interest in Transputers in the Engineering Board area, a questionnaire was sent out to the research community and 130 replies were received from 54 universities and polytechnics. Of these, 34 already had a Transputer system for development purposes. Proposed uses included the anticipated ones of image processing, simulation, modelling, on-line control and signal processing, where the emphasis was on interactive or real-time use that is beyond the power of current computer systems. Some more unusual uses included molecular modelling, speech recognition, databases and robotics.

The working party concluded that there is a strong UK interest in engineering applications of Transputers and is preparing a case for establishing a coordinated programme to harness the academic manpower effectively.

Postscript

One of the earliest uses of computers was the production and output of documents. Initially this was just a matter of inputting and displaying a text file. The major advantage was that the file could be edited and reprinted quickly, saving the enormous amount of retyping or physical cutting and pasting needed to create revised documents.

Soon systems became more complex, allowing most of the housekeeping associated with a document to be automated. Text could be adjusted in paragraphs to provide the optimum layout, with or without right justification. Chapter headings, page numbering, cross-references, contents pages etc were all produced automatically. Modern document composition facilities define the document as text interspersed with commands in GML - Generalised Markup Language - which define the formatting to be applied to the text.

Figure 2.2. Postscript examples


While the output printer remains a mono-spaced device capable of printing in only one character size, the Generalised Markup Language can be simple, particularly if it is not possible to mix text and graphics. A string of characters output to the device, interspersed with a few control characters, is often sufficient. The task becomes much more demanding when the output device is capable of producing pages using many fonts at various sizes and orientations with embedded pictures. Many of the laser printers, ink jet printers and phototypesetters on the market today are capable of producing output of this quality and, consequently, there is a need for a language or protocol to drive them. This requires the Generalised Markup Language to be extended to include, for example, the ability to change font size.
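The idea of text interspersed with formatting commands can be shown with a toy formatter. The sketch below uses a hypothetical GML-like mini-dialect (the `:h.` and `:p.` tags and the `format_doc` function are inventions for illustration, not real GML): tagged lines control layout, while plain lines are filled into the current paragraph.

```python
# A toy formatter for a GML-like markup (hypothetical mini-dialect):
# lines starting ':h.' become centred headings, ':p.' starts a new
# paragraph, and plain lines are filled into the current paragraph.

WIDTH = 40

def format_doc(source):
    out, para = [], []
    def flush():
        if para:
            out.append(' '.join(para))
            para.clear()
    for line in source.splitlines():
        if line.startswith(':h.'):
            flush()
            out.append(line[3:].strip().upper().center(WIDTH).rstrip())
        elif line.startswith(':p.'):
            flush()
            rest = line[3:].strip()
            if rest:
                para.append(rest)
        else:
            para.append(line.strip())
    flush()
    return '\n'.join(out)

doc = """:h. annual report
:p. The year saw major changes.
More detail followed.
:p. A second paragraph."""
print(format_doc(doc))
```

The markup says what each piece of text is; how a heading or paragraph is rendered is decided by the formatter, which is precisely the separation that makes richer output devices manageable.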

Postscript is a page description language developed by Adobe Systems Inc to describe virtually any page of output; it is a powerful programming language in its own right. Modern laserwriters and phototypesetters can produce pages of high quality text and graphics by taking a Postscript program as the definition of the page and executing the program within the device. This makes it possible for almost identical output to be produced on a wide range of devices.

The missing link is the ability to compose documents interactively. The user should be able to define a document and preview it at his desk before sending it to the printer or phototypesetter. This clearly requires a high quality display on the user's desk and a system that can efficiently execute Postscript programs and display the pages that the program defines.
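Executing a page description means running a small stack-based program and recording the marks it places on the page. The sketch below is a deliberately tiny interpreter for a Postscript-flavoured token stream - a toy subset written in Python for illustration, not real Postscript (it cannot even handle strings containing spaces).

```python
def execute(program):
    """Interpret a tiny Postscript-like token stream (a toy subset:
    numbers, add, mul, dup, exch, moveto, show) and record the
    marks it would place on the page."""
    stack, page = [], []
    ops = {
        'add':  lambda: stack.append(stack.pop() + stack.pop()),
        'mul':  lambda: stack.append(stack.pop() * stack.pop()),
        'dup':  lambda: stack.append(stack[-1]),
        'exch': lambda: stack.extend([stack.pop(), stack.pop()]),
    }
    for token in program.split():
        if token in ops:
            ops[token]()
        elif token == 'moveto':
            y, x = stack.pop(), stack.pop()
            page.append(('moveto', x, y))     # set the current point
        elif token.startswith('(') and token.endswith(')'):
            stack.append(token[1:-1])         # a (string) literal
        elif token == 'show':
            page.append(('show', stack.pop()))  # paint the string
        else:
            stack.append(float(token))        # a numeric literal
    return page

# Place the string (Hello) at (72, 720) - one inch from the left in
# Postscript's 72-units-per-inch coordinate system.
print(execute('36 2 mul 720 moveto (Hello) show'))
# [('moveto', 72.0, 720.0), ('show', 'Hello')]
```

A previewer runs exactly this kind of interpretation, but renders the recorded marks to the screen instead of to paper - which is why it needs both a high quality display and an efficient interpreter.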

Informatics Division is producing such a system. The initial implementation was done on an ICL Perq 2, but it has also been ported to Sun 3, Whitechapel MG-1 and High Level Hardware Orion systems. The latter has a grey-scale rather than black-and-white display, so half-tones can be reproduced exactly.

INTERACTIVE COMPUTING

RAL has provided interactive computing facilities for engineers since 1975. In 1973, the Engineering Board set up a working party to clarify future engineering computing requirements, which resulted in the Rosenbrock Report. This recommended the coordinated provision of an interactive facility based on central facilities and local multi-user systems. At that time, the working party felt that industrial work used design methods which did not fully exploit the opportunities of man-machine interaction.

The facility started as two DEC10 systems at Edinburgh and UMIST followed by the purchase of two small GEC and Prime multi-user systems for evaluation. The number of multi-user systems located in university departments was extended over the next few years until 23 Prime and GEC systems were installed. At the time of the Rosenbrock Report, it was only just possible for good interactive facilities to be made available via multi-user mini systems. The decision to go for a facility based on such systems was adventurous and its success was largely because of the foresight of the Rosenbrock working party.

Figure 2.3. The graph shows how available computer processing power has grown at RAL over the past decade.


The Rosenbrock Report anticipated improvements in single-user systems - or workstations - that would make them a viable, cost-effective alternative to the multi-user systems. The effectiveness of the distributed multi-user systems was dependent on the wide area network, which evolved into JANET. The single-user systems would only be effective if this networking was extended into the campuses so that the individual systems could communicate.

Attempts were made to use single-user workstations in an engineering environment but the cost and performance of these systems did not make them generally useful. By 1980, higher powered and lower cost workstations were beginning to appear with high precision bit-map displays. Consequently, SERC agreed to a coordinated plan to ensure that the academic community made the best use of its manpower. Council selected a policy of creating a common hardware and software base to act as a nucleus for future developments in single-user workstations.

The initial Common Base was the Pascal and Fortran languages running under the UNIX operating system, implemented on a Perq workstation linked locally by Cambridge Rings and nationally by the X25 wide area network. An agreement was reached with ICL to manufacture and market the Perq in the UK. It was decided that other workstations would be added to the same Common Base as products became available.

During the early 1980s, about 200 Perq systems were purchased. Later, Sun 2 and Sun 3 systems were added to the Common Base hardware and, currently, there are about 200 of these with the community. The Common Base software was extended to include algorithmic libraries (NAG) and graphics (GKS).

Sun Deliveries to RAL


The local area network technology was changed from Cambridge Ring to Ethernet as this became the de facto and, later, international standard.

Multi-user systems continued to provide effective interactive facilities until 1986. Processor, disc and memory enhancements have improved the performance of the systems as user expectations increased.

In the last few years, the Alvey initiative has installed an infrastructure based on multi-user systems supplied by GEC and Systime, running the Common Base Software, with additional software appropriate to their Alvey users.

The price/performance of multi-user systems is now beginning to be overtaken by the single-user workstations and the multi-user systems are approaching retirement or a change of function. Today's workstations are 20 times more powerful, have 16 times more memory and twice as much disc space as the original multi-user systems designed to support a community of 60 users.

The environment which used to consist of a multi-user system is being replaced by a set of single-user workstations attached to a local area network connecting the workstations to compute and file server capabilities plus a connection to the wide area network. It is possible that the existing multi-user systems may migrate to a server function thus providing a graceful path to the latest evolutionary step in engineering interactive computing.

In the last decade, the engineering community has evolved from batch use of computing via central and distributed multi-user systems to a modern environment of workstations and servers.

COMMUNICATIONS

Distributed File Systems

The concept of the on-line filestore goes back to the 1960s, with the development of discs large enough to cope with the needs of the user population. Files provide a means of easy access to both programs and data - the alternative was punched cards, paper tape etc. They also allow groups of users to share information easily.

With today's single-user systems, each user has a workstation which provides local computing power and file space. This can cause difficulties when several people are working on a shared project, since there is no single filestore and no single place where programs are run. This leads to file duplication, which wastes disc space and causes problems when copies are altered independently. It also makes the sharing of commonly used tools and databases difficult, as a copy must be kept on each machine.

A network can alleviate the problem, but the traditional network only provides facilities such as file transfer and remote login. The former implies the movement of complete files over a slow link, even when access is required to a small part of the data. Multiple copies are generated, hence increasing the likelihood of multiple versions. Remote login invalidates the concept of workstation ownership, since it implies a multi-user environment.

A better solution is to use a Distributed File System, which allows separate filestores to be joined so that they look like one large system. Everyone receives the same view of this filestore regardless of the machine being used, and uses the same syntax to access any file. This transparency means that there is no need to change existing programs to use the facility, and no need to copy files explicitly. Only those parts of a file actually required are ever transferred. Two of the best known schemes at present are the Newcastle Connection, developed at Newcastle University, and the Network File System - NFS - from Sun Microsystems. Since Sun also supplies one of the machine ranges in the Common Base Programme, NFS has been installed at RAL and its facilities are being exploited both on-site and remotely.

NFS leads to a saving in the number of discs purchased, since not every workstation now requires its own disc. It is possible for all the files that previously needed to be local to the workstation to reside on a remote machine. The removal of the need for a disc unit saves about half the cost of the workstation. Special machines called File Servers hold the filestore for everyone. Smaller workstations fit more easily into offices and it is much easier to handle the file management issues such as backup and archiving. Compared to the earlier terminal/multi-user arrangement, the new system distributes computer power to the user, leaving the filestore central.

The mechanism used to implement NFS is Remote Procedure Call - RPC - which enables sub-programs, rather than whole jobs, to be executed on a remote machine. RPC is sufficiently general that it allows other functions, such as a compute server, to be provided. Users can off-load computationally intensive jobs from the workstation merely by changing the method used to call the subroutine where the computation is done. This fits in well with the type of interactive programs commonly found in the engineering world, characterised by alternating periods of calculation and interaction.
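The essential point - that only the call mechanism changes, not the subroutine - can be sketched as follows. This Python fragment is a modern stand-in, not Sun's RPC: the names are hypothetical, and a separate process (via `multiprocessing`) plays the part of the remote compute server that would really sit across the network.

```python
from multiprocessing import Pool

# The compute-intensive subroutine: identical whether it runs
# locally or on the (simulated) compute server.
def stress_analysis(n):
    return sum(i * i for i in range(n))

def call_local(fn, arg):
    return fn(arg)

def call_remote(fn, arg):
    # Stand-in for an RPC stub: marshal the argument, execute the
    # subroutine in a separate process (playing the remote server),
    # and return the result. The caller's code is otherwise unchanged.
    with Pool(1) as server:
        return server.apply(fn, (arg,))

if __name__ == '__main__':
    local = call_local(stress_analysis, 10_000)
    remote = call_remote(stress_analysis, 10_000)
    print(local == remote)  # True: only the call mechanism changed
```

During the interactive periods the workstation remains fully responsive; during the calculation periods the heavy work proceeds elsewhere, which is exactly the division of labour the paragraph above describes.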

High Speed Networks

Project Unison

Unison is one of two high-speed networking projects funded as part of the Alvey programme. The collaborators on the project are Acorn Computers Ltd, Cambridge University Computer Laboratory, Logica UK Ltd, Loughborough University and RAL. It aims to provide an integrated-services high-speed digital network suitable for connecting the next generation of information technology products, in particular products associated with the office. The network design is based on the use of a pilot ISDN - Integrated Services Digital Network - being constructed by British Telecom.

Each Unison site has a primary rate - 2 Mbps, connection to the network. Work on interfacing to these high-speed links is complete and initial communication between sites has been established. Some preliminary testing with voice data has been carried out.

In a second phase, the links will interface to Cambridge Fast Rings - CFRs - a new version of the Cambridge slotted ring that runs at 60 Mbps. Very high performance is required from this interface and the design uses several linked Transputers to achieve it. Substantial effort has gone into establishing familiarity with Transputer technology and the Occam programming language. Both the Transputer and Occam look very promising for communications and other real-time applications, such as image and voice processing.

COMPUTATIONAL MODELLING

The underlying theme of the programme is the use of computational modelling techniques to increase the productivity of engineering research workers. The main areas covered are finite element analysis, electromagnetic fields, semiconductor modelling and pre- and post-processing, ie general methods for entering engineering data and displaying results.

Semiconductor Modelling

RAL is involved in two related projects in device and process modelling funded by the Alvey Directorate. Its role is to design and implement a flexible software system into which other partners in the collaboration can slot specific modules.

A diagram of the overall structure is shown in Fig 2.4. RAL provides the binding elements - the contents of the boxes on the outside, and the device simulator module, with the other kernels written by university partners. The control program and common data base are almost complete, and work has started on the graphics shell, the device simulator and the integration of the technology modules into the overall structure.

Fig. 2.4. Organisation of the semi-conductor modelling system. RAL provides the binding elements - shown in the boxes on the outside. The program modules shown on the inside are written by RAL and university groups.


The pilot project in device simulation funded by the European Commission was completed in April 1986. A new, larger project funded by the Esprit programme, in which RAL again provides the project management, started immediately afterwards. Its objective is to extend the development of robust and efficient algorithms for simulating the behaviour of three-dimensional semiconductor devices, to incorporate them in a computer code and to compare the results with tests on selected problems. RAL's effort is concentrated on the design and implementation of the project code in cooperation with the other partners. A detailed design specification is in progress, and implementation will start in early 1987.

Data Exchange Project

The CAD*I project, which has been running for two years, is also funded by the Esprit programme. It aims to devise methods for transferring product definition data between different CAD programs, geometrical modelling programs and finite element analysis programs. The file format designs are largely complete and RAL staff are now developing software interfaces for writing and reading them. The project is in contact with the relevant ISO committee, and RAL staff have now joined the appropriate BSI committee.

Three Dimensional Transient Eddy Currents

A joint project with Bath University and Imperial College, London, is developing a package to solve three-dimensional transient eddy-current problems. During the past year the steady-state algorithm was extended to include transient behaviour, and further extensions will be made during the coming, final, year of the project. The package is being tested with a variety of test problems made available through an international series of electromagnetic workshops, the first of which was hosted by RAL in March 1986. This workshop followed the fourth in the series of Eddy Current Seminars, which had 58 attendees from all parts of the world.